Tuesday, May 18, 2010

[Facts of life]

The best managers I've ever had were the ones who weren't studying for it, wanting it, or maneuvering for it, but who came to it by natural progression or accident.

Wednesday, March 31, 2010

TV channels in Portugal

Editing 10 channels is not the same as editing 1 channel 10 times (the Portuguese channels fall in the latter group).
And almost nothing has arrived for next month (which starts tomorrow), or even for next week, yet everyone is very busy changing, for the umpteenth time, the schedule that has already ended.
I'm getting fantastic training in Portuguese reality, which in fact I had grown very unaccustomed to.
Shame on you, PT! Even the African channels send everything in on time, without mistakes or constant reshuffling!
It's really no surprise that the only stressed people in the company are the ones dealing with the Portuguese channels.

Sunday, March 21, 2010

OV chipcard (part......)

And a few weeks later, instead of receiving my OV-chipcard, I receive a (rude) letter asking me to send them a copy of a "proper" passport and photo. Hey, narrow-minded morons! I already sent the photo and a copy of my passport (EU-valid); I don't need a Dutch one. Breathe in... make another copy of the same and yet another photo!
Nice to know they lose official documents... very reassuring!

Wednesday, March 17, 2010

Rants


What's the logic of starting work at 8:30 (or, for the clever colleagues, at 8 or 7) if we're working with people in PT, which means 10:00 their local time (at best)... oh yeah, so there's always a poor bastard stuck at the end of the day, because the bitches leave on time no matter what's left undone and urgent!

If this continues, I'm going to start having periods, or maternity leaves, and the extra 45-minute breaks (twice a day, on top of the normal breaks) to pump milk too... god knows I'm feeling the PMS already lol

Friday, December 04, 2009

Christmas 2009!


Fair warning: Santa Claus will not be making his rounds this year.
Santa was fired, or rather, his contract was not renewed by the board of directors, and he is now in the dole queue trying to get social-integration funds so he can redeem the sleigh from the pawnshop, or at least pay the back wages of the elves, who are on lay-off.
The elves, in turn, have nothing to do because that same board offshored all production and services to China, thereby subsidizing its own salaries and bonuses, because of course only theirs have to match European levels (even though the company sells nothing, since its usual customers are broke and the Chinese don't buy that sort of crap).

Poor Santa even considered (early) retirement, but was told that that would be nice, wouldn't it, that this "early" business is over and done with, and that (despite having 500 years of contributions paid in) the rise in life expectancy means he still has a few centuries to go; and that, after all, he isn't entitled to social assistance because he owns assets (a house to sleep in), so he'd better apply for a temporary opening in a Chinese toy shop at Freeport, where of course he has to pay his own transport there and back every day, but at least they still give him health insurance that covers part of the cost of aspirin. The poor fellow even tried to contact the board of directors, offering to work for half the money... but who knows why... they all changed their phone numbers, coincidentally, at the same time.

The elves, not entitled to unemployment benefits (lay-off), are bombarded every minute by aggressive (tele)marketing campaigns selling easy, instant credit, because they need to get (even deeper) into debt to buy those very same Chinese toys, since the (Chinese) economy needs stimulating... They have banded together into what is already known as the "North Pole Gang", dedicating themselves to raiding the most unprotected chimneys, and winning the business-innovation award in the process.
The reindeer were placed in a pharmaceutical lab to test vaccines (hence Rudolph's red nose) to pay off the bank loan used to buy the transgenic straw (you didn't think they fly on natural straw, did you?).

Mrs. Claus, for her part, stayed home with the flu and a broom in hand, barring the social workers, and the creditors who want to appraise the estate, from entering their North Pole chalet (a family inheritance). With global warming the house itself can't be sold, as it sits in a risk zone and its habitation licence has already been revoked... and that's even if they could find a banker who wanted a chalet for hunting polar bears and was willing to pay enough for them to rent a studio flat somewhere in Charneca da Caparica, or perhaps a little tent in an Inatel campsite... (and so on)


Wednesday, November 25, 2009

The TomTom "bingo" meetings lol

Just perfect!
I'll never be able to attend a SWE meeting without having this on my mind.
Taken from here


Tuesday, October 06, 2009

Very good article on Agile...

Very good article; it sums up my feelings about development and testing... too much bullshit surrounding it. I had to repost it here... this should surely gain me tons of friends in the bullshit community lol.


http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html


And don't even get me started on some of the courses/trainings and testing certifications going around.

Monday, October 05, 2009

If this keeps up...

This blog will end up being renamed... João in the Land of Tralala

Thursday, September 03, 2009

TomTom Reveals New TomTom GO x50 LIVE Series

I tested that PND (as part of the test team).
"Today, TomTom announces its new flagship series TomTom GO x50 LIVE. The three portable navigation devices; TomTom GO 950 LIVE, TomTom GO 750 LIVE and TomTom GO 550 LIVE"




This announcement is no secret: the x50 was already being mentioned on several tech/GPS/navigation forums before today's official one.

Wednesday, September 02, 2009

Testing vs. Checking

Dotting the i's and crossing the t's about testing.
A good article making some good points.
I'm not sure I agree with the specifications part... one needs at least some pointers (user stories, business needs, ...), if not to test, then to design a good, comprehensive set of test cases. But that differs per product, organization, ...
A good article that could have gone further: developers/programmers check (but normally don't test).
This is a common confusion in testing: hiring developers/programmers to do "testing", or to set up "testing". It is, at the very least (and to keep it very simple), a different mindset.
That said, sometimes checking is good enough for some business needs.

Testing vs. Checking: "This posting is an expansion of a lightning talk that I gave at Agile 2009. Many thanks to Arlo Belshee and James Shore for providing the platform. Many thanks also to the programmers and testers at the session for the volume of support that the talk and the idea received. Special thanks to Joe (J.B.) Rainsberger. Spread the meme!

There is confusion in the software development business over a distinction between testing and checking. I will now attempt to make the distinction clearer.

Checking Is Confirmation

Checking is something that we do with the motivation of confirming existing beliefs. Checking is a process of confirmation, verification, and validation. When we already believe something to be true, we verify our belief by checking. We check when we've made a change to the code and we want to make sure that everything that worked before still works. When we have an assumption that's important, we check to make sure the assumption holds. Excellent programmers do a lot of checking as they write and modify their code, creating automated routines that they run frequently to check to make sure that the code hasn't broken. Checking is focused on making sure that the program doesn't fail.
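The "automated routines" described above can be sketched in a few lines of Python. This is only an illustration of the shape of a check; the `add` function and its expected values are made up, not taken from the article:

```python
# A minimal "check": a routine that confirms existing beliefs about the code.
# The function under check and its expected values are hypothetical.

def add(a, b):
    """Add two integers."""
    return a + b

def check_add():
    # Each assertion is machine-decidable: it either passes or it fails.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
    return True

# Re-run after every change to confirm that nothing previously working broke.
print(check_add())  # True: every existing belief was confirmed
```

The point of the sketch is the motivation, not the code: each assertion encodes something we already believe, and running the routine merely confirms it.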

Testing Is Exploration and Learning

Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn't anticipated, we're testing. We're testing when we're trying to find out about the extents and limitations of the product and its design, and when we're largely driven by questions that haven't been answered or even asked before. As James Bach and I say in our Rapid Software Testing classes, testing is focused on 'learning sufficiently everything that matters about how the program works and about how it might not work.'

Checks Are Machine-Decidable; Tests Require Sapience

A check provides a binary result—true or false, yes or no. Checking is all about asking and answering the question 'Does this assertion pass or fail?' Such simple assertions tend to be machine-decidable and are, in and of themselves, value-neutral.

A test has an open-ended result. Testing is about asking and answering the question 'Is there a problem here?' That kind of decision requires the application of many human observations combined with many value judgements.

When a check passes, we don't know whether the program works; we only know that it's still working within the scope of our expectations. The program might have serious problems, even though the check passes. To paraphrase Dijkstra, 'checking can prove the presence of bugs, but not their absence.' Machines can recognize inconsistencies and problems that they have been programmed to recognize, but not new ones. Testing doesn't tell us whether the program works either—certainty on such questions isn't available—but testing may provide the basis of a strong inference addressing the question 'problem or no problem?'

Testing is, in part, the process of finding out whether our checks have been good enough. When we find a problem through testing, one reasonable response is to write one or more checks to make sure that that particular problem doesn't crop up again.

Whether we automate the process or not, if we could express our question such that a machine could ask and answer it via an assertion, it's almost certainly checking. If it requires a human, it's a sapient process, and is far more likely to be testing. In James Bach's seminal blog entry on sapient processes, he says, 'My business is software testing. I have heard many people say they are in my business, too. Sometimes, when these people talk about automating tests, I think they probably aren’t in my business, after all. They couldn’t be, because what I think I’m doing is very hard to automate in any meaningful way. So I wonder... what the heck are they automating?' I have an answer: they're automating checks.

When we talk about 'tests' at any level in which we delegate the pass or fail decision to the machine, we're talking about automated checks. I propose, therefore, that those things that we usually call 'unit tests' be called 'unit checks'. By the same token, I propose that automated acceptance 'tests' (of the kind Ron Jeffries refers to in his blog post on automating story 'tests') become known as automated acceptance checks. These proposals appeared to galvanize a group of skilled programmers and testers in a workshop at Agile 2009, something about which I'll have more to say in a later blog post.

Testing Is Not Quality Assurance, But Checking Might Be

You can assure the quality of something over which you have control; that is, you can provide some level of assurance to some degree that it fulfills some requirement, and you can accept responsibility if it does not fulfill that requirement. If you don't have authority to change something, you can't assure its quality, although you can evaluate it and report on what you've found. (See pages 6 and 7 of this paper, in which Cem Kaner explains the distinction between testing and quality assurance and cites Johanna Rothman's excellent set of questions that help to make the distinction.) Testing is not quality assurance, but acts in service to it; we supply information to programmers and managers who have the authority to make decisions about the project.

Checking, when done by a programmer, is mostly a quality assurance practice. When a programmer writes code, he checks his work. He might do this by running it directly and observing the results, or observing the behaviour of the code under the debugger, but often he writes a set of routines that exercise the code and perform some assertions on it. We call these unit 'tests', but they're really checks, since the idea is to confirm existing knowledge. In this context, finding new information would be considered a surprise, and typically an unpleasant one. A failing check prompts the programmer to change the code to make it work the way he expects. That's the quality assurance angle: a programmer helps to assure the quality of his work by checking it.

Testing, the search for new information, is not a quality assurance practice per se. Instead, testing informs quality assurance. Testing, to paraphrase Jerry Weinberg, is gathering information with the intention of informing a decision, or as James Bach says, 'questioning a product in order to evaluate it.' Evaluation of a product doesn't assure its quality, but it can inform decisions that will have an impact on quality. Testing might involve a good deal of checking; I'll discuss that at more length below.

Checkers Require Specifications; Testers Do Not

A tester, as Jerry Weinberg said, is 'someone who knows that things can be different'. As testers, it's our job to discover information; often that information is in terms of inconsistencies between what people think and what's true in reality. (Cem Kaner's definition of testing covers this nicely: 'testing is an empirical, technical investigation of a product, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek.')

We often hear old-school 'testing' proponents claim that good testing requires specifications that are clear, complete, up-to-date, and unambiguous. (I like to ask these people, 'What do you mean by 'unambiguous'?' They rarely take well to the joke. But I digress.) A tester does not require the certainty of a perfect specification to make useful observations and inferences about the product. Indeed, the tester's task might be to gather information that exposes weakness or ambiguity in a specification, with the intention of providing information to the people who can clear it up. Part of the tester's role might be to reveal problems when the plans for the product and the implementation have diverged at some point, even if part of the plan has never been written down. A tester's task might be to reveal problems that occur when our excellent code calls buggy code in someone else's library, for which we don't have a specification. Capable testers can deal easily with such situations.

A person who needs a clear, complete, up-to-date, unambiguous specification to proceed is a checker, not a tester. A person who needs a test script to proceed is a checker, not a tester. A person who does nothing but to compare a program against some reference is a checker, not a tester.

Testing vs. Checking Is A Leaky Abstraction

Joel Spolsky has named a law worthy of the general systems movement, the Law of Leaky Abstractions ('All non-trivial abstractions, to some degree, are leaky.'). In the process of developing a product, we might alternate very rapidly between checking and testing. The distinction between the two lies primarily in our motivations. Let's look at some examples.
  • A programmer who is writing some new code might be exploring the problem space. In her mind, she has a question about how she should proceed. She writes an assertion—a check. Then she writes some code to make the assertion pass. The assertion doesn't pass, so she changes the code. The assertion still doesn't pass. She recognizes that her initial conception of the problem was incomplete, so she changes the assertion, and writes some more code. This time the check passes, indicating that the assertion and the code are in agreement. She has an idea to write another bit of code, and repeats the process of writing a check first, then writing some code to make it pass. She also makes sure that the original check passes. Next, she sees the possibility that the code could fail given a different input. She believes it will succeed, but writes a new check to make sure. It passes. She tries different input. It fails, so she has to investigate the problem. She realizes her mistake, and uses her learning to inform a new check; then she writes functional code to fix the problem and pass the check.

    So far, her process has been largely exploratory. Even though she's been using checks to support the process, her focus has been on learning, exploring the problem space, discovering problems in the code, and investigating those problems. In that sense, she's testing as she's programming. At the end of this burst of development, she now has some functional code that will go into the product. As a happy side effect, she has another body of code that will help her to check automatically for problems if and when the functional code gets modified.

    Mark Simpson, a programmer that I spoke to at Agile 2009, said that this cyclic process is like bushwhacking, hacking a new trail through the problem space. There are lots of ways that you could go, and you clear the bush of uncertainty around you in an attempt to get to where you're going. Historically, this process has been called 'test-driven development', which is a little unfortunate in that TDD-style 'tests' are actually checks. Yet it would be hard, and even a little unfair, to argue that the overall process is not exploratory to a significant degree. Programmers engaged in TDD have a goal, but the path to the goal is not necessarily clear. If you don't know exactly where you're going to end up and exactly how you're going to get there, you have to do some amount of exploration. The moniker 'behavior-driven development' (BDD) helps to clear up the confusion to some degree, but it's not yet in widespread adoption. BDD uses checks in the form '(The program) should...', but the development process requires a lot of testing of the ideas as they're being shaped.
  • Now our programmer looks over her code, and realizes that one of the variables is named in an unclear way, that one line of code would be more readable and maintainable expressed as two, and that a group of three lines could be more elegantly and clearly expressed as a for loop. She decides to refactor. She addresses the problems one at a time, running her checks after each change. Her intention in running these checks is not to explore; it's to confirm that nothing's been messed up. She doesn't develop new checks; she's pretty sure the old ones will do. At this point, she's not really testing the product; she's checking her work.
  • Much of the traditional 'testing' literature suggests that 'testing' is a process of validation and verification, as though we already know how the code should work. Although testing does involve some checking, a program that is only checked is likely to be poorly tested. Much of the testing literature focuses on correctness—which can be checked—and ignores the sapience that is necessary to inform deeper questions about value, which must be tested. For example, that which is called 'boundary testing' is usually boundary checking.

    The canonical example is that of the program that adds two two-digit integers, where values in the range from -99 to 99 are accepted, and everything else is rejected. The classic advice on how to 'test' such a program focuses on boundary conditions, given in a form something like this: 'Try -99 and 99 to verify that valid values are accepted, and try -100 and 100 to verify that invalid values are rejected.' I would argue that these 'tests' are so weak as to be called checks; they're frightfully obvious, they're focused on confirmation, they focus on output rather than outcome, and they could be easily mechanized.

    If you wanted to test a program like that, you'd configure, operate, and observe the product with eyes open to many more risks, including ones that aren't at the forefront of your consciousness until a problem manifests itself. You'd be prepared to consider anything that might threaten the value of the product—problems related to performance, installability, usability, testability, and many other quality criteria. You'd tend to vary your tests, rather than repeating them. You'd engage curiosity, and perform a smattering of tests unrelated to your current models of risks and threats, with the goal of recognizing unanticipated risks. You might use automation to assist your exploration; perhaps you would use automation to generate data, to track coverage, to parse log files, to probe the registry or the file system for unanticipated effects. Even if you used automation to punch the keys for you, you'd use the automation in an exploratory way; you'd be prepared to change your line of investigation and your tactics when a test reveals surprising information.
  • The exploratory mindset is focused on questions like 'What if...?' 'I wonder...?' 'How does this thing...?' 'What happens when I...?' Even though we might be testing a program with a strongly exploratory approach, we will engage a number of confirmatory kinds of ideas. 'If I press on that Cancel button, that dialog should go away.' 'That field is asking for U.S. ZIP code; the field should accept at least five digits.' 'I'll double-click on 'foo.doc', and that file should open in Microsoft Word on this system.' Excellent testers hold these and dozens of other assumptions and assertions as a matter of course. We may not even be conscious of them being checks, but we're checking sub-consciously as we explore and learn about the program. Should one of these checks fail, we might be prompted to seek new information, or if the behaviour seems reasonable, we might instead change our model of how the program is supposed to work. That's a heuristic process (a fallible means of solving a problem or making a decision, conducive to learning; we presume that a heuristic usually works but that it might fail).

  • Also at Agile 2009, Chris McMahon gave a presentation called 'History of a Large Test Automation Project using Selenium'. He described an approach of using several thousand automated checks (he called them 'tests') to find problems in the application with testing. How to describe the difference? Again, the difference is one of motivation. If you're running thousands of automated checks with the intention of demonstrating that you're okay today, just like you were yesterday, you're checking. You could use those automated checks in a different way, though. If you are trying to answer new questions, 'what would happen if we ran our checks on thirty machines at once to really pound the server?' (where we're focused on stress testing), or 'what would happen if we ran our automated checks on this new platform?' (where we're focused on compatibility testing), or 'what would happen if we were to run our automated checks 300 times in a row?' (where we're focused on flow testing), the category of checks would be leaking into testing (which would be a fine thing in these cases).
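The two-digit adder example above really is trivially mechanizable, which is the author's point. Here is a sketch in Python; the `two_digit_add` function is an assumed implementation of the program the example describes, not code from any real system:

```python
# The program from the example: adds two integers, accepting only -99..99.
def two_digit_add(a, b):
    if not (-99 <= a <= 99 and -99 <= b <= 99):
        raise ValueError("inputs must be in the range -99..99")
    return a + b

# The classic boundary "tests" are pure checks: binary, obvious, mechanical.
def boundary_checks():
    assert two_digit_add(-99, 0) == -99   # valid lower boundary accepted
    assert two_digit_add(99, 0) == 99     # valid upper boundary accepted
    for bad in (-100, 100):
        try:
            two_digit_add(bad, 0)
            return False                  # an invalid value was not rejected
        except ValueError:
            pass                          # invalid boundary rejected, as expected
    return True

print(boundary_checks())  # True: all four boundary checks pass
```

Everything here is confirmation of stated expectations; nothing in it would probe performance, usability, or any risk the checks were not programmed to notice.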
There will be much, much more to say about testing vs. confirmation in the days ahead. I can guarantee that people won't adopt this distinction across the board, nor will they do it overnight. But I encourage you to consider the distinction, and to make it explicit when you can.
"

Monday, August 31, 2009

Fiat Punto Evo to Feature "Blue&Me - TomTom": The New Integrated ...

Basic alterations to the TomTom code to pave the way for this cooperation were tested by... me :-)

Fiat Punto Evo to Feature "Blue&Me - TomTom": The New Integrated ...: "AMSTERDAM--(Business Wire)-- Fiat Group Automobiles and TomTom announce that the two companies have jointly developed an integrated portable navigation ...


"

Saturday, August 29, 2009

Test Estimation Is Really Negotiation

Good article.
Nice metaphors on testing... and so true: it always comes down to H. And H can be negotiated or weighed (risk analysis), which will determine test coverage, and therefore the quality information reported about the software.
As the tester's maxim goes, "there's no such thing as bug-free software."
Time constraints are the everyday ghost for quality analysts.

"Test Estimation Is Really Negotiation: "Some of this posting is based on a conversation from a little while back on TestRepublic.com.

If anyone has a problem with 'test estimation', here's a thought experiment:

Your manager (your client) wants to give you an assignment: to evaluate someone's English skills, with the intention of qualifying him to work with your team. So how long would it take you to figure out whether a Spanish-speaking person spoke English well enough to join your team? Ponder that for a second, and then consider a few different kinds of Spanish-speaking people:

1) The fellow who, in response to every question you ask in English, replies, 'Que?'

2) The fellow who speaks very articulately, until you mention the word 'array'. And then he says, 'Que?'

3) The fellow who spouts all kinds of technical talk perfectly, but when you say, 'Let's go for lunch,' says 'Que?'

4) The fellow who speaks perfectly clearly, but every now and then spouts an obscenity.

5) The fellow who speaks English perfectly, but has no technical ability whatsoever.

6) The fellow who has great technical chops and speaks better English than the Queen, but spits tobacco juice in the corner every minute and a half.

How long you need to test a candidate's capacity to speak English isn't a question that has a firm answer, since the answer surely depends on

a) the candidate;
b) the extent to which you and the client want to examine them;
c) the mission upon which the candidate will be sent;
d) the information that you discover about the candidate;
e) the demands and schedule of the project for which you're qualifying candidates;
f) the criteria upon which your client will decide they have enough information;
g) the amount of money and resources that the client is prepared to give you for your evaluation;
h) the amount of time that the client is prepared to give you.

So, yes, you can provide an estimate. Your client will often demand one. Mind you, since (H) is going to constrain your answer every time, you might as well start by asking the client how long you have to test. If the client answers with a date or a time, you don't have to estimate how long it's going to take you.

Suppose the client doesn't provide a date. Do you know anything about the candidate? Before the interview, you find out that he's only ever been a rickshaw driver; no previous experience with testing; no previous experience with computers. He speaks no English, but has a habit of screaming at the top of his lungs once every twenty minutes. In this case, you probably don't have to estimate. It would take less time to report to your client that the candidate is likely to be unsuitable than it would to prepare an estimate for how long it will take to evaluate him. Why bother?

So here's another candidate. This woman has been working at Microsoft for ten years, the first eight as a tester and the last two as a test lead. Her references have all checked out. The mission is to test a text-only Website of three pages, no programmatic features. In this case, you probably won't have to estimate. It would take less time to report to your client that the candidate is likely to be qualified (or overqualified) than it would to prepare an estimate. Why bother?

The information that you discover in your evaluation of the candidate's English skills is to a large degree unpredictable. The problem that sinks him might not be related to his English, and you might not discover a crucial problem until after he's been hired. The problems that you discover might be deemed insufficient to disqualify him from the job, since ultimately it's the manager who's going to decide.

So instead of thinking about estimation in testing, think about negotiation. Testing is an open-ended task, and it must respond to development work. The quality of that development work and the problems that we might find are open questions (if they weren't, we wouldn't be testing). In addition, the decision to ship the product (which includes a decision to stop testing) is a business decision, not a technical one.

In cases where you don't know things about the candidate, you can certainly propose a suite of questions and exercises that you'll put them through, and negotiate that with the client. In the case of the first candidate, the very first bit of information that you receive is likely to change all of your choices about what to ask them and how you're going to test them. In the second case, your interview will probably be quick too, but for the opposite reason. It's in the cases in between, when you're dealing with uncertainty and want to dispel it, that your testing will take somewhat longer, will require probing and investigation of questions that arise during the interview—and that may require extra time that you may have to negotiate with your client. One thing for sure: you probably don't want to spend so much time designing the protocol that it has a serious negative impact on your interviewing time, right?

For those who are still interested (or unconvinced) and haven't seen it, you might like to look at this:

http://www.developsense.com/2007/01/test-project-estimation-rapid-way.html
"

My Tracks for Android is a Fitness Geek's Dream [Downloads]

Nice Android app

My Tracks for Android is a Fitness Geek's Dream [Downloads]: "

Android: If the GPS mapping and performance analysis of apps like RunKeeper give your Android phone iPhone envy, Google's got a geeky alternative. My Tracks plots runs, cycles, and other fitness forays to custom Google Maps or a Google spreadsheet.

You get far more than just a line showing where you went on your last trip with My Tracks. While running, or afterwards from your My Maps account, you can see your elevation profile over time or distance, check your speeds, set up waypoints for longer sojourns, and share your GPX or KML output over email or Twitter (via Twidroid), or export the data to your SD card.

Google recommends Android users download a few 'sister applications' to make My Tracks more useful, including the aforementioned Twidroid, the Power Manager app to keep My Tracks from completely dousing your (admittedly spare) battery reserves, and the My Maps Editor to view and share your exercise maps. My Tracks still lacks the slick looks of RunKeeper or Nike+, but if you're really impressed by raw data and beautiful statistics, you can get a whole lot of them, as evidenced in the example here (not mine, believe me).

My Tracks is a free download for Android-based phones.



"