Hi Petr, all,
On 20/12/2010 18:09, Petr Mladek wrote:
Hi Jean,
I am sorry for the late reply. I have been busy with LO-3.3-rc2.
I am eager to test it. ;-)
Jean-Baptiste Faure wrote on Thu 16. 12. 2010 at 06:05 +0100:
On 13/12/2010 15:04, Petr Mladek wrote:
This is why I proposed 7 days instead of 5 days. 7 days have the weekend
included by definition :-)
Not really, if the period starts just before the weekend. If a new RC is
released on Friday for QA testing, most of the testers will not be able
to start their tests before Monday.
Fair enough. We will try not to release on a Friday. Alternatively,
would it help if we pre-announce the upcoming rc2 a few days before it
is available?
Yes.
It might always be delayed because of technical problems. Though, I
think that we could predict it pretty well two days before it is
published.
In that case we will be able to alert the QA team. That's OK for me.
Hmm, I think that OOo-3.3 release is not a good example:
Yes, from the RC cycle point of view, it is an example that LibO should
not follow.
The process is not sufficiently clear:
Yeah, we are still setting the processes up.
what must the QA team do when a stopper is found and confirmed?
The current solution is described at
http://wiki.documentfoundation.org/Release_Criteria#Blocker_Bug_Nomination
My question was whether or not to continue the tests after the first
stopper has been found. ;-)
If a new RC is built immediately, a second stopper can be fixed only on
this new RC, so there is little point in continuing to test the old RC.
This is all the more true because the new RC can introduce regressions
relative to the old one.
You are right that any fix could introduce a regression. Though, I think
that it always makes sense to continue with testing:
+ there might be more blocker bugs; they are usually
independent; it is better to find all blockers as soon as
possible
+ the fixes for the stable branches should be tested and reviewed
  before they are committed; this should reduce the number of
  regressions to a minimum; the question is whether we really need
  to repeat the whole QA for each rc; I think that it should depend
  on the number and complexity of the fixes; this information
  should be mentioned in the announcement mail
+ it takes a few days to produce a new rc (time needed to fix the
  bug, do some testing, build on Windows, Linux, and Mac, and
  upload to mirrors)
I agree.
So I think we need a very clear QA test process: known dates for RCs,
I am not sure we want a regular schedule, e.g. doing the rc release
every Thursday or every second Thursday. I think that such an approach
would work well for alpha/beta builds.
I meant the date of the next RC once a stopper has been found. In other
words, the aim is to give information to the volunteers so that they can
decide whether they have to test now or can wait until the next weekend.
In the case of release candidates, I think that it is better to do them
according to the current state. I mean to release them when all blockers
are fixed, or when more than 2 weeks have passed without a release.
Note that we plan to upload also daily builds in the future. They might
be used to verify fixes even before the official release.
when it is useful to search for blockers, and when we can stop searching
for stoppers until a new RC is released.
I think that stopping makes sense only when the application does not
start at all, so that the blocker bug blocks the testing completely.
There is an idea of more frequent micro bug-fix releases. The published
minor release should be well usable for 99% of normal users. The micro
releases could be done every month or so. They would include only safe
and reviewed bug fixes and should not need full QA. The second or third
micro release should be well usable for 99% of experienced users.
OK for me. The only condition I see for making end-users happy is that
the status of each released version must be very clear to the end-user.
Individual users or small organizations may always want to use the
latest micro-release, while bigger organizations, such as public
administrations, will have a longer update cycle.
Yes, we need to make it clear.
Also I think that it does not make sense that every national team would
do the whole testing. IMHO, 95% of the functionality is language
independent, so most of the testing can be shared and distributed.
Right. For OOo, the NL QA teams do only the Release Sanity Test. But the
current test period on OOo 3.3.0 shows that the last blockers were found
by the community, and these blockers could not have been found by
automatic or TCM tests; they were found by users who ran tests on the
documents they use in real life.
To do that we need time.
I see. Well, I think that even released OOo versions included pretty
annoying bugs, and some of them would be considered blockers. Such bugs
were usually fixed in the micro release x.y.1.
We just need to define the right compromise that would be acceptable to
all sides: developers, testers, and users.
Sure.
Just to get better picture. How much time would you need to finish all
known manual tests?
For OOo and the Release Sanity Test, the time needed is about 2 to 4
hours to complete all 25 tests in the scenario.
Okay, this sounds doable within the week if you can plan it.
But, as said, the last blockers are out of the scope of these tests and
were found with test cases and files from real life.
I guess that many of these bugs were found by normal users who used OOo
in their daily work. I think that this actually supports the idea of
more frequent releases and tested/reviewed fixes.
+1
Manual testing makes sense and is really helpful, definitely. Though,
it cannot verify the complete functionality within days or even weeks.
+1
Best regards
JBF
--
Only open formats can ensure the long-term preservation of your documents.