
Odi's astoundingly incomplete notes


Webapp scalability

While researching the scalability of the Wicket framework, I came across what seems like a widespread myth. On a couple of mailing lists and forums people were basically saying: "Scalability is limited by how much data is stored in a session. But memory is cheap, so don't worry." Sadly, this slightly misses the point.

Scalability is limited by four things: CPU, memory, disk throughput and network bandwidth. Real applications typically hit only one of these limits. If we just look at the web framework, the DB is out of our focus and we assume it is not the bottleneck. If you're not doing video streaming, network bandwidth is not likely to be an issue for you; and if you are doing video streaming, you'd better have plenty of spare bandwidth anyway. If you are not serving files (or, again, streaming), disk throughput will not be an issue either. What remains is CPU and memory. If CPU is your limit (the webapp does number crunching), there is not much you can do: optimize your algorithms. So the only really interesting limiting factor is memory.

Now in Java we have a garbage-collected heap, so memory is not completely independent from CPU: GC can consume a fair amount of CPU and it causes latency. So it is really important that the application produces as little garbage as possible. In a generational GC heap (such as with the Concurrent Mark and Sweep collector, CMS) there is cheap and fast collection of the relatively small Eden space (the youngest generation) and expensive, slow collection of the Old generation (the bulk of the heap). It's key that you prevent the big collections from happening; they really hurt. Rather have 10'000 small collections than one big one. That means you should not "churn" objects: be conservative with what you allocate. And it means that the objects you do allocate temporarily (for a request or shorter) should have the shortest possible lifetime. Every object that survives an Eden collection is a performance killer: it has to be moved to the next generation and will eventually have to be collected from there by a big collection!
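
To make this churn visible, here is a minimal sketch (my illustration, not from the original post; the class name and the amount of garbage per simulated request are made up). It allocates short-lived garbage and then prints the per-collector statistics, so you can see that the young-generation collector runs often and cheaply while the old-generation collector ideally never runs:
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcChurnDemo {
    public static void main(String[] args) {
        for (int request = 0; request < 10000; request++) {
            // roughly 1 MB of garbage per simulated request, unreferenced
            // right afterwards, so it dies cheaply in Eden
            byte[] garbage = new byte[1024 * 1024];
            garbage[0] = 1;
        }
        // print how often each collector ran and how long it took
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}
Run it with -verbose:gc to watch the individual collections happen.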

So if each request produces 1 MB of garbage and your Eden space is 100 MB, every 100 requests will cause a small collection. Fine if these are the only requests on the system: the requests are over, so all their objects are unreferenced and can be collected. The collection will be quick and efficient.
If you now make 200 requests in parallel, the Eden space will be full half-way through. If those objects are still referenced at that point, they cannot be collected; they will be moved to the next generation and you have hit the scalability limit: the Old generation will quickly fill up and big collections will have to run, consuming CPU and causing latency. If however most of these objects are no longer referenced, they can be collected quickly and your webapp still scales.

What about session state then? Session state usually doesn't change much, so it is more or less constant. Its objects will live in the Old (or even permanent) generation. It's certainly a bad idea to constantly add and remove objects to/from the session, because those objects will have to be collected from the Old generation by a big collection. Session state is also usually really small, just a few KB. Do the maths: a 2 GB heap can hold 20'000 sessions of 100 KB each. That's more than enough; sessions will not be the bottleneck. Of course, don't store huge data in sessions.

To sum up this posting: if you want a scalable Java webapp, allocate as little as possible, keep temporary objects short-lived so they die in Eden, and keep session state small and constant. For frameworks that means the per-request machinery must not churn objects. And watch out for Java constructs that are infamous for a lot of allocation overhead, such as string concatenation and autoboxing.
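A classic example of such churn (my illustration, not the author's) is naive string concatenation, which allocates a new String plus an intermediate builder on every pass, compared to appending into a single reused StringBuilder:
public class ChurnExample {
    static String concat() {
        String s = "";
        for (int i = 0; i < 1000; i++) {
            s += i;        // allocates a new String (plus a builder) each pass
        }
        return s;
    }

    static String append() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i);  // reuses one builder, producing far less garbage
        }
        return sb.toString();
    }
}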
posted on 2009-03-27 01:03 UTC in Code | 1 comments | permalink
All the GC algorithms Sun/Oracle's JVM supports are generational, not just CMS.

CMS is able to continue application execution while the old generation is collected, at the expense of throughput.
ParallelOldGC will stop all threads during a GC cycle -> longer pauses, but lower overhead in general.

- Clemens

Gentoo e2fsprogs-libs, ss and com_err

Today Gentoo released some updates that cause a conflict. Its resolution may damage your system:
[ebuild     U ] sys-fs/e2fsprogs-1.41.2 [1.40.9] 
[ebuild  N    ] sys-libs/e2fsprogs-libs-1.41.2  USE="nls" 
[ebuild     U ] net-fs/nfs-utils-1.1.3 [1.1.0-r1] 
[blocks B     ] sys-libs/ss (is blocking sys-libs/e2fsprogs-libs-1.41.2)
[blocks B     ] <sys-fs/e2fsprogs-1.41 (is blocking sys-libs/e2fsprogs-libs-1.41.2)
[blocks B     ] sys-libs/com_err (is blocking sys-libs/e2fsprogs-libs-1.41.2)
[blocks B     ] sys-libs/e2fsprogs-libs (is blocking sys-libs/ss-1.40.9, sys-libs/com_err-1.40.9)
The reason for the block is that ss and com_err have both been integrated into e2fsprogs-libs. Blocks are normally not a problem; they happen from time to time and can easily be resolved. This one, too, can seemingly be resolved easily by unmerging ss and com_err and then merging e2fsprogs-libs. But ss and com_err are part of the system profile, and unmerging them even for a short time is dangerous.
Be really careful NOT to unmerge sys-libs/ss and sys-libs/com_err from your system! These are critical core libraries used by dozens of programs (such as wget, curl, openssh, apache, postfix), which will be broken after unmerging. There is a manual upgrade path in comment #7 of bug 234907, but I suggest you wait until this is resolved automatically by the next version of portage. I'm pretty sure the Gentoo developers will fix this in the portage tree soon.

Well, they haven't so far. So here is the paranoid way to do it:
# don't depend on the net later
emerge -fuD world
# backup in case something goes wrong
quickpkg ss com_err
# unmerge
emerge -C ss com_err e2fsprogs
# merge
emerge -1 e2fsprogs-libs
emerge e2fsprogs
# fix broken things
revdep-rebuild


posted on 2008-10-28 10:34 UTC in Code | 37 comments | permalink
Since e2fsprogs-libs provides ss and com_err you should be able to safely unmerge them temporarily as long as you have the new e2fsprogs-libs and e2fsprogs source downloaded already.

Here's what I did:

emerge -auDNv --fetchonly world
emerge -C ss com_err
emerge -auDNv --oneshot e2fsprogs-libs e2fsprogs
emerge -auDNv world

All is well and everything is updated.

--Bkrumme
"emerge -C ss com_err"

ARE YOU SERIOUS??

I have a clue too:

emerge -C gentoo
worked for me too!
Should work -- not having com_err messes up wget, but if you do --fetchonly first, wget never gets called, and all is well.
Ran into this block today. I'm curious as to why this happens. Is it a developer mistake? And why is it not being resolved in a timely fashion, especially since it affects, and has affected, many Gentoo users?
Bicer.
Blocks happen from time to time, especially when the package structure is reorganized as in this case. That's perfectly normal and not usually a problem.

But this case is especially bad, because it requires unmerging packages from the system profile. Unlike during a normal update of such a package, there will be a short period where the package is not installed, breaking dozens of programs that may in turn be necessary to merge the updated package. You can end up in a vicious circle if you don't know what you are doing, requiring a boot from rescue media.
Unmerging com_err, as I did, is really a big mistake. Running emerge was not even possible after that.

The problem is I saw this post only after I made the mistake. Here is what I did to temporarily have a working system, until the developers have fixed the problem: 1) I downloaded (with Firefox; Konqueror was broken as well) a Gentoo live CD, then booted the CD with the nox parameter to get a root prompt; 2) I ran "equery files com_err" to see what files I had stupidly erased; 3) I mounted the partition of my hard drive and copied the files indicated by the equery output; 4) reboot worked.

This is an awful thing to do; only people like me who don't know anything about computers would do it. I hope you don't have to.
If you run into the problem that you unmerged com_err and your portage is not working anymore, DON'T PANIC. Do NOT log off or reboot!

just reemerge com_err

emerge com_err

this should work, since the source file should still be in /usr/portage/distfiles

if it is not, download e2fsprogs-1.40.9.tar.gz from any Gentoo mirror (apparently I am not allowed to post the link here :( )
using Firefox or links or another computer. Copy it to /usr/portage/distfiles.

then run emerge =com_err-1.40.9

now you should be able to run

emerge -uDf world

this fetches all the files you need.

now you can safely do

emerge -C com_err && emerge -uD world

good luck!

Till Korten
Now, what you should not do:
emerge --unmerge sys-libs/ss sys-libs/com_err
emerge -aDuv world

Then Sh1t, ok, no problem, let's just use firefox -> failed. Lynx -> failed. wget -> failed.

Ok let's reboot and fetch the tarball from my dual boot. Done, and then reboot failed (unable to mount the root partition).

Sh1t : could not find the gentoo live CD.

Now I am running OpenSuse 10.3 (old Dvd from a magazine I had near my computer).

Bye gentoo, bye :-|

Joel
that's why you should make a backup at least twice a week and delete the older backups. It is really a BIG help and really worth doing. :) I have the same block now; maybe I will wait or make a backup before I try things with such a risk.

If anybody really knows how to fix it, just let me know ^^
Thank you, I was feeling a bit lost about this.

Mark
well, I just unmerged e2fsprogs... and then continued the upgrade....
because I knew that wouldn't be much of a problem... the other two I didn't know about...

problem solved...
Tried the upgrade path given in comment #7 of the bug linked in the initial post, without any problem ...
I did what the first comment was saying, and everything worked like a charm.


Here's what the guy was suggesting:
"""""""""""""""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""""""""""""""""""""""""
Since e2fsprogs-libs provides ss and com_err you should be able to safely unmerge them temporarily as long as you have the new e2fsprogs-libs and e2fsprogs source downloaded already.

Here's what I did:

emerge -auDNv --fetchonly world
emerge -C ss com_err
emerge -auDNv --oneshot e2fsprogs-libs e2fsprogs
emerge -auDNv world

All is well and everything is updated.

--Bkrumme
""""""""""""""""""""""""""""""""""""""""""""""""""""""
""""""""""""""""""""""""""""""""""""""""""""""""""""""


Regards,
Goratrix@GR
I did this and it works fine :-)

emerge -f system (prefetch all needed packages. IMPORTANT!)
emerge -avC com_err ss e2fsprogs
emerge -avuD system
revdep-rebuild
I pretty much did what Joel did: I rebooted - and that probably wasn't the smartest thing to do.
Still no need to say goodbye to your gentoo-system.

1. get the gentoo boot-cd (what do you have a dual boot system for? ;-)) and boot it

2. make sure you can connect to the net

3. cd /
mount /dev/*placeofyourrootpartition* /mnt/gentoo
mount -t proc proc /mnt/gentoo/proc
chroot /mnt/gentoo /bin/bash
env-update
emerge =e2fsprogs-1.40.9

4. reboot and have your system back. Now you can try all the solutions suggested on this page.
And, guess what, this is much quicker than installing OpenSuse from that old magazine-dvd

Michael
Thanks Michael for your help. Anyway, I did not have a Gentoo live CD available (mine was something like the 2004.1).

Last weekend I trashed my OpenSuse (I quickly hated it).

I now run Debian unstable. It works pretty much out of the box. Updates now look far less scary than they were with Gentoo.

Joel
I did what the first post said, but the block was still there, so I had to unmerge e2fsprogs as well, then it worked. I haven't rebooted yet, but I see no reason why it shouldn't work. ;)
You absolutely need a root shell open to do this. sudo and su both require com_err and won't work. You'll end up screwing yourself if you try to "sudo emerge e2fsprogs".
Using temporarily unstable portage:

#Unmask portage:
echo "=app-admin/eselect-news-20080320 **" >> /etc/portage/package.keywords
echo "=app-admin/eselect-1.0.11-r1 **" >> /etc/portage/package.keywords
echo "=sys-apps/portage-2.2_rc13" >> /etc/portage/package.keywords

#Resolve with new portage
emerge =sys-apps/portage-2.2_rc13
emerge e2fsprogs #will pull all other stuff

#reverting back to stable portage
vim /etc/portage/package.keywords #undo 3 unmasking made before
emerge eselect portage #will complain about all ebuilds of eselect-news masked
emerge --unmerge eselect-news #if previous step complained
#emerge eselect-news #otherwise
revdep-rebuild #probably unnecessary, in my case nothing important was rebuilt
That worked well, thanks.

Stephen Reese
www.rsreese.com
There's an IMO better way:

$ quickpkg e2fsprogs com_err ss
$ emerge -BO e2fsprogs e2fsprogs-libs
$ emerge -C ss com_err
$ emerge -K e2fsprogs e2fsprogs-libs

Using -BO, emerge will build the new versions of e2fsprogs and e2fsprogs-libs without checking for and complaining about the blocks, and store the newly compiled files in a tbz2 in your $PKGDIR. (This only works if all other dependencies are up to date; you may have to emerge -pvuD e2fsprogs first and emerge -1 all dependencies other than the blocks.) Once ss and com_err are removed, you only need a functioning tar and bzip2 (and, of course, portage) to install the missing libraries. And if portage were to break, untarring the package archives would be trivial.
big thanks - was pounding my head against emerge the last half hour
as I did it now, it seems quite easy to me:

emerge --fetchonly e2fsprogs-libs e2fsprogs
emerge -C ss com_err
emerge e2fsprogs e2fsprogs-libs
revdep-rebuild
I actually unmerged ss and com_err before I knew how important they are, but I apparently got extremely lucky since nothing broke. I was able to emerge e2fsprogs-libs (wget worked) and everything was fine. o_O
I managed to unmerge both com_err and ss and then emerge them back in again without calamity, once I realized I was on the road to a bricked system.

davidbalt
Wow.

Look, if "A" is updated to include (and block) "B", you can usually simply do

# emerge --onlydeps A
# emerge --nodeps -u A
# emerge -C B

Done this way, you can pull the plug on your box at any time during the process. Your system is never put into an inconsistent state, except perhaps for brief moments while the binaries are installed.

Sometimes things get a little more complicated than that of course... but rendering your system temporarily unbootable is not a good way to go if you can possibly avoid it.

-myosisok
A good tip is to always have a root term open before unmerging anything (since sudo/pam etc could break if you don't understand the modules that you're affecting).

I unmerged the libs, and broke wget, but was able to download and compile a static copy of wget and place it at /usr/bin/wget (saving the orig wget to wget.dynamic) and I was able to re-emerge the libs back in to resolve the block.

But unmerging said libs also broke sudo, so be sure to have a root term open.
Thank you! Worked like a charm.

-Chris
Oops. That's what I get for being Mr. Expert... I was quite wrong, above, when I suggested

# emerge --onlydeps A
# emerge --nodeps -u A
# emerge -C B

as a pseudo-generic solution.

In fact, this left me with an incomplete system.

After I did the above, I was quite broken, because apparently there was some "backward compatibility" feature, so that when I compiled "A", in my case, the features of "B" were not built into the resulting package or install.

I had thought it would create a collision but otherwise work great. No such luck. So I basically did

# emerge --onlydeps A
# emerge --nodeps -u A
# emerge -C B
# emerge -1 A

which still makes my box broken (but only for a bit). This is lame, and it's Gentoo's fault. If I weren't a lazy bastard (and even so, perhaps I will get around to it) I would file (or more likely, find, perhaps with a perfect solution :) ) a bug about this.
It may be a good idea to sync first; otherwise this works great. Good job.
Peter

Here's what I did:
emerge --sync
emerge -auDNv --fetchonly world
emerge -C ss com_err
emerge -auDNv --oneshot e2fsprogs-libs e2fsprogs
emerge -auDNv world
Try this URL: bugs.gentoo.org/show_bug.cgi?id=234907#c7

yolabingo
Lol, it's 14.03.2009 and this fix still works for this bug. :)
Yep, this is pretty bad. It's now April 2009 and it's still broken. I followed the last set of directions in the replies and it worked fine. Thanks.
July 2009, still broken and the last suggestion is still working.
Aug 1 2009, still broken!
so glad for this fix, the last one worked well, and completed it with revdep-rebuild
cheers
rdav
A better way to solve this is:

echo sys-libs/com_err >> /etc/portage/package.mask
echo sys-libs/ss >> /etc/portage/package.mask

emerge -pv mit-krb5 e2fsprogs

This will uninstall com_err and ss automatically, without having to go through uninstalling them manually, which can be very damaging to the system.

This is explained in this bug resolution:
bugs.gentoo.org/show_bug.cgi?id=234907


Oracle sqlplus hangs after killing processes

Something to remember when killing an Oracle instance manually (which really should not be done): clean up its shared memory allocations. Use the ipcs command (ipcs -m) to list the shared memory segments in use, and ipcrm to remove them. Alternatively, reboot the system.

If you fail to do that, sqlplus will no longer start up; it will just consume a lot of CPU instead of telling you that it cannot allocate shared memory.

posted on 2008-10-17 10:28 UTC in Code | 1 comments | permalink
thanks, saved me a few hairs! f*cking cr*pware.
--
mtve

Disable Oracle's password expiry

Unlike older releases, Oracle 11g sets password expiry by default. That's really annoying.
So let's get rid of these annoyances with:
ALTER PROFILE DEFAULT LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED
  PASSWORD_LIFE_TIME UNLIMITED;
And also let's turn off the default auditing:
NOAUDIT ALL;
DELETE FROM SYS.AUD$;


posted on 2008-10-03 18:26 UTC in Code | 14 comments | permalink
You have my vote on that one. I manage SAP systems where no one ever physically signs on as the schema owner; the account is just used by SAP to connect, with an encrypted password stored in a table.
The first anyone knows of a problem is when the SAP system stops.

A pain in the proverbial.

Brian Jones
Hold on... SAP is not a database.

Since you are talking about applications: Oracle Applications uses the same scheme you mentioned for passwords as SAP does (at the application level).
Create a new profile that is set up as you want it, and assign the application and system users to that profile. Then you can configure the default profile to enforce password management standards for end users. I've been doing this since v8i.
kbork
Agreed, SAP is not a database, but it connects to Oracle with a standard user. If that user has expiry set, no one will know the password has expired until the account is locked, because no one sees the warnings, and then SAP stops.
Brian
Oh, thank you so much. This really got us in trouble. What a stupid idea!
Although the language may be disputable, the facts are stated in the clearest of manners.

Thanks very much

Thanassis
I shout out the one word: hell yeah, that's what I searched for for a long time. It solved exactly that problem for my PeopleSoft app servers.

Thanxs Man

Regards Zoebi
interesting: I'm not SAP experienced, but PeopleSoft instead. From this thread, SAP implements "users" in a similar if not identical manner to PeopleSoft. That is to say, users are defined within the APPLICATION, not at the Oracle user level.

The Oracle user 'people' is used to validate a PeopleSoft user id... and then, and only then, is the PeopleSoft connect id switched over to the Oracle account that actually owns the application and program code tables.

In short, apparently just like SAP, application users are never set up as Oracle users.

And when the 'people' id expires, all background processing most definitely croaks, and with very misleading error information.
I just spent a day trying to figure out why my app wouldn't run. I finally found that the DB user could not log on; the app is like SAP in that it uses a default user for the DB, with users controlled at the app level. I hate it when vendors start making rules for you where the rule used to be optional. Windows does this stuff as well; it drives me nuts.
Thanks for the info!
thanks for the very helpful post! =D
Thanks for this post!!
Thanks,
JerryQ
Thanks! V2Aso
Thanks,Very helpful post :)

Don't generate WSDL

The whole world, please stop generating WSDLs! It's bollocks.

A WSDL file is an interoperability document. Other applications depend on it. For this reason its content must be fully under your control. The exact content of this file must not depend on some half-baked decision by a stupid WSDL generator that doesn't know what it's doing.

The real world out there is unfortunately as full of bad WSDL generators as it is of broken WSDL interpreters. Some do not properly support multiple namespaces. Some only accept a single embedded schema. Most, of course, only support the SOAP binding. Some expect certain namespace prefixes, etc. The bugs and madnesses are endless. So you must be able to tweak this file to work around such interoperability issues. Sometimes the users of your service may even have to pre-process the WSDL with an XSLT transformation to accommodate certain bugs. Use your favourite WSDL editor, but be prepared to make manual changes in the XML if necessary.

At the same time, be pragmatic about which WS features you use. The fancier the features, the more interoperability problems you will face.


posted on 2008-06-16 17:47 UTC in Code | 0 comments | permalink

ORA-24816 and Hibernate

This is a funny story. We experienced a strange Oracle error last night:
ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column
I had never met that beast before. Ignore the fact that the error message is completely cryptic. Ignore the fact that Oracle never mentions the name of the table and column causing the problem. The table in question was easy enough to find. It contains about 70 columns (a typical number for business data from a legacy system). Three of them had recently been converted to CLOBs, as they could exceed 4000 characters.

At first glance the error looks completely random and very much like a driver issue. But the case was at least reproducible with some well-known data. So I considered an Oracle JDBC driver update to 10.0.2.0.4. Looking at the long changelog, which promises fixes for some scary bugs, I hoped they had fixed this one as well. But no, the error still occurs with this latest version.

As some postings suggested, rearranging the column order (can you believe it!!!) might help. Well, we are not writing the SQL manually; EJB3 with Hibernate as the implementation does that in the backend. Turning on SQL logging revealed that Hibernate chose to arrange the columns so that two LOB columns were next to each other. Interestingly, the two columns were also adjacent in the entity bean definition (mapping). In a desperate act I simply rearranged the getters and setters for these fields, so that there were plenty of non-LOB columns between the two LOBs. It seems to work.

UPDATE: It turns out that Oracle documents this as a limitation: LONG bind variables must come last in a statement. That's easily arranged by reordering the getters and setters in the Hibernate mapping. Unfortunately there is one case where that doesn't work: if you use joined inheritance, Hibernate will put the primary key join column last, and there is no way to change that short of patching Hibernate.
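
For illustration, here is a minimal sketch of the workaround (the entity, table and column names are made up; the alphabetical-ordering trick comes from a comment below, and which ordering your Hibernate version uses may vary):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

// Hypothetical entity: the "z" prefix makes the LOB property sort last,
// so Hibernate binds the CLOB column after all the ordinary columns and
// Oracle's "LONG binds must come last" limitation (ORA-24816) is satisfied.
@Entity
@Table(name = "LEGACY_DATA")
public class LegacyData {

    @Id
    private Long id;

    @Column(name = "SHORT_TEXT")
    private String shortText;   // plain VARCHAR2 column, binds first

    @Lob
    @Column(name = "BIG_TEXT")
    private String zBigText;    // CLOB column, deliberately named to sort last

    // getters and setters omitted; their order mirrors the fields above
}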

If you can learn anything from this, it is how crucial a feature it is for an O/R mapping framework to give the developer complete control over the generated SQL when needed. It is the number one mistake of O/R frameworks not to allow that. This is a very hard wall that you will sooner or later hit in any project.


posted on 2008-06-05 15:04 UTC in Code | 2 comments | permalink
I got this exception as well.
We found out that Hibernate sorts the columns in the insert alphabetically by the names of the fields in the class.
We just added an 'x' before each LOB field name and it worked.
this problem just started occurring in a legacy system (using a really old Hibernate and a less old Oracle).

The insert was made into a table with some CLOB fields and one varchar(1500). The problem seemed to occur when both the value saved to the CLOB and the one saved to the varchar were longer than 1000 characters.

Rearranging the column order was too hard to do; instead I changed the varchar(1500) to a CLOB as well.

It seems to work, for now at least.

Java performance tests

I have some JUnit tests that exercise and compare the performance of certain classes. They usually look like this:
long start = System.currentTimeMillis();
test1();
long end = System.currentTimeMillis();
long duration1 = end - start;
start = System.currentTimeMillis();
test2();
end = System.currentTimeMillis();
long duration2 = end - start;
assertTrue(duration1 > duration2);

This is useful to detect performance regressions. During development of these tests I came across a very obvious but interesting fact: do not use differences of the system timers for such tests. By system timers I mean System.currentTimeMillis() and System.nanoTime().

Rather, the ThreadMXBean.getCurrentThreadCpuTime() method should be used. This makes the test independent of the load on the system, of most scheduling artifacts, and even of garbage collection. All in all I get very stable and consistent results from this accounting timer. So my tests now look like this:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean mx = ManagementFactory.getThreadMXBean();
long start = mx.getCurrentThreadCpuTime();
test1();
long end = mx.getCurrentThreadCpuTime();
long duration1 = end - start;
start = mx.getCurrentThreadCpuTime();
test2();
end = mx.getCurrentThreadCpuTime();
long duration2 = end - start;
assertTrue(duration1 > duration2);
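
One caveat (my note, not from the original post): thread CPU timing is an optional JVM feature, so a defensive test should check that it is available and enabled before trusting the numbers (imports as above):
ThreadMXBean mx = ManagementFactory.getThreadMXBean();
// fail fast if the JVM cannot account CPU time per thread
if (!mx.isCurrentThreadCpuTimeSupported()) {
    throw new IllegalStateException("thread CPU time not supported");
}
// some JVMs support the feature but ship with it disabled
if (!mx.isThreadCpuTimeEnabled()) {
    mx.setThreadCpuTimeEnabled(true);
}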

posted on 2008-06-02 15:47 UTC in Code | 0 comments | permalink

Oracle init script

Here is a simple init script for Oracle 10/11 on RedHat Linux. It assumes that the instances to start/stop are listed in /etc/oratab and that the oracle user's shell environment sets ORACLE_HOME correctly.
#!/bin/sh
#
# chkconfig: 345 98 10
# description: Oracle
#

#
# change the value of ORACLE to the login name of the
# oracle owner at your site
#

ORACLE=oracle

case $1 in
'start')
cat <<-"EOF"|su - ${ORACLE}
# Start Oracle Net
if [ -f ${ORACLE_HOME}/bin/tnslsnr ] ;
then
echo "starting Oracle Net Listener"
${ORACLE_HOME}/bin/lsnrctl start
fi
echo "Starting Oracle databases"
${ORACLE_HOME}/bin/dbstart
${ORACLE_HOME}/bin/emctl start dbconsole
EOF
;;
'stop')
cat <<-"EOF"|su - ${ORACLE}
echo "shutting down"
${ORACLE_HOME}/bin/emctl stop dbconsole
# Stop Oracle Net
if [ -f ${ORACLE_HOME}/bin/tnslsnr ] ;
then
echo "stopping Oracle Net Listener"
${ORACLE_HOME}/bin/lsnrctl stop
fi
echo "stopping Oracle databases"
${ORACLE_HOME}/bin/dbshut
EOF
;;
*)
echo "usage: $0 {start|stop}"
exit
;;
esac
#
exit

posted on 2008-05-08 14:20 UTC in Code | 0 comments | permalink

Upgrade kernel, get 30% higher Java performance

I don't know what causes this, but have a look at the latest Linux kernel performance benchmark results yourself. A 30% (!) performance gain in a Java benchmark is impressive. My guess is that the new scheduler is responsible. It could make a huge difference for application servers, which typically run hundreds of threads. It looks like it may have to do with VMs using sched_yield() and a kernel parameter controlling its behaviour.

posted on 2008-01-06 14:26 UTC in Code | 2 comments | permalink
do you have thoughts on why Volano went down? (or is down a good thing for volano?)
No, and I even have no idea what Volano is or does.

How big is int?

Roughly 24 days. That's the span of milliseconds a Java int can represent: 2^31 - 1 = 2'147'483'647 ms, which is about 24.8 days. Good to know when you have to be careful about int overflows in millisecond arithmetic, and when you can allow for sloppiness.
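
A quick sanity check (my snippet):
// Integer.MAX_VALUE milliseconds expressed in days: prints about 24.86
System.out.println(Integer.MAX_VALUE / (1000.0 * 60 * 60 * 24));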

posted on 2007-12-19 10:41 UTC in Code | 0 comments | permalink