Author Topic: Planet MySQL  (Read 11350 times)

Odbaårg: What's in a logo (inspired by the ODBA logo) #cls
« Reply #15 on: July 19, 2009, 12:00:49 AM »
Odbaårg: What's in a logo (inspired by the ODBA logo) #cls
18 July 2009, 8:24 pm

First day at the Community Leadership Summit. Kurt will blog separately about our being here soon. I just wanted to say that this quickly tossed-together unconference is a huge success, with a lot of the community leaders and intelligentsia present and networking. We constantly get questions about what is happening with MySQL, so even though we hadn't planned to, we ran a session, "What's up with MySQL", where we tried to explain our plans for the MariaDB community and the Open Database Alliance, but also to answer, as objectively as possible, any questions that came up. (The unconference rules strictly prohibit promoting any company, which Monty Program of course goes out of its way to obey.)

Oh, if you're in the Bay Area, you should definitely consider coming for the second day of this free conference.

Anyway, I'm wearing one of the new Open Database Alliance T-shirts Kurt had made. This reminded me that I have wanted to blog for a long time about the logo (which I had no part in making):

read more



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Percona BOFs at OSCON
« Reply #14 on: July 18, 2009, 06:02:42 PM »
Percona BOFs at OSCON
18 July 2009, 3:04 pm

Talks are great. However, I very much like the discussion and opinion-sharing atmosphere of Birds of a Feather sessions, so we host or co-host a number of BOFs at the coming OSCON conference.

Future of MySQL Forks, Branches and Patches is, I guess, a topic a lot of us are interested in. Monty was going to show up, and we should also see if we can get someone from Drizzle.

Is Enterprise Flash Ready for Prime Time? Flash is cool and hot these days. This is a discussion session, and I would really like to hear how well flash works for you, whether you're using it for storage, as a cache, or as part of your hybrid storage hierarchy.

Open Source Data Management is a BOF about the open source tools you're using to deal with your data: storage, caching, analytics. I am especially interested in hearing about unorthodox uses of the software and the successes of new technologies.

I also expect there will be a number of other BOFs we'll attend as time permits. Monty was going to organize a BOF on the Open Database Alliance, though I have not seen it listed yet.

Entry posted by peter



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Using an empty database (learn from your mistakes)
« Reply #13 on: July 18, 2009, 06:02:06 AM »
Using an empty database (learn from your mistakes)
18 July 2009, 4:20 am

I’ve been working on various MySQL-related issues and maintenance procedures, some of which have not gone according to plan. Here is a recipe that may help you avoid wasting a lot of time, especially if your database is large.

In order to do some of these tests, test against a server configured identically to the one you plan to work on, but which has no data. That is, the mysql database needs to be complete, but the other databases need to be dumped with the --no-data (-d) option. Don’t forget to also include any triggers or stored routines.
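For illustration, here is a minimal sketch of building such a schema-only copy with mysqldump; the host and database names are just examples:

# Dump the mysql system database in full (users, privileges, ...):
mysqldump -h prod-host -u root -p mysql > mysql_full.sql

# Dump an application database schema-only: --no-data (-d) skips the rows,
# --routines and --triggers carry over stored routines and triggers, and
# --databases includes the CREATE DATABASE statement:
mysqldump -h prod-host -u root -p --no-data --routines --triggers --databases appdb > appdb_schema.sql

# Load both into the identically configured, empty test server:
mysql -h test-host -u root -p mysql < mysql_full.sql
mysql -h test-host -u root -p < appdb_schema.sql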

Now run the “procedure” on this “empty instance”. As it has no data, most things run very quickly, so if you have issues you can repeat the procedure in no time. Restoring the instance is easy too, as it’s tiny. This makes the whole procedure scriptable, and you can be confident in the results.

Once you are satisfied that it works, you know what will happen, and you can run the SAME procedure on the real instance with a lot more confidence.

This procedure, while it does require you to build an extra instance for testing, is actually a much safer way to do many tests. It doesn’t help in certain scenarios where the content of the tables is important, but it does save you a lot of wasted time.

You may still need to estimate how LONG certain tasks will take, and that must be done separately, but it is usually easier once you know what you need to measure.

It would certainly have saved me a lot of time when doing various 5.0 to 5.1 upgrades, some of which gave me problems, and also with a simple thing like a failed ALTER TABLE that ran on a 50GB table for 18 hours and failed at the very end due to a foreign key constraint issue. That problem needs to be addressed by MySQL, but to be fair to them, I shouldn’t complain about the 18 hours I wasted, because I did not follow the procedure I suggest above.



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

My OSCON 2009 Session: Taming your Data...
« Reply #12 on: July 18, 2009, 12:01:31 AM »
My OSCON 2009 Session: Taming your Data...
17 July 2009, 7:53 pm

Yes!

Finally, it's here: in a few hours, I will be flying off to San Francisco to attend OSCON 2009 in San Jose, California. This is the first time I'm attending, and I'm tremendously excited to be there! The sessions look very promising, and I'm looking forward to seeing some excellent speakers. I expect to learn a lot.

I'm also very proud, and feel honoured, to have the chance to deliver a session myself. It's called Taming Your Data: Practical Data Integration Solutions with Kettle.

Unsurprisingly, I will be talking a lot about Kettle, a.k.a. Pentaho Data Integration. I talked about Kettle recently at the MySQL Users Conference and, more recently, at a MySQL University session. Those sessions focused mainly on how Kettle can help you load a data warehouse.

But there's much more to this tool than just data warehousing, and in this session I will be exploring rougher ground, like making sense of raw IMDb text files, loading and generating XML, clustering, and more. This session will also be much more of a hands-on demonstration than the Sakila sessions were. If you're interested and you are also attending, don't hesitate to drop by! I'm looking forward to meeting you :)

And because the topic of the session relates to my upcoming book, "Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL" (ISBN: 978-0-470-48432-6, 600+ pages, list price $50.00), my publisher Wiley decided to throw in a little extra. Yup, that's right: I've got discount coupons for the book, so if you are interested in picking up a copy, or if you just want to give one away to a friend or colleague, come find me at my session (or somewhere else at OSCON) and I'll make sure you get one. Thanks Wiley!!

Anyway - I'm hoping to meet you there: see you soon!!!



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Log Buffer #154: a Carnival of the Vanities for DBAs
« Reply #11 on: July 18, 2009, 12:01:31 AM »
Log Buffer #154: a Carnival of the Vanities for DBAs
17 July 2009, 1:01 pm

Welcome to the 154th edition of Log Buffer, the weekly review of database blogs.  Let’s dive right in, shall we?

Oracle

On Radio Free Tooting, Andrew Clarke says, “No SQL, so what?” taking as his keynote something Nuno Souto said: “ . . . Google, Facebook, Myspace, Ning etcetc, and what they do as far as IT goes, are absolutely and totally irrelevant to the VAST majority of enterprise business.”

Aman Sharma gives an overview of Library Cache on Arista’s Oracle Blog.

On The Dutch Prutser’s Blog, Harald van Breederode gives a lesson in rolling cursor invalidation. He writes, “ . . . I call DBMS_STATS to create a histogram and I expected that dependent cursors would be marked INVALID afterwards but this simply didn’t happen.  . . .  Somehow I forgot, or maybe completely missed, the fact that cursors are invalidated in a rolling fashion since the introduction of Oracle10g.”

Miladin Modrakovic looks into another 10g-ish thing—Wide Table Select (Row Shipping): “Row shipping is feature which allows row data from the datablock to be shipped directly to the client.  . . .  Aperently [sic], this feature had some issues in earlier version of 10g and fix was to disable the ‘row shipping’ feature by default. Oracle introduced ‘fix’ in version 10.2 . . . ”

Dominic Brooks reports a gotcha in application contexts, “ . . . one of those feature behaviours which isn’t surprising, but you probably wouldn’t think about it unless you saw it.”

Who should tune SQL: the DBA or the developer?  So asks Iggy Fernandez on his blog, So Many Oracle Manuals, So Little Time, as he tries to reconcile two mutually exclusive assertions: “Generally, only the author of the SQL has all of the knowledge required to tune the SQL,” and “ . . . you do not need to understand other people’s SQL to tune it!”

SQL Server

Whoever should do the tuning, they will appreciate Brent Ozar’s SQL Server Index Tuning Tip: Identify Overlaps.  Brent says, “These tips and tricks pay off more than pouring money into hardware that might look good sitting in the datacenter, but doesn’t really make the application significantly faster.”

On Claypole’s World, James Rowland-Jones was calculating the ROI of DRY SQL vs FLY SQL.  “Pages of SQL obscured by layer upon layer of view definitions is horrendous to have to unpick when there is an issue.  However, investing time in making SQL DRY may not give you any real performance benefit.  . . .   Just remember that it could and it could also make things worse.  . . .  However, a little look at making your SQL FLY with a spot of index tuning might be just the ticket!”

John Paul Cook wants to know: would you like to be able to do minimally logged deletes?  “This is being advocated,” he writes, “for testing.  . . .  What is being sought is to be able to quickly delete large amounts of data without bloating the transaction log.”

Kevin Kline and his readers kick around loop optimization, when Kevin asks: Why Do I Keep Seeing This Mistake? “I’m still surprised that otherwise experienced and competent database programmers are still embedding very stable elements of their code inside of extensive looping operations rather than outside of them.  Thoughts?”

Andy Leonard has been thinking about the profession of the DBA, in light of non-relational DBMSs, and concludes that it’s a question of Art vs. Science. “I consider the database profession a craft. That makes it part art and part science,” writes Andy.

MySQL

In his item on helping The US Department Of Justice, Monty says, “I was yesterday, for the second time, on a call with the [DOJ] regarding how the Oracle/Sun deal could affect Open Source software, in particular MySQL and Java.  . . .  For those that are worried about the future of OSS software as part of the . . . deal, and the affect [sic] it may have on their business, the [DOJ] is encouraging companies that are dependent on MySQL/Java to contact them . . . ”

Mark Callaghan responds, “I don’t always agree with Monty but this time he is right. Now is the time to provide feedback on the merger.” Listen to Monty, he exhorts.

On the MySQL Performance Blog, Morgan Tocker offers three key things to know about moving MySQL into the cloud. “The question “what problems will I have when migrating to the cloud” gets asked often enough. If by cloud you mean Amazon EC2, then from a technical perspective there isn’t much that changes.  . . .  Having said that, there’s still a few potential gotchas . . .  If you can live with these three things, then hopefully your migration should work smoothly.”

While we’re on new ways of doing MySQL, Brian “Krow” Aker and his readers have a very worthwhile discussion of Drizzle, views and triggers. “In Drizzle right now we do not have views,” writes Brian.  “There are plans to add views which never ‘materialize’, but that is still a couple of milestones off.  . . .  One of the problems when talking about views is that the word “materialize” has been over used.  . . .  To ‘materialize’ a view, means that you take the view definition, turn it into a temporary table, and then join it against a query. In Drizzle we consider this a ‘no no’.”

Here’s a related item from Justin Swanhart: how to support COUNT(DISTINCT expression) expressions with Flexviews.  Justin writes, “I am seriously considering porting Flexviews directly into Drizzle. I’m excited about replication plugins as this may make it easy to produce the necessary table change logs to support the materialization logic. Drizzle is becoming completely plugin oriented, so eventually materialized view rewrite and other cool features could be implemented too as optimizer plugins.”

Ronald Bradford posts a lazyweb item understanding InnoDB MVCC. “I wanted to clearly document this situation so I could then seek the advice of the guru’s in InnoDB Internals such as Mark Callaghan, Percona and the Innodb development team for example. I’m happy to say I’m not a MySQL expert in every aspect of MySQL, specifically internals where I have not had the detailed time to read the code, and understanding all internal workings.”

PostgreSQL

The Postgres OnLine Journal has another item on the new PostgreSQL 8.4: Faster array building with array_agg. “This takes a set of elements, similar to what COUNT, SUM, etc. do, and builds an array out of them. This approach is faster than the previously used array_append or array_accum, since it does not rebuild the array on each iteration.”
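For illustration (the table and columns here are invented, not from the post), the 8.4 aggregate looks like this:

-- PostgreSQL 8.4: build one array per group in a single pass
SELECT user_id, array_agg(tag) AS tags
FROM user_tags
GROUP BY user_id;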

Greg Sabino Mullane was on the 8.4 beat, too, with his piece on Bucardo and truncate triggers. Greg writes, “One of the features that hasn’t gotten a lot of press, but which I’m excited about, is truncate triggers. This fixes a critical hole in trigger-based PostgreSQL replication systems, and support for these new triggers is now working in the Bucardo replication program.”

Peter Eisentraut published solid-state drive benchmarks with latency measurements, following up an earlier item on SSD benchmarks and the write cache.

select * from depesz; and its readers have a thorough discussion of getting list of unique elements.

Selena Deckelmann has posted a short summary with some nice pics, of her last day in Nigeria, where she has been giving some PostgreSQL training. The comments are quite fascinating, too.

IBM DBMSs

Conor O’Mahony brings news of interesting developments at IDUG: “If you visit the new IDUG Web site, you will see a nice new look-and-feel. However, you will also notice that there is a lot of new technical content available.”

Will Favero reminds us that Information on Demand (IOD) 2009 is on its way.  “[It] is going to be held in Las Vegas again this year on October 25-29, 2009 at the Mandalay Bay Hotel. I think you will find that this year’s IOD conference in Las Vegas will be bigger and better than any of the previous IOD conferences.  . . .  It was recently also announced that the keynote speaker at this year’s conference will be Malcolm Gladwell . . . ”

Finally, from Henrik Loeser, here’s some fun with databases, which I hope will inspire you to practice your craft with care, knowing that it does indeed make a difference, even if it’s only the difference between cereal and silverware.

Till next time!



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Could MySQL be pigeon holed by Oracle love?
« Reply #10 on: July 17, 2009, 06:01:23 PM »
Could MySQL be pigeon holed by Oracle love?
17 July 2009, 8:30 am


A while ago, about 16 years ago now, I had a desktop computer. It wasn't a PC; it was an Acorn, with an ARM processor in it. Despite the rest of the world starting to go crazy for the new Pentium chip, the Acorn with its ARM processor could run rings around it in terms of computing power. And it was simple and easy to use; I used to write applications in assembly code for it (and it didn't have a fan!).

Not too long after that, Acorn went under, but ARM was already off on its own to find a new market. Its RISC technology was licensed in many different ways. Despite some isolated cases where the technology was again used on the desktop, or even in supercomputers, those licensing it largely didn't need another desktop processor. They needed a mobile processor, which ARM's technology was also great for. Over time, ARM processors have become well known for their mobile capabilities, while their desktop and supercomputer capabilities became less widely known (or cared about).

So why am I telling you all this?

Well, as we all know, Oracle has yet to make any public statements about its intentions for MySQL. Sitting in the Hannah Montana movie with my kids (don't ask) tonight, I was thinking about possible scenarios that could play out. One of the interesting ones is what happens if Oracle positions MySQL as an entry-level database, or as a small-scale web backend database, and showers it with love, attention, and sales & marketing effort in that space.

Is it possible that MySQL could start to become known only for that limited capability, and that recognition elsewhere could start to fade? Would it matter? Would this make sense, and how would it be advantageous to Oracle?

Rhetorical questions really, as I am just thinking out loud, just thinking out loud...





Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Disabling binary logging when restoring a MySQL dump
« Reply #9 on: July 17, 2009, 06:01:22 PM »
Disabling binary logging when restoring a MySQL dump
17 July 2009, 8:30 am

There is a cool option for mysqlbinlog for disabling the binary log when doing recovery from binary logs, namely --disable-log-bin. Now, one would think it would also be available for something like mysqldump, or even the mysql CLI? Nope.

There are various ways for doing this, here is one:

shell> (echo "SET SESSION SQL_LOG_BIN=0;"; cat dump.sql) > dump_nobinlog.sql

Obviously, a bit of a pain for really big files, and when dumping to multiple files.
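One way to avoid writing a second full-size file is to stream the statement and the dump straight into the client. A sketch, assuming the restoring account has the SUPER privilege (required to set SQL_LOG_BIN) and a hypothetical target database name:

shell> (echo "SET SESSION SQL_LOG_BIN=0;"; cat dump.sql) | mysql -u root -p targetdb

This keeps binary logging disabled for that one session only, without touching other clients.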

So what is your favorite way for disabling binary logging when restoring a MySQL dump?



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Introducing Incline - a synchronization tool for RDB shards
« Reply #8 on: July 17, 2009, 06:01:22 PM »
Introducing Incline - a synchronization tool for RDB shards
14 July 2009, 1:51 am

For the last few weeks, I have been writing a tool called "Incline," a program that automatically maintains consistency between sharded MySQL databases.  The aim of the software is to free application developers from hand-writing code to keep consistency between RDB nodes, so that they can concentrate on writing the application logic.

Background

Denormalization is unavoidable in a sharded RDB environment.  For example, when a message is sent from one user to another, the information should be stored on the database node where the sender of the message belongs, and on the node where the receiver does.  In most cases, this denormalization logic is hand-written by web application developers, and it has been a burden in creating large-scale web services.  Incline takes that load off developers.  By reading definition files, Incline keeps the tables in a sharded MySQL environment in sync, providing a trigger generator and a replication program that, synchronously or asynchronously, reflects changes to source tables into materialized-view-like tables.

Installing Incline

Incline is written in C++ and uses autotools for building.  However, since I have not yet added automatic lookup for libmysqlclient, you need to specify its location manually to build Incline.  My build procedure is as follows.

% svn co http://kazuho.31tools.com/svn/incline/trunk incline
% cd incline
% autoreconf -i
% ./configure CFLAGS=-I/usr/local/mysql/include/mysql CXXFLAGS=-I/usr/local/mysql/include/mysql LDFLAGS='-L/usr/local/mysql/lib/mysql -lmysqlclient'
% make
% make install

Defining Replication Rules

Replication rules for Incline are written in JSON files.  Consider creating a Twitter-like microblog on a sharded environment.  It would consist of four tables, each of them distributed across the RDB shards by the user_id column.  The "tweet" and "following" tables are updated by user actions, while the "followed_by" and "timeline" tables are denormalized tables that need to be kept synchronized with the former two.

CREATE TABLE tweet (
  tweet_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  body VARCHAR(255) NOT NULL,
  ctime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (tweet_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE following (
  user_id INT UNSIGNED NOT NULL,
  following_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (user_id,following_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE followed_by (
  user_id INT UNSIGNED NOT NULL,
  follower_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (user_id,follower_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE timeline (
  user_id INT UNSIGNED NOT NULL,
  tweet_user_id INT UNSIGNED NOT NULL,
  tweet_id INT UNSIGNED NOT NULL,
  ctime TIMESTAMP NOT NULL,
  PRIMARY KEY (user_id,tweet_user_id,tweet_id),
  KEY user_ctime_tweet_user_tweet (user_id,ctime,tweet_user_id,tweet_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Incline uses two JSON files to define a replication rule.  The first file, microblog.json, defines the mapping of columns between the "source" table(s) and the "destination" of the replication, using the directives "pk_columns" (primary key columns) and "npk_columns" (non-primary-key columns).  When merging two or more tables into a single destination, the "merge" attribute defines the inner join condition(s).

microblog.json

[
  {
    "source"      : [ "tweet", "followed_by" ],
    "destination" : "timeline",
    "pk_columns"  : {
      "followed_by.follower_id" : "user_id",
      "tweet.user_id"           : "tweet_user_id",
      "tweet.tweet_id"          : "tweet_id"
    },
    "npk_columns" : {
      "tweet.ctime" : "ctime"
    },
    "merge"       : {
      "tweet.user_id" : "followed_by.user_id"
    },
    "shard-key"   : "user_id"
  },
  {
    "source"      : "following",
    "destination" : "followed_by",
    "pk_columns"  : {
      "following.following_id" : "user_id",
      "following.user_id"      : "follower_id"
    },
    "shard-key"   : "user_id"
  }
]

The second file, shard.json, defines the mapping between user_id and RDB nodes.  Range-based sharding (on an integer column) is specified in the example.  Other algorithms currently supported are range-str-case-sensitive and hash-int.

shard.json

{
  "algorithm" : "range-int",
  "map"       : {
    "0"    : "10.0.1.1:3306",
    "1000" : "10.0.1.2:3306",
    "2000" : "10.0.1.3:3306",
    "3000" : "10.0.1.4:3306"
  }
}

With these definitions, the tables will be synchronized in the directions described in the figure below (which illustrates only two nodes).



Running Incline

To run Incline, queue tables should be created and database triggers need to be installed on each RDB node.  This can be done by calling the "incline create-queue" and "incline create-trigger" commands for each node, and the setup is complete.

% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.1 create-queue
% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.1 create-trigger
% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.2 create-queue
% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.2 create-trigger
...

The installed triggers take care of synchronizing the denormalized tables within each node.  The next step is to run the forwarder (the replicator between RDB nodes) for each node, so that the views are kept in sync.  This is done by calling "incline forward".  The forwarding logic is defined so that the data stored on the RDB nodes becomes eventually synchronized, even if one of the RDB nodes or the forwarder unexpectedly dies.  Using daemontools or the like to automatically (re)start the forwarder is desirable.

% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.1 forward &
% incline --mode=shard --source=microblog.json --shard-source=shard.json --database=microblog --mysql-host=10.0.1.2 forward &
...

And that's it.  The two tables "followed_by" and "timeline" are updated automatically whenever the "tweet" or "following" table is modified.  Application developers no longer need to take care of shard consistency; all that needs to be remembered is that changes should be written only to "tweet" and "following" (direct modification of "followed_by" and "timeline" can be disabled by using different access privileges for the web application and the Incline forwarder).
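For instance, a hypothetical privilege split along those lines (the account name and host pattern are illustrative, not part of Incline):

-- The web application may modify only the source tables...
GRANT SELECT, INSERT, UPDATE, DELETE ON microblog.tweet TO 'webapp'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE ON microblog.following TO 'webapp'@'%';
-- ...and may only read the denormalized ones, which the forwarder maintains:
GRANT SELECT ON microblog.followed_by TO 'webapp'@'%';
GRANT SELECT ON microblog.timeline TO 'webapp'@'%';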

Next Steps

Having started writing the code late last month, Incline is still immature; its current status is somewhere around early beta.  In the coming months, I plan to polish and optimize the code, add PostgreSQL support (currently Incline works only with MySQL), and extend it so that it can be used together with Pacific, a framework I have been working on that provides dynamic addition and removal of nodes in a sharded RDB environment.  Thank you for reading, and please stay tuned.



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

TYPE= disappears again (MySQL 5.4.4)
« Reply #7 on: July 17, 2009, 12:06:52 PM »
TYPE= disappears again (MySQL 5.4.4)
17 July 2009, 12:42 am

I like the 5.4 developments, overall. It has useful stuff and is being developed and released at a reasonable pace. Good progress. While perusing the MySQL 5.4.4 changelog, one particular change drew my attention, since it has been (re)appearing since 2006: the removal of the TYPE= keyword, which has been obsolete since MySQL 4.1 in favour of the ENGINE= syntax in CREATE/ALTER TABLE.
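For reference, the two spellings side by side (the table itself is just an example):

CREATE TABLE t (i INT) TYPE=MyISAM;    -- obsolete spelling, kept since 4.1 for backward compatibility
CREATE TABLE t (i INT) ENGINE=MyISAM;  -- the current syntax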

While on the surface it may seem OK to remove the obsolete keyword, there are quite a few apps out there that use it and that cannot be changed. These will now be unable to use MySQL 5.4 or beyond. I filed this as a bug in 2006, MySQL bug#17501. If you’re interested in the “history of reappearance”, take a peek at the comments and their timeline. I just added a new comment to note the 5.4.4 change.

I suppose a new developer comes along and reckons that removing this keyword is a good idea. But really, why do we need to remove one keyword from the parser? Because that’s all it is. And removing it really does break apps.

Let’s not. Again. Please! And this time, please put a comment in the parser source files referring to the bug#, so that this doesn’t get recycled at a later date. Please just leave it in.



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Percona talks at OSCON
« Reply #6 on: July 17, 2009, 06:11:03 AM »
Percona talks at OSCON
16 July 2009, 10:57 pm

OSCON 2009 is taking place next week, and we have a bunch of talks we're presenting. I am presenting Full Text Search with Sphinx, MySQL Community Patches and Extensions, and Goal Driven Performance Optimization.

Vadim and Ryan have a talk, XtraDB Open Source Storage Engine for MySQL.

This year OSCON is taking place in Silicon Valley, which is good for me as I do not have to spend the whole week away from home. Though I will also miss Portland, which is green and beautiful in summer.

Entry posted by peter



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

How NPR is Embracing Open Source and Open APIs
« Reply #5 on: July 17, 2009, 06:11:02 AM »
How NPR is Embracing Open Source and Open APIs
16 July 2009, 8:05 pm

News providers, like most content providers, are interested in having their content seen by as many people as possible.  But unlike many news organizations, whose primary concern may be monetizing their content, National Public Radio is interested in turning its content into a resource for people to use in new and novel ways as well.  Daniel Jacobson is in charge of making that content available to developers and end users in a wide variety of formats, and has been doing so using an open API that NPR developed specifically for that purpose.  Daniel will talk about how the project is going at OSCON, the O'Reilly Open Source Convention. Here's a preview of what he'll be talking about.

James Turner:  Can you start by explaining what NPR Digital Media is and what your role with it involves?

Daniel Jacobson:  Sure.  NPR is a radio organization, of course, and the Digital Media Group, of which I'm a part, handles, essentially as I describe it, everything that is publishable by NPR that does not go to a radio.  So that includes the website, podcasts, API, mobile sites, HD radios, anything that has some sort of visual component to it.  So Digital Media as a group is responsible for producing that content, producing all of those distribution channels, managing all of those relationships.

James Turner:  And what is your particular role there?

Daniel Jacobson:  I manage the application development team that is responsible for all the functional aspects of all of the systems, which includes  our CMS, all of the templating engines for the website, for the API, for the podcasts, all of the engines that drive that.

James Turner:  Now NPR is an organization that consists of a lot of member stations kind of flying in close formation.  What's your relationship with the content producers? To what extent do they have their own stuff, and to what extent do you work together?

Daniel Jacobson:  Those member stations are really exactly that; they are members of NPR.  They essentially buy NPR programming.  They're distinct organizations from us.  NPR is a content producer and distributor.  They buy our programming and broadcast it out to the world.  They also have their own corresponding web teams that can take NPR content and also produce their own content and create their own websites.  So in the Digital Media Team, we take a lot of pride and effort in providing services that help those member stations better serve their communities and their listeners and audiences, using NPR content and using their own content.  We work with them to try and satisfy their missions.  And to the extent that they need NPR services or content, we work hard to try and provide those.  The API is one massive step, I think, in making it much easier for them to do what they need to do without a whole lot of intervention from us, where previously they would have to pull in content in much more arduous ways.  So the API, I think, is a step in the right direction to make it more of a self-service model.

James Turner:  Since you've mentioned the API, that's what you're going to be talking about at OSCON.  We've already talked to the New York Times and the way they're opening up their content through APIs.  What are you doing with yours?

Daniel Jacobson:  Well, we launched ours formally at OSCON last year.  And at that time, we essentially opened up our entire archive.  So anything that you can get on npr.org is available through the API, to the extent that we have the rights to distribute it.  There are some rights restrictions, for example, for receiving photos or stories from sources that we have not cleared rights to redistribute.  Those are getting suppressed through a rights filtering engine on our API.  Everything else that you can get on npr.org, you can get through the API.  That includes full text.  It includes images, audio, video, everything like that.  Throughout the last year, we have added more features.  We included the layer of "mix your own podcast", for example, which allows people to not only get the content in audio form, but also to download it as a podcast-type item.  And all of that is available through search terms or totally customized queries.  So what the API really does is it enables people to take the content, make widgets, or do whatever they want with essentially everything that is on npr.org and get to audiences that we are not getting to.

James Turner:  This probably isn't as much of a factor for you because, in some ways, you're not dependent on the same kind of revenue streams as a lot of news and content providers. But when you provide that kind of access, isn't there a bit of a fear that you can get even more to the point where portal sites and aggregators like Google can essentially steal your traffic?

Daniel Jacobson:  Well, I'm glad you brought that up.  We do have terms of use that keep people from essentially creating another archive of NPR content.  So we're not encouraging people to just go forth and take our content and do everything that we're doing.  We want people to use this in a way that is providing a value-add.  We don't want them to archive and duplicate our stuff.  We also want it to be for personal or noncommercial use.  Anybody who wants to use it for commercial use, which would include Google or whomever else, needs to talk to us and set up an arrangement, some sort of contract or agreement with us.  So that said, we do want to encourage people to take this content and do very creative things.  And that was the purpose of opening up.

James Turner:  So it's been open for a while.  What are some of the things you've seen people do with it?

Daniel Jacobson:  Well, one of the most interesting things is we call it the Flubacher app which is essentially an iPhone application that somebody in the world built.  His name is Brad Flubacher.  And he's essentially taking the API content and putting it into this iPhone app.  And you can stream content within the iPhone app, all of our programs or our topics.  And when I say stream, it's essentially doing API queries every time you make a request.  So he's not archiving all of this content.  It's just basically a pass-through engine.  It's been very popular and a very interesting application.  

A lot of our member stations are doing very creative things.  Minnesota Public Radio, for example, just launched their new site, and they're making extensive use of the API.  North Country Public Radio is another one; they've said that, I think, 50 percent or so of their pages have NPR API content on them.  So our member stations are making heavy use of it.

I've seen a lot of instances of people making code wrappers.  There's a Ruby on Rails code wrapper for our API.  There's a Perl one that someone just created.  So a lot of people are out there doing very clever things with it.  And we're just looking forward to more and more uses.

James Turner:  So obviously you're familiar and probably a fan of open source.  How is NPR using open source technologies?

Daniel Jacobson:  So internally, with the exception of our database, all of our systems are employing open source technologies.  I assume that's what you mean: what open source technologies are we using?  So our database is Oracle, and our plan is to migrate that to MySQL.  But over the last couple of years, we've really adopted open source more and more.  All of our coding engines are open source; we're not using any proprietary application servers or anything like that.  It's all open source, Apache, all that kind of stuff.

It's very important to us that we keep the open source model. And as we look more towards open source, we're kind of changing our vision to be less of a consumer of the open source products and more of a contributor.  And I think that's what you see with the API.  It's the first step to say, "How can we contribute back to this community and give them the things that we're good at?"  Which would be content, in this case.  And more and more, we're going to start looking towards opening up our applications and saying, "Here, go fork this.  Go make interesting things with it."  And I think over the next couple of months you're going to see a lot more open source applications coming from NPR.

James Turner:  As I mentioned earlier, the New York Times has got their open API.  You have an API.  Is there any effort going on to try to standardize for this type of content a single API that would allow people to use common code throughout all of these data sources?

Daniel Jacobson:  That's a great question.  I'm involved in a resource group for PBCore, as an example.  PBCore was really set up to be a public broadcasting core, but there are a lot of other organizations that are starting to adopt it as more of a standard for passing data back and forth between the organizations.  I'm not sure if that's going to be as pervasive in the overall marketplace.  With respect to New York Times and other organizations that are outside of that circle of PBCore, we actually haven't had many conversations about formalizing some sort of standard across us.  I think that's a very interesting idea.  That said, there are already a host of standards out there in the world.  And NPR has tried with our API to really make the API adhere to as many standards as possible.

We have our own custom tagging language, which we call NPRML, which we built, and which is essentially the language, or the XML structure, that closely mirrors our content.  But we can now put all of our content in media RSS or podcast RSS or Atom; I think there are a total of eight or nine outputs.  And next on the docket will be NewsML and PBCore, or probably PBCore first.  So we're trying to make our content as standards compliant as possible.

I think your question is, is there some other standard that would allow richer content to be standardized across all of these news organizations.  It's a really interesting question.  I don't know that all of the organizations are going to have the philosophy of opening up as much as NPR has.  So, for example, New York Times does not offer full text content in theirs; we do.  Our source is really heavily weighted towards audio and theirs isn't.  So there are going to be some differences across them that make it a little bit more challenging.

But we are collaborating a lot with these organizations.  I also want to add that New York Times and NPR will be hosting a mash-up camp at OSCON on Friday.  And this is an example of one of those steps where we're really trying to play nicely with all of these other organizations and trying to unify in front of the public, you know, "We are both media organizations.  We want to get everybody kind of focused on the same concepts."  I think your proposal of a next step towards a standard of process might come down the road.

James Turner:  What do you see coming on the horizon both for NPR and if you want to put on your oracle hat, more generally in the news business?

Daniel Jacobson:  Well, for NPR, digital is obviously very important as it is for most other media organizations.  And over the next several months, you're going to see a lot of changes for NPR.  We are focusing a lot of energy towards distribution channels, portability.  I think portability is a huge factor in this marketplace.  And you're asking about down the road. My view is I really see webpages and websites, browser-based, PC-based experiences, they're going to start diminishing in importance.  I don't know exactly what the timeframe is.  It could be a couple of years.  It could be five years.  I don't know.  But at some point, it's going to plateau and mobile's going to surpass it.  And having content be portable is going to be paramount.  

So I think that NPR's philosophy is going to mirror that.  We're putting a big emphasis on portability.  That's why the API is so critical, not only for end users in the world but also for all of our business needs.  We spend a lot of time with business partners, getting them to understand the API so that they can more easily tap into our content and services in their environment.  So it's all about distribution at this point for us.  And I think over the next three to five years, you're going to see a lot more people consuming NPR content on the go rather than in front of the computer.

James Turner:  It sounds like you're going to be fairly busy at OSCON, but is there anything beyond the stuff that you're participating in that's caught your eye or has you excited?

Daniel Jacobson:  I will be honest.  I'm going to be at OSCON for about a day-and-a-half, because we have some major launches later this month.  So I've got to swing in, do my stuff, and swing out, which is regrettable.  But there are a couple of sessions that I did notice.  I think there were some talks about microformats and, of course, portability, and HTML 5.  Those were the things that caught my eye.

James Turner:  All right. Well, Daniel, thank you so much for taking the time to talk to us.  And it'll be great to be hearing more from NPR.

Daniel Jacobson:  Great.  Thank you so much.





Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Auto-create relationships for MyISAM tables in MySQL Workbench
« Reply #4 on: July 17, 2009, 06:11:02 AM »
Auto-create relationships for MyISAM tables in MySQL Workbench
16 July 2009, 6:10 pm

Over a chat on the #workbench IRC channel, Collin Cusce has written a handy little Lua script to automatically create relationships (through foreign keys) for his reverse engineered database.

Reverse engineering the DB to import the tables into a diagram was easy, but their database used no "hard" foreign keys, and an ER diagram without relationships wouldn't be of much use. So one option would be to individually connect each foreign key column pair by hand, using the relationship picking tool. But doing that for the thirty-something tables in the database would be too much work, and something could be overlooked and left out. The other option was to automate it, since all such foreign keys followed a naming convention like _id or fk_id. And that led to the following (slightly modified) script that will do just that.

To use the script, run it with the Tools -> Run Script… menu command. It can also easily be modified to suit your needs, in case your DB follows a different naming convention.



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Delay in Release of Summer Issue OS DB Magazine
« Reply #3 on: July 17, 2009, 12:00:23 AM »
Delay in Release of Summer Issue OS DB Magazine
16 July 2009, 6:06 pm

Due to circumstances beyond my control (evidently involving the power supply of the computer handling email for one of the authors), the magazine has been delayed for a day or two while things get sorted out. Sorry about that, but it is going to be worth the wait!



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Limit on General Query Log Size?
« Reply #2 on: July 17, 2009, 12:00:23 AM »
Limit on General Query Log Size?
16 July 2009, 5:39 pm

I ran into a rather interesting situation with a client today. It seems that the mysqld daemon stopped with no errors in the error log. I ran through the obvious problems (not enough disk space, memory utilization, etc.) and came up empty.

The server was running MySQL 4.1 on Fedora Core 5. We can save the discussion about running your database on reasonably up-to-date hardware and operating systems for another post. Core 5 runs the 2.6 GNU/Linux kernel along with the ext3 filesystem, so the thought was in the back of my mind that it might be an issue with file size. Well, as Sun's own documentation shows, this shouldn't be the case.

During the investigation it was discovered that the general query log was not only enabled but 16 gigabytes in size. Aside from being so large that it was absolutely useless for anything, it was the obvious culprit for a too-large file. After zapping the log file, it was possible to start the MySQL server successfully. It was then that I looked up the previously referenced information about file size limits in the kernel and in MySQL itself, and found that 16GB shouldn't have been a problem.

Checking through http://bugs.mysql.com didn't uncover anything. I have already thought of how I can reproduce the problem fairly painlessly, if it's a general problem and not something specific to Fedora Core 5. But before I spend several hours running tests, I wanted to see if anyone else has heard of this issue.

The rather obvious lessons that the client should take away from this:

1) There was no reason to have the general query log on in the first place; it should have been turned off.

2) If you need to use either slow or general query logging, take the time to set up log rotation (a minimal sketch follows below).
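For instance, a manual rotation along these lines; the paths and file name are illustrative, and on these older servers FLUSH LOGS is what makes mysqld reopen its log files:

# Move the active general log aside, then have the server reopen its logs:
mv /var/lib/mysql/hostname.log /var/lib/mysql/hostname.log.0
mysqladmin -u root -p flush-logs
# Compress the rotated copy so it never grows to 16GB again:
gzip /var/lib/mysql/hostname.log.0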

Ever seen this before? If so, please take 30 seconds and let me know; I would really appreciate it. And who knows, it could be one of your servers down the road that is saved, if it really is a bug. Of course, you would never let your log files get that large, right? Right?



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!

Planet MySQL
« Reply #1 on: July 17, 2009, 12:00:23 AM »
Cross platform GUI portability, part 2
16 July 2009, 5:21 pm

So, if you read my first post on the subject of cross platform GUIs, you probably think I missed one aspect: cross platform GUI toolkits. Don't these, in one shape or another, solve this problem? If you ask me, the answer is no, and I will now set out to make my point.

The problem is that the main issue persists: do you want your app to look like Windows on Windows and like a Mac on OS X, always, for all intents and purposes? Or do you want your application to look and feel like "your application" on all platforms? Whichever path you choose, there are solid reasons for both approaches. And that is the issue. wxWindows, Java, wxWidgets: all fine technologies, but they do not solve the basic issue.

Is the solution to make the GUI adaptable to both approaches on all platforms, and have it configurable to support all these aspects? In a sense, I think that is true, but on the other hand I feel that the effort needed to achieve this is terribly counterproductive. It really is. It is cool and it looks nice, but in the end it does little to drive technology forward, in a philosophical sense.

I have seen a few commercial toolkits doing this, with great technical success. The drawbacks:

1) These toolkits are usually expensive.

2) Now you have a third API to program against: not Win32, not OS X, but something else. Not to mention Gnome or KDE.

3) What good does it do your application, really? I have seen applications built with these toolkits expose a 100% Windows look and feel on a Macintosh! And it looks real neat, and achieving a Windows look-and-feel on a Mac is not an easy task; these toolkits are advanced. But the value for the end user, once the interface they want to use is set? Not much.

4) To an extent, this reminds me of the 1980s and early 1990s mix of different networking technologies, when some vendors were pushing cross-networking APIs. Useful? Well, sort of. Valuable there and then? Yes, to an extent. Useful in the long run? Nope, definitely not at all. And then you have all this code, when you realize that TCP/IP is all you need to care about, written not against a TCP/IP library but against some other kind of weirdo library that has a cost, and that provides you with cross-network portability across TCP/IP, IPX/SPX, Banyan Vines and DECnet. Which would be useful if it weren't for the fact that no one is using anything other than TCP/IP.

Things aren't even close to that situation with GUIs, and I will take a shot at a cross platform wxWidgets-based app later this year. But for now, I am pretty much convinced that no matter how much I try, I have to make compromises between functionality, OS/GUI integration, usability, and code overhead.

No, I am NOT going to sprinkle #ifdefs across my code. Yes, I know I need to make compromises. Yes, I am aware that my knowledge of Win16 & Win32 is of limited use. Despite this, I want to try it. And to ease the minds of my MyQuery users: no, the target for this will NOT be MyQuery, but something else...

/Karlsson

About to go to bed.



Source: Planet MySQL

================================
This post was created by the osCommerce University News Bot.  Feel free to reply, attach polls, etc -- but do not hold the osCommerce University responsible for the content of the post itself.  PM the Administrator for SPAM, thanks!