Community,
With more than 200 events submitted and approximately 80 slots to fill, this has been the most difficult schedule to arrange in the history of PostgresConf. We wanted to include the vast majority of the content we received. That level of community support is what we work so hard to achieve, and we are thankful to the community for supporting PostgresConf. There is no doubt that the number one hurdle the community must overcome is effective access to education on Postgres. The US 2018 event addresses this with two full days of training and three full days of breakout sessions, including the Regulated Industry Summit and Greenplum Summit.
We are holding yet another PGConf Mini in NYC. The event is scheduled for December 14th, 2017, and Work Bench is hosting:
Efficiently and Safely Propagate Data Changes Without Triggers!
It is with great pleasure that we announce the preliminary program for PGConf Local: Austin!
I work at the Fred Hutchinson Cancer Research Center, and within Fred Hutch I work for its largest group, SCHARP, the Statistical Center for HIV/AIDS Research and Prevention. We use Postgres to monitor AIDS drug trials in real time to see whether the trials are working. This means we collect data from doctors and labs around the world and analyze it. We also have servers where we receive data from other research institutes and share our randomized, de-personalized data with them.
In 2010 I started SeaPUG, the Seattle Postgres Users Group, at the request of Josh Drake. I have given at least half of the presentations there every year. I have also discovered several PostgreSQL bugs which have since been fixed; some of them affected every version of PostgreSQL (bug numbers 7553, 8173, 8257, and 8291). I also found bug 8545, which has not yet been fixed; Core has acknowledged that it needs to be fixed but is not sure whether the fix belongs in pg_dump or pg_dumpall. I started the PostgreSQL track at LinuxFest Northwest in 2014, after my GIS presentation in 2013 was standing room only. This year I had a booth at SeaGL, the Seattle GNU/Linux conference, and I plan to have a booth there again next year along with giving a PostgreSQL presentation.
I have been giving presentations locally for the last seven years, and I am now ready to move on to the next step: presenting at local and national conferences around the United States.
The biggest challenge is that people do not know about PostgreSQL. Most people know about MySQL, MSSQL, and Oracle, but have never heard of PostgreSQL. This is changing somewhat now that cloud providers offer PostgreSQL, but when I go to conferences such as LinuxFest Northwest and SeaGL, people constantly ask me, "What is Postgres, and why should I use it over MySQL, MSSQL, or Oracle?"
We need to promote PostgreSQL so that people starting personal projects or joining companies will think of PostgreSQL before other databases. This starts with getting the younger generation interested in PostgreSQL, which also means getting college professors to cover PostgreSQL in their curricula instead of ignoring it in favor of competing databases. It also means having a PostgreSQL presence, that is, a booth and presentations, at other conferences. We should also establish a certification program for PostgreSQL DBAs, users, engineers, and so on, so that prospective employers have a clear picture of a candidate's skill set.
Pivotal Sponsor Highlight Blog for PostgresConf 2019
Written by:
Jacque Istok, Head of Data, Pivotal
1. Greenplum has its own community; what do you hope to achieve by joining the Postgres community and PostgresConf?
Both interest and adoption of Postgres have skyrocketed over the last two years, and we feel fortunate to be a part of the extended community. We have worked very hard to uplevel the base version of Postgres within Greenplum to more current levels and to be active in the Postgres community. We see Greenplum as a parallel (and analytic focused) implementation of Postgres, and we encourage the community to continue to embrace both the technology and the goal of the Greenplum project, which is Postgres at scale.
2. Are you planning to provide any new tech (PG features, etc.)?
This year we plan to announce several new things for both Greenplum and Postgres. We’re introducing new innovations in our cloud offerings in the marketplaces of AWS, Azure, and GCP. We also have major news about both our natural language at-scale analytics solution based on Apache Solr, and our multi-purpose machine learning and graph analytics library Apache MADlib. The next major release of Greenplum is a major focus as well, differentiating Greenplum from each of its competitors and bringing us ever closer to the latest versions of Postgres.
3. Are there any rising stars in the community you’d like to give props to?
While it seems a little self-serving, I would like to take the opportunity to give props to the Pivotal Data Team. This team is a 300+ worldwide organization that helps our customers, our prospects, and the community to solve real world and really hard data problems—solved in part through Postgres technology. They all attack these use cases with passion and truly make a difference in the lives of the people that their solutions touch. I couldn’t wish to work with a finer group.
4. What is the number one benefit you see within Postgres that everyone should be aware of?
The number one benefit of Postgres is really its flexibility. This database chameleon can be used for SQL, NoSQL, Big Data, microservices, time series data, and much more. In fact, our latest analytic solution, MADlib Flow, leverages Postgres as an operational engine. For example, machine learning models created in Greenplum can be pushed into a RESTful API as part of an agile continuous integration/continuous delivery pipeline easily and efficiently, making Postgres the power behind what I still like to think of as #DataOps.
5. What is the best thing about working with the Postgres community?
I deeply admire the passion and consistency of the community behind Postgres, constantly and incrementally improving this product over decades. And because Greenplum is based on Postgres, we get to interact with this vast community of talent. We are also able to more seamlessly interact with ecosystem products that already work with Postgres, making the adoption of Greenplum that much easier.
6. Tell us why you believe people should attend PostgresConf 2019 in March.
PostgresConf is going to be awesome, and I can’t wait for it to start! With Pivotal, Amazon, and EnterpriseDB headlining as Diamond sponsors, Greenplum Summit (along with multiple other summits), and high-quality speakers and content across the board, this year’s PostgresConf promises to be bigger and better than ever and surely won’t disappoint.
We’re thrilled to be back to present the second annual Greenplum Summit on March 19th at PostgresConf. Our theme this year is “Scale Matters”, and what we’ve seen with our customers is that every year it matters more and more. Our users are part of organizations that are generating tons of data and their need to easily and quickly ingest and interrogate all of it is paramount. This is true even more now than ever before as the insights that can be found not only help differentiate them from their competitors, but are also used to build better products and increase customer loyalty.
The day will be filled with real-world case studies from Greenplum users including Morgan Stanley, the European Space Astronomy Centre, Insurance Australia Group, Purdue University, Baker Hughes (a GE company), Conversant, and others, plus presentations and deep-dive tech sessions for novices and experts alike.