Both the amount and the availability of water in soil are important to plant roots and soil-dwelling organisms. To describe the amount of water in the soil we use the term water content; to describe the availability we speak of water potential. In thermodynamic terms, water content is the extensive variable and water potential the intensive variable. Both are needed to correctly describe the state of water in soil and plants.
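As a concrete illustration of the extensive variable, here is a minimal sketch of how water content is typically computed from mass measurements. The function names and sample values are my own for illustration; the underlying relations (gravimetric water content from wet and oven-dry mass, and conversion to volumetric water content via bulk density) are standard soil-physics definitions.

```python
# Sketch: computing soil water content from oven-dry mass measurements.
# Sample masses and the bulk density value are illustrative only.

WATER_DENSITY = 1.0  # g/cm^3


def gravimetric_water_content(wet_mass_g, dry_mass_g):
    """Mass of water per mass of dry soil (g/g)."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g


def volumetric_water_content(theta_g, bulk_density):
    """Volume of water per volume of soil (cm^3/cm^3)."""
    return theta_g * bulk_density / WATER_DENSITY


theta_g = gravimetric_water_content(wet_mass_g=125.0, dry_mass_g=100.0)
theta_v = volumetric_water_content(theta_g, bulk_density=1.3)
print(theta_g, theta_v)  # 0.25 g/g and 0.325 cm^3/cm^3
```

Volumetric water content is what most soil moisture sensors report, which is why the bulk-density conversion matters when comparing sensor readings to gravimetric samples.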
In addition to describing the state of water in the soil, it may also be necessary to know how fast water will move in the soil. For this, we need to know the hydraulic conductivity. Other important soil parameters are the total pore space, the drained upper limit for soil water, and the lower limit of available water in a soil. Since these properties vary widely among soils, it would be helpful to establish correlations between them and easily measured properties such as soil texture and bulk density. This paper will present the information needed for simple models of soil water processes.
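One such correlation can be sketched directly: total pore space follows from bulk density and particle density alone, and plant-available water is the difference between the drained upper limit and the lower limit. The particle density of 2.65 g/cm³ is the commonly assumed value for mineral soils; the sample limits below are illustrative placeholders, not values from this paper.

```python
# Sketch: estimating total porosity from bulk density, assuming the
# standard mineral-soil particle density of 2.65 g/cm^3.

PARTICLE_DENSITY = 2.65  # g/cm^3, typical assumption for mineral soils


def total_porosity(bulk_density):
    """Fraction of soil volume that is pore space (cm^3/cm^3)."""
    return 1.0 - bulk_density / PARTICLE_DENSITY


def plant_available_water(drained_upper_limit, lower_limit):
    """Water held between the drained upper limit and the lower
    limit of plant-available water (cm^3/cm^3)."""
    return drained_upper_limit - lower_limit


print(round(total_porosity(1.3), 3))            # ~0.509 for a loam-like soil
print(plant_available_water(0.30, 0.12))        # illustrative limits
```

A bulk density of 1.3 g/cm³ thus implies roughly half the soil volume is pore space, which is why bulk density is such a useful easily measured predictor.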
We are entering an era of cheap data. Sensor technology has advanced to the point where it has become easy to collect large amounts of measurement data at high spatiotemporal resolution.
Hydroserver map screen: using an off-the-shelf, open-source informatics system like Hydroserver kept us from reinventing what's already out there while still allowing the flexibility to program to our own needs.
We are now to the point where we have gigabytes' worth of data on soil moisture, plant canopy processes, precipitation, wind speed, and temperature, but the amount of data is so overwhelming that we are having a difficult time dealing with it. The cost of measurement data is dropping so quickly that people are being forced to shift from the historical mindset of analyzing individual data points to one of turning gigabytes of data into knowledge.
One approach, suggested by my colleague Rick Gill, a BYU ecologist, is to collaborate with bioinformatics students. Because they are used to working with DNA data, these students know how to write computer programs that analyze large amounts of data in near real time. Rick's idea is to tap these students' expertise to analyze the considerable amount of information he anticipates collecting in our Desert FMP Project, an experiment that will use TEROS 21 and SRS sensors to determine the role of varying environmental and biological factors in rangeland fire recovery.
Rick and I predict that near real-time data analysis will give us several advantages. First, we need readily available information so we can confirm that sensors and systems at the remote site are working. Large gaps in data are common for sites that aren't visited often, and sensor failures go unnoticed when data are collected but never analyzed. With our new approach, all data enter the database instantly, and the results are visualized as we go. We can also steer the analysis as we see what's happening: we can tell the bioinformatics students what we need as the results come in, and if we see important trends, we can immediately assign them to analyze new data that may be relevant.
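The gap-detection check described above is simple to automate. Here is a minimal sketch, not the project's actual code: it scans a list of record timestamps and reports any interval longer than the expected logging interval, which is exactly the kind of failure that goes unnoticed when data are collected but never analyzed.

```python
# Sketch: flagging gaps in a sensor record. The one-hour logging
# interval and the sample timestamps are illustrative assumptions.
from datetime import datetime, timedelta


def find_gaps(timestamps, expected_interval=timedelta(hours=1)):
    """Return (last_good, next_seen) pairs wherever consecutive
    records are farther apart than the expected logging interval."""
    gaps = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > expected_interval:
            gaps.append((prev, curr))
    return gaps


# Hourly record with readings missing between 02:00 and 06:00:
readings = [datetime(2014, 3, 1, h) for h in (0, 1, 2, 6, 7)]
print(find_gaps(readings))  # one gap: 02:00 -> 06:00
```

Run on each new batch of data, a check like this turns a months-later surprise into a same-day site visit.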
These techniques have the potential to help scientists from all disciplines become more efficient at collecting and analyzing large data streams. Although we've started the process, we have yet to determine its effectiveness. I will post more information as we see how well it is working and as new developments arise.
One of the hardest issues university researchers face today is the lack of funding for lab technicians. Although it’s frustrating that universities are no longer able to support this type of personnel, can technology close the gap? This is a question we’ve tried to answer in our Desert FMP project in collaboration with BYU.
Source: Simplyhired.com. Job listings for science lab technicians decreased 38% from March 2013 to March 2014.
I was talking to my colleague, Rick Gill, several weeks ago, and he had this to say about the disappearance of the previously indispensable lab technician: “We have fewer people in the lab, and the people we have are more expensive. We need to be deliberate in how we use their time. If we can make the entire system more efficient using technology, we’ll use the people we have in a way that is meaningful. In ecology right now, one of the things that we’re beginning to recognize is that the typical process where the lab tech would go out and take ten samples and average them is not what’s interesting. What’s interesting is when it’s been dry for four weeks, and you get a big rain event. This is because the average for four weeks is really low for almost all processes, but the data three days after it rains swamps the previous four weeks. So the average condition means almost nothing in terms of the processes we’re studying for global change. We need technology to take the place of the technician who would be monitoring the weather and trying to guess when the big events will occur.”
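The kind of event Rick describes, a big pulse after weeks of low readings, can be flagged automatically rather than guessed at by a technician watching the weather. The sketch below is a crude stand-in of my own devising, not the project's method: it marks any reading that exceeds a multiple of the mean of the preceding four weeks of daily values, so the system can alert us when a response swamps the dry-spell baseline.

```python
# Sketch: flagging pulse events that swamp the preceding baseline.
# The 28-day window, the factor of 3, and the sample series are
# illustrative assumptions.


def flag_pulse_events(values, window=28, factor=3.0):
    """Return indices of readings that exceed `factor` times the
    mean of the preceding `window` readings."""
    events = []
    for i in range(window, len(values)):
        baseline = sum(values[i - window:i]) / window
        if baseline > 0 and values[i] > factor * baseline:
            events.append(i)
    return events


# Four dry weeks of low daily values, then a rain-driven pulse:
series = [0.5] * 28 + [5.0, 4.0]
print(flag_pulse_events(series))  # [28, 29]
```

This is exactly the situation in the quote: the 28-day average is low, so even the day-two reading of 4.0 still stands far above the baseline and gets flagged.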
To capture these pulses in the Desert FMP project, we're using a continuous monitoring system that communicates feedback directly to us as the principal investigators. Using advanced analysis techniques, we can painlessly ensure that data are being collected properly and that important events are never missed. Although we don't have a technician, the goals of the project are still being met.
What do you think? How have you dealt with the disappearance of the lab tech?