Browsing by Subject "Optimierung"
Now showing 1 - 10 of 10
Publication: An economic analysis of the implementation options of soil conservation policies (2008). Schuler, Johannes; Dabbert, Stephan

The objective of this study is to analyse the economic and agricultural aspects of the implementation of soil conservation programmes and to suggest appropriate measure-instrument combinations for efficient soil conservation, as decision support for the implementation of soil conservation policies. Emphasis is given to the resource and institutional economics of soil conservation. In the empirical part, the efficiency of policy options is analysed using the example of a region in north-eastern Germany, based on model calculations.

After an introduction to soil functions and definitions of soil degradation, the implementation concepts for soil protection measures at the international and national level are described. Based on the theoretical economic analysis of soils as a natural resource, the existing property rights, the public-good characteristics of soils and the resulting externalities lead to the conclusion that market failure does exist; non-market coordination of soil use is therefore justified. A cost-effectiveness analysis based on the theory of the 'safe minimum standard' was derived for the appropriate assessment of the implementation options of soil conservation policies.

A fuzzy-logic-based method, built on an expanded Universal Soil Loss Equation (USLE) approach, was applied to assess soil erosion risk in the sample region. The approach considers both the natural conditions and the characteristics of the cropping practice. The very detailed description of the cropping practices allowed for a specific assessment of erosion-relevant effects, which, in combination with highly detailed site descriptions, provided this study with a very precise regional approach. The regional decision-support system MODAM (multi-objective decision support tool for agro-ecosystem management) was applied to assess the economic and environmental impacts of different policy options.

The policy scenarios examined include a CAP reform scenario with decoupled payments in accordance with the conditions proposed for the year 2013; this served as the reference scenario for the soil conservation policy scenarios. The three main policy-option scenarios are 1) a non-spatially targeted and 2) a spatially targeted incentive programme for reduced tillage practices, and 3) a regulation scenario that prohibits the cultivation of highly erosive crops (row crops) on erodible soils. The prohibition of row crops on highly erodible soils led to both lower on-farm costs and lower budget costs than the incentive programmes for reduced tillage, while all three scenarios achieved comparable reductions in soil erosion. Based on the modelling results, the ban on row crops on highly erodible sites is therefore the preferable option in terms of the cost-effectiveness ratio. The inclusion of transaction costs in this study helps expand the scope of policy analysis, since the total costs of a policy would be underestimated if only the budget costs for the direct payments to farmers were considered. Transaction costs, understood as costs for the (re-)definition and implementation of property rights, can reach substantial amounts and reduce the total efficiency of a policy.
The results of the qualitative analysis of the transaction costs of the studied policy options also supported the option of regulating row crops on highly erodible soils. A model that serves as decision support for both the economic and agricultural aspects of soil conservation was thus successfully developed in this study, and different policy options were analysed for a cost-effective design of soil conservation programmes. Based on the concluding discussion of the transaction costs involved, the regulatory approach (a spatially focussed ban on row crops) was shown to be the most cost-effective option, with potentially lower transaction costs. The main criteria for a cost-effective policy design are high efficiency of the agricultural measures (practices) themselves and a close spatial correlation between the programme area and the areas of high erosion risk. Incentive programmes combined with less effective agricultural practices showed a worse cost-benefit ratio for the sample area than the regulation approach, which builds on more effective agricultural practices.
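As an illustration of the cost-effectiveness comparison described above, a minimal sketch that ranks policy scenarios by their total cost (budget, on-farm and transaction costs) per unit of avoided soil loss; all names and figures are hypothetical placeholders, not values from the study:

```python
# Illustrative sketch (not the study's MODAM calculations): a cost-effectiveness
# ratio per policy scenario, where the total policy cost includes budget,
# on-farm and transaction costs, divided by the achieved erosion reduction.
# All figures below are hypothetical placeholders.

scenarios = {
    # name: (budget_cost, on_farm_cost, transaction_cost, erosion_reduction_t)
    "non-spatial incentive": (100.0, 40.0, 15.0, 50.0),
    "spatial incentive":     (60.0,  35.0, 20.0, 50.0),
    "row-crop ban":          (0.0,   30.0, 10.0, 50.0),
}

for name, (budget, on_farm, transaction, reduction) in scenarios.items():
    ce = (budget + on_farm + transaction) / reduction  # cost per tonne avoided
    print(f"{name}: {ce:.2f} cost units per tonne of soil loss avoided")
```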
Publication: Application of nature-inspired optimization algorithms to improve the production efficiency of small and medium-sized bakeries (2023). Babor, Md Majharul Islam; Hitzmann, Bernd

Increasing production efficiency through schedule optimization is one of the most influential topics in operations research contributing to decision-making. It is the concept of allocating tasks among available resources, within the constraints of a manufacturing facility, in order to minimize costs. This is carried out by a model that resembles real-world task distribution, with variables and relevant constraints, in order to complete a planned production. In addition to the model, an optimizer is required to evaluate and improve the task allocation in order to maximize overall production efficiency. The entire procedure is usually carried out on a computer, where these two components combine to form a solution framework for production planning and support decision-making in various manufacturing industries.

Small and medium-sized bakeries lack access to cutting-edge tools, and most of their production schedules are based on personal experience. This makes a significant difference in production costs compared to large bakeries, as evidenced by their market dominance. In this study, a hybrid no-wait flow shop model is proposed to generate production schedules based on actual data, featuring the constraints of the production environment in small and medium-sized bakeries. Several single-objective and multi-objective nature-inspired optimization algorithms were implemented to find efficient production schedules. While makespan is the most widely used quality criterion of production efficiency because it dominates production costs, high oven idle time in bakeries also wastes energy. Combining these quality criteria allows for additional cost reduction through energy savings as well as shorter production times. Therefore, to obtain efficient production plans, both makespan and oven idle time were included as optimization objectives. To find the optimal production plan for an existing production line, particle swarm optimization, simulated annealing, and the Nawaz-Enscore-Ham algorithm were used, with the weighting-factor method combining the two objectives into a single one.

The classical optimization algorithms were found to be good enough at finding optimal schedules in a reasonable amount of time, reducing makespan by 29 % and oven idle time by 8 % for one of the analysed production datasets. Nonetheless, the algorithms' convergence was poor, with a low probability of obtaining the best or nearly the best result. In contrast, a modified particle swarm optimization (MPSO) proposed in this study demonstrated significantly improved convergence, with a higher probability of obtaining better results. To obtain trade-offs between the two objectives, state-of-the-art multi-objective optimization algorithms were implemented: the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm, generalized differential evolution, improved multi-objective particle swarm optimization (OMOPSO) and speed-constrained multi-objective particle swarm optimization (SMPSO). The optimization algorithms provided efficient production plans with up to a 12 % reduction in makespan and a 26 % reduction in oven idle time, based on data from different production days. The performance comparison revealed significant differences between these multi-objective algorithms, with NSGA-II performing best and OMOPSO and SMPSO performing worst.

Proofing is a key processing stage that contributes to the quality of the final product by developing flavour and a fluffy texture in bread. However, the duration of proofing is uncertain due to the complex interaction of multiple parameters: yeast condition, temperature in the proofing chamber, and the chemical composition of the flour. Because of this uncertainty, a production plan optimized for the shortest makespan can become significantly inefficient. The computational results show that schedules with the shortest or nearly shortest makespan can suffer a significant (up to 18 %) increase in makespan when the proofing time deviates from its expected duration. In this thesis, a method for developing resilient production plans that take uncertain proofing times into account is proposed, so that even extreme deviations in proofing time cause only minimal fluctuations in makespan. The experimental results with a production dataset revealed a proactive production plan that was only 5 minutes longer than the shortest makespan, yet fluctuated by only 21 minutes in makespan when the proofing time varied from -10 % to +10 % of its actual value.

This study proposes a common framework for small and medium-sized bakeries to improve their production efficiency in three steps: collecting production data, simulating production planning with the hybrid no-wait flow shop model, and running the optimization algorithm. The study suggests using MPSO for single-objective and NSGA-II for multi-objective optimization problems. Based on real bakery production data, the results revealed that existing plans were significantly inefficient and could be optimized in reasonable computational time using a robust optimization algorithm. Implementing such a framework in small and medium-sized bakery operations could help achieve an efficient and resilient production system.
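A minimal sketch of the weighting-factor method mentioned above, folding makespan and oven idle time into a single objective; evaluate and toy_eval are hypothetical stand-ins for the thesis's hybrid no-wait flow shop simulation:

```python
# Minimal sketch of the weighting-factor method: combining makespan and oven
# idle time into one scalar objective for a candidate schedule (a permutation
# of products). toy_eval is a hypothetical stand-in for the flow shop model.

def weighted_objective(schedule, evaluate, w=0.7):
    """Return w * makespan + (1 - w) * oven_idle, both in minutes."""
    makespan, oven_idle = evaluate(schedule)
    return w * makespan + (1.0 - w) * oven_idle

def toy_eval(schedule):
    # Placeholder dynamics: processing time grows with schedule length,
    # idle time with how "jumpy" the product order is.
    makespan = 30 * len(schedule) + 5 * schedule[0]
    oven_idle = 4 * sum(abs(a - b) for a, b in zip(schedule, schedule[1:]))
    return makespan, oven_idle

print(weighted_objective([2, 0, 1, 3], evaluate=toy_eval))
```

In practice, the two criteria would first be normalized to comparable scales so that the weighting factor w is meaningful.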
Publication: Characterization of the effects of chia gels on wheat dough and bread rheology as well as the optimization of bread roll production with the Nelder-Mead simplex method (2016). Zettel, Viktoria; Hitzmann, Bernd

Chia (Salvia hispanica L.) is becoming increasingly popular as an ingredient for baked goods. The aim of the first part of this thesis was to investigate the influence of gel from ground chia on the rheology of different wheat dough systems and the resulting baked goods. The evaluated products were wheat bread and sweet pan bread. The effects of chia incorporated into wheat bread dough as a hydrocolloid gel were characterized using empirical and fundamental rheological methods and differential scanning calorimetry. To avoid competition between starch and ground chia for water uptake, chia was incorporated as a gel, prepared from ground chia with 5 g/g or 10 g/g water, respectively. The doughs were prepared with 1-3 % chia relative to the amount of wheat flour.

The effects of gel from ground chia were also studied as a fat replacer in sweet pan breads. The main focus of this work was the effect of fat substitution on dough rheology, which was characterized using a rotational rheometer and a Rheofermentometer. The end products were evaluated with a texture analyser, and two samples were additionally evaluated with respect to their fatty acid profile. The substitution was secondly intended to reduce the total amount of fat in the product and to improve its nutritional value regarding the fatty acid composition. The fat was replaced in four steps, and the ratio among the ingredients was held constant to ensure better comparability.

Within this thesis it was shown that the addition of gel from ground chia can affect wheat doughs and the resulting baked products in a positive way. Using ground chia as a gel appears fruitful for avoiding competition between starch and chia for water uptake while crumb formation takes place during baking. The evaluation of the pasting profiles of wheat flour suspensions with added chia gel reinforced this assumption: the gel from ground chia decreased the pasting viscosities with increasing amount of chia. The rheological properties of the doughs were, however, negatively affected with respect to further processing when too much chia gel was added: dough stability was reduced, and the resulting baked products were less porous, irregularly porous and therefore compact. All doughs showed weakening in the rheometer measurements, although the linear viscoelastic region was not affected. The frequency sweep measurements showed a decrease for all doughs with increasing content of gel from ground chia. The creep-recovery tests of the sweet pan bread doughs revealed that the zero viscosity η0 decreased and the creep compliance J0 increased with increasing chia gel content. The weakening of the doughs may not be caused by the incorporated chia itself, but by the additional water. There seems to be some interaction between ground chia particles, wheat flour constituents and water, because nearly the same results were obtained for 2 % and 1 % of ground chia with 5 g/g and 10 g/g water, respectively. These experiments also yielded the best results for incorporating gel from ground chia into wheat breads. The best results for sweet pan breads were obtained with 25 % fat replacement by gel from ground chia, prepared from 2.3 g ground chia with 5 g/g water. In summary, the incorporation of defined amounts of gel from ground chia has a positive effect on the rheology and the resulting baked products.
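For orientation, the zero viscosity η0 and creep compliance J0 mentioned above are parameters of the standard (Burgers-type) creep-compliance curve; a minimal form with a single retardation time λ, which may differ from the exact model used in the thesis:

```latex
% Burgers-type creep compliance: J_0 instantaneous compliance, J_1 retarded
% compliance with retardation time lambda, eta_0 zero viscosity (viscous flow).
J(t) = J_0 + J_1\left(1 - e^{-t/\lambda}\right) + \frac{t}{\eta_0}
```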
The retrogradation of the baked products was decreased over storage and the dietary fibre content was increased; chia thus acts like a hydrocolloid. The nutritional values of the evaluated baked products, wheat bread and sweet pan bread, were improved, and for the sweet pan breads an increase in omega-3 fatty acids was determined: the best sweet pan bread exhibited 5 % linolenic acid. Gel from ground chia can therefore be incorporated into bakery products as a hydrocolloid and to improve the nutritional value regarding dietary fibre and omega-3 fatty acid contents.

Another part of the work was the optimization of the production parameters proofing time and baking temperature for bread rolls, performed with the Nelder-Mead simplex method. The optimization was necessary for a new oven type whose walls were coated with a ceramic that increased the infrared radiation during baking. The quality criteria for the optimization were the specific volume, the baking loss, the colour saturation, the crumb firmness and the elasticity of the bread rolls. Within 11 experiments, the optimal baking result, defined by the results of a conventional oven, was obtained. The optimal processing parameters for the bread rolls were a proofing time of 117 minutes and a baking temperature of 215 °C for 16 minutes.
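A minimal sketch of such a Nelder-Mead search over proofing time and baking temperature, using scipy; quality_loss is a hypothetical surrogate for the real baking trials and their quality criteria:

```python
# Sketch of a Nelder-Mead optimisation of proofing time and baking temperature.
# quality_loss() is a hypothetical surrogate for the baking experiments (the
# thesis scored specific volume, baking loss, colour, firmness and elasticity).
from scipy.optimize import minimize

def quality_loss(x):
    proof_min, temp_c = x
    # Toy quadratic penalty around an assumed optimum (117 min, 215 degC)
    return ((proof_min - 117) / 10) ** 2 + ((temp_c - 215) / 5) ** 2

res = minimize(quality_loss, x0=[90.0, 230.0], method="Nelder-Mead",
               options={"xatol": 0.5, "fatol": 1e-3})
print(res.x)  # converges near [117, 215]
```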
Publication: Climate variability, social capital and food security in Sub-Saharan Africa: household level assessment of potential impacts and adaptation options (2015). Assfaw, Tesfamicheal Wossen; Berger, Thomas

Climate variability and a poor distribution of rainfall often cause serious agricultural production losses and worsen food insecurity. Given that the direct effects of climate change and variability are transmitted through the agricultural sector, improving farm households' capacities to adapt to the adverse effects of climate-related shocks is an important policy concern. This thesis applied a stochastic agent-based model (ABM) capable of simulating the effects of different adaptation options by capturing the dynamic changes of climate and prices, as well as the dynamic adaptation of different farm households to the impacts of these changes. The agent-based simulations conducted in this thesis address the special challenges of climate and price variability in the context of small-scale and subsistence agriculture by capturing non-separable production and consumption decisions, as well as the role of livestock in consumption smoothing. To ensure the reliability and usefulness of the results, the model was validated against observed survey values for land use and overall poverty levels. In particular, the study used disaggregated socio-economic, price, climate and crop yield data to quantify the impacts of climate and price variability on food security and poverty at the household level. Furthermore, the study explicitly captured crop-livestock interactions and the "recursive" nature of livestock keeping when examining the effects of climate and price variability. The thesis additionally examined how specific adaptation strategies and policy interventions, especially those related to the promotion of credit, improved seed varieties, fertilizer subsidies and off-farm employment, affect the distribution of household food security and poverty outcomes.

In addition to impacts on household food security and poverty, the study further considered indirect impacts through changes in the price of agricultural inputs and livestock holdings. In terms of coping strategies, the simulation results show that the effects of climate and price variability on consumption are considerable, but smaller for households with relatively large livestock endowments. The study also found that farm households with a large plantation area of eucalyptus were able to cope with the effects of variability. The results therefore suggest that self-coping strategies are important but not sufficient, and should be complemented with appropriate policy interventions. In terms of policy interventions, the study found that the expansion of credit and fertilizer subsidies, along with innovation through the promotion of new crop varieties that are resilient and adapted to local conditions, are the most effective adaptation options for the case of Ethiopia. In addition, the simulation results underscore that adaptation strategies composed of a portfolio of actions (such as credit and fertilizer subsidies along with new technologies) are more effective than single policy interventions. For Ghana, the study suggests that expanding production credit, complemented by irrigation, can provide a way to achieve food security under climate and price variability. In order to design a best-fit intervention instead of a 'one size fits all' approach, it is important to capture the distribution of effects across locations as well as households. The great strength of this study is its agent-based nature, which enables exploration of how effects are distributed across farm households. The simulation results clearly show that poor farms are vulnerable to climate and price variability, under which they suffer food insecurity, while a small group of wealthy farms is better off due to the higher prices achieved when selling crops. The results further underscore the need to improve adaptive capacity, as a large proportion of farm households are unable to shield themselves against the impacts of price and climate variability. The study then applied standard micro-econometric techniques to examine the role of social capital and informal social networks in consumption insurance and the adoption of risk-mitigating land management practices. In particular, the thesis provides evidence of the effects of different dimensions of social capital on the adoption of soil and water conservation practices across households with different levels of risk aversion. The results underscore that social capital plays a significant role in enhancing the adoption of improved farmland management practices and suggest that its effect differs across households with heterogeneous risk-taking behaviour. Finally, by combining household panel data, weather data, self-reported health shocks and detailed social capital information, the last section analyses how social capital buffers some of the implications of weather shocks.
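A highly simplified sketch of the consumption-smoothing mechanism described above (households selling livestock after rainfall shocks); the parameters and behavioural rules are illustrative placeholders, not the thesis's validated agent-based model:

```python
# Toy agent-based sketch: households hit by rainfall shocks sell livestock to
# keep consumption above a floor. All parameters are illustrative only.
import random

random.seed(1)

class Household:
    def __init__(self, livestock):
        self.livestock = livestock   # livestock endowment (arbitrary units)
        self.shortfalls = 0          # years with consumption below the floor

    def step(self, rainfall_index, price):
        income = 100 * rainfall_index             # crop income falls with drought
        need = 80                                 # consumption requirement
        if income < need and self.livestock > 0:  # sell animals to smooth
            sold = min(self.livestock, (need - income) / price)
            self.livestock -= sold
            income += sold * price
        if income < need:
            self.shortfalls += 1

households = [Household(livestock=random.uniform(0, 5)) for _ in range(100)]
for year in range(10):
    rain = random.uniform(0.4, 1.2)   # climate variability
    price = random.uniform(15, 25)    # livestock price variability
    for h in households:
        h.step(rain, price)

print(sum(h.shortfalls for h in households) / len(households),
      "mean shortfall years per household")
```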
Publication: Entwicklung von datengetriebenen Auswerteverfahren zur Analyse und Schätzung der Reaktorleistung von Biogasanlagen [Development of data-driven evaluation methods for analysing and estimating the reactor performance of biogas plants] (2020). Beltramo, Tanja; Hitzmann, Bernd

The production of biogas is a very complex process that runs in several stages involving different microorganisms. The microbiological diversity of the process depends mainly on the composition of the substrate and on ambient conditions such as the process temperature. As a consequence, the development and composition of the microbial communities are difficult to predict, and the control and evaluation of such complex biological processes are very time-consuming and expensive. In Germany, biogas plants can be evaluated according to VDI guideline 4630, which describes methods for the evaluation of the fermentation of organic materials, including characterization of the substrate, sampling, collection of material data and fermentation tests. This requires special equipment and skilled personnel, and the evaluation procedure is very time-consuming. A state-of-the-art alternative is therefore needed to simplify and speed up the assessment of biogas production processes.

The aim of this doctoral thesis is the development of a fast and reliable method for the evaluation of biogas production processes. To this end, mathematical modelling should identify significant process variables suitable for evaluating the whole process; metaheuristic tools were used to optimize the mathematical models. Two different data sets were used: experimental data and simulated data. The experimental data were collected in the projects "Biogas-Biocoenosis" (FKZ 22010711, Dr. Michael Klocke, Leibniz-Institut für Agrartechnik und Bioökonomie e.V., Potsdam) and "Biogas-Enzyme" (FKZ 22027707, Dr. Monika Heiermann, Leibniz-Institut für Agrartechnik und Bioökonomie e.V., Potsdam). The simulated data set was generated using the Anaerobic Digestion Model No. 1 (ADM1). The chemical process variables served as the set of independent process variables, while the biogas production output represented the dependent process variable. The biogas production was predicted using linear and nonlinear mathematical models: partial least squares regression (PLSR), locally weighted regression (LWR) and artificial neural networks (ANN). To identify the most significant independent process variables, the optimization algorithms ant colony optimization (ACO) and genetic algorithm (GA) were used. The prediction capacity was evaluated using two model evaluation measures, the root mean square error (RMSE) and the coefficient of determination (R2). Figures 1 and 2 in the Supplementary Material show flow charts of the developed methodology as applied to the ADM1-generated and the experimentally collected data sets, respectively. The developed approaches could successfully predict the desired process variable, the biogas production rate. The variable selection performed with the metaheuristic optimization algorithms improved the prediction results and reduced the number of independent process variables; hydraulic retention time, dry matter, neutral detergent fibre, acid detergent fibre and n-butyric acid were identified as the most significant ones. The best prediction was obtained using ANN models, with a low prediction error and a high coefficient of determination. The successful implementation of the developed methodology proved mathematical models to be an effective alternative capable of evaluating and optimizing complicated biological processes. Furthermore, additional experimental evaluation of the developed strategy, using the model-based process information, would be required.
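A minimal sketch of the PLSR prediction step on synthetic data standing in for the chemical process variables and the biogas production rate; it is not the thesis's calibrated model:

```python
# Sketch of PLSR-based prediction on synthetic data standing in for the
# chemical process variables (e.g. HRT, dry matter, fibre fractions, acids)
# and the biogas production rate.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))                       # synthetic process variables
true_w = np.array([1.5, 0.0, 0.8, -0.6, 0.0, 0.0, 0.3, 0.0])
y = X @ true_w + rng.normal(scale=0.3, size=120)    # synthetic biogas rate

pls = PLSRegression(n_components=3).fit(X[:90], y[:90])
y_hat = pls.predict(X[90:]).ravel()

rmse = float(np.sqrt(np.mean((y[90:] - y_hat) ** 2)))
r2 = 1 - np.sum((y[90:] - y_hat) ** 2) / np.sum((y[90:] - y[90:].mean()) ** 2)
print(f"RMSEP = {rmse:.3f}, R2 = {r2:.3f}")
```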
Publication: Investigation of fluidised bed coating: measurement, optimisation and statistical modelling of coating layers (2017). van Kampen, Andreas; Kohlus, Reinhard

Fluidised bed coating describes a process to encapsulate particles. The coating layer is applied in order to protect the core material from chemical reactions with the environment, to control the release of drugs or to mask bad taste. Depending on the application, the coating layer must fulfil various quality requirements, such as completeness, homogeneity and a minimum layer thickness. Measuring the coating layer thickness is therefore necessary in order to determine appropriate parameters for an optimal coating process. This, however, is difficult in the investigated core particle size range of 100 to 500 μm with a coating layer thickness of around 10 μm. Fluorescent imaging of sliced particles or imaging of optical slices using confocal laser scanning microscopy are possible ways to make the coating layer visible and to measure its thickness using image analysis techniques. This yields detailed images of the coating layer and an accurate description of the thickness distribution, but is rather time-consuming due to tedious sample preparation and long image acquisition times; consequently, only relatively few particles are measured and used to draw conclusions about the population. Other methods, such as measuring the change in particle size by laser diffraction or assessing the volume ratio of coating to core material, usually deliver only the mean thickness and no information on the completeness and homogeneity of the coating.

In the first part of this thesis, a quick method for coating thickness measurement was developed based on a dissolution test. Sodium chloride was used as the core material and maltodextrin DE21 as the coating material. When dissolved in deionised water, sodium chloride raises the conductivity, in contrast to maltodextrin. The measurement of conductivity can therefore be used to record the dissolution curve of the core material. The coating layer delays the dissolution of the core, and by comparison with the dissolution curve of pure sodium chloride the coating thickness distribution can be recovered by deconvolution. It was shown that this method is well reproducible and delivers reliable results comparable to other methods. The method is fast, which enables the measurement of many samples with replicates, and with appropriate sample division it should provide a good representation of the population. The shape of the thickness distribution allows the quantification of the three aforementioned quality parameters.

The method was therefore used in the second part of this thesis to investigate the coating process using design of experiments. The four factors spray rate, air temperature, air velocity and concentration of the coating solution were investigated using a central composite design. The dissolution method was used to assess the coating quality, the particle size distribution was measured to quantify the agglomeration rate, and the mass of deposited coating material was assessed by quantifying a tracer colour in order to determine the efficiency of the process. Significant quadratic models were fitted to all response variables.
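A minimal sketch of fitting such a quadratic response-surface model to central-composite-design data by least squares; the factor settings and responses are placeholders:

```python
# Sketch: fitting a quadratic response-surface model to central composite
# design data (two coded factors shown); all values are placeholders.
import numpy as np

# Coded factor settings (e.g. spray rate, air temperature) and a response
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([8.2, 9.1, 7.9, 10.4, 7.5, 9.8, 8.0, 9.0, 9.9, 10.1, 10.0])

x1, x2 = X[:, 0], X[:, 1]
# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))
```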
These models were successfully used to find a local optimum within the investigated parameter space, which allowed the formation of an optimal coating layer within a short time frame. The results of these investigations showed that the thickness distribution is well described by a Weibull distribution. Furthermore, it was possible to confirm effects previously described in the literature, i.e. that a low concentration of the coating solution leads to more homogeneous coating layers.

In order to give a general description of the coating layer, a statistical model of the coating thickness distribution was developed in the third part of this thesis and verified by a Monte Carlo simulation. The model reproduces the experimentally determined effect of the concentration of the coating solution qualitatively and is able to calculate the mean thickness distribution for a given concentration, contact angle, sprayed mass, core particle size and droplet size. Appropriate adjustment of these parameters leads to good agreement between the model and the measured thickness distributions of real experiments. It was concluded that predominant spray drying of small droplets and an increase in the concentration of the remaining droplets due to pre-drying negatively affect the homogeneity of the coating layer. It was further confirmed that the Weibull distribution can be used to describe the coating layer thickness in the investigated thickness range. The thickness distribution transitions from a Weibull distribution to a normal distribution as the coating becomes thicker. Thin coatings with defects can be described by a clinched Weibull distribution containing the uncoated area fraction as an offset.
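A minimal sketch of describing a coating-thickness distribution with a Weibull model and sampling it Monte-Carlo style; the shape and scale values are illustrative, not fitted parameters from the thesis:

```python
# Sketch: a Weibull model of coating layer thickness, sampled Monte-Carlo
# style; shape and scale are illustrative placeholders.
from scipy.stats import weibull_min

shape, scale_um = 2.5, 11.0               # illustrative Weibull parameters
thickness = weibull_min(shape, scale=scale_um)

samples = thickness.rvs(size=10000, random_state=0)  # simulated coatings
print(f"mean = {samples.mean():.1f} um, "
      f"P(thickness < 5 um) = {thickness.cdf(5.0):.3f}")
```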
Publication: Modelle und Lösungsverfahren zur langfristigen Planung der Stromproduktion einer flexiblen Biogasanlage unter Berücksichtigung von Verschleiß [Models and solution methods for the long-term planning of the electricity production of a flexible biogas plant considering wear] (2021). Butemann, Hendrik; Schimmelpfeng, Katja

One of the most important measures against climate change is the shift from fossil to renewable energies, and many countries have made it their goal to increase the share of renewable energies in electricity generation. In Germany, this share was 40.2 % in 2019, of which biomass accounted for 20.6 %. This category includes biogas plants, which, unlike other sources of renewable energy, are not dependent on particular weather conditions. They are considered a flexible option for electricity generation because they can produce electricity when neither the sun is shining nor the wind is blowing. When the first biogas plants were put into operation, revenues from electricity production could be maximized by having the combined heat and power unit (CHP) associated with the biogas plant generate electricity continuously. To exploit the flexibility of biogas plants, German legislators introduced premiums containing incentives to produce electricity during periods of low supply from other renewable energy sources. Since then, biogas plant operators have been able to maximize their revenues when the CHP produces electricity on demand, i.e. in start-stop mode. However, a large number of starts and stops of the CHP causes altered wear and must be taken into account in the long-term planning of the electricity production of a biogas plant.

The aim of this dissertation is therefore to use operations research methods to develop cyclical electricity production plans for biogas plants that take into account the wear of the CHP and the timing and costs of maintenance activities, in order to support biogas plant operators in maximizing their revenues. For this purpose, electricity production planning for biogas plants is first classified within the planning tasks along the biomass-based supply chain. Subsequently, the basics of biogas plants are explained, including their relevance in Germany, their mode of operation, service and maintenance, and the legal framework for their operation. The research gap filled by this dissertation emerges from the literature review on quantitative approaches to the operation of biogas plants: no previous research work sufficiently addresses the wear of the CHP in flexible operation and the planning of maintenance activities in connection with electricity production. Therefore, a conceptual optimization model is developed that accurately replicates the non-linear wear occurring in reality and thus enables simultaneous planning of electricity production and maintenance activities. For better applicability with standard solvers, the model is additionally linearized. A case study based on real-world data reveals that, under the conditions prevailing in Germany, a flexible biogas plant achieves higher total revenues than a continuously operated one, even when maintenance costs are taken into account. The conceptual optimization model is then extended to produce a cyclical plan that biogas plant operators can apply on a weekly basis. In the following chapter, a greedy heuristic for generating a starting solution as well as a genetic algorithm and a tabu search are developed, with the goal of reducing the computation time for solving the extended model. For this purpose, the basics of the individual solution methods are first explained and the input data are adapted to the problem with the help of parameter tuning. An extensive numerical study, in which the input parameters electricity prices, maintenance costs, CHP wear and biogas storage capacity are varied, compares the performance of the methods with that of the extended optimization model. In all scenarios, the tabu search determines the best result at low runtime. A summary and an outlook on further research opportunities conclude the dissertation.
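A minimal sketch of the underlying trade-off, assuming hourly on/off planning of a CHP with a wear-related start-up cost, solved by dynamic programming; the thesis's (linearized) optimization model with storage and maintenance constraints is considerably richer:

```python
# Toy sketch (not the thesis model): hourly on/off scheduling of a CHP with a
# wear-related start-up cost, solved by dynamic programming. Biogas storage
# and maintenance constraints are omitted; all numbers are placeholders.

def plan_chp(prices, power_kw=500, start_cost=40.0):
    """Maximise sum(price * energy for 'on' hours) - start_cost * starts."""
    n = len(prices)
    NEG = float("-inf")
    # best[t][s]: max revenue over hours 0..t-1, ending in state s (0=off, 1=on)
    best = [[0.0, NEG]] + [[NEG, NEG] for _ in range(n)]
    choice = [[None, None] for _ in range(n)]
    for t in range(n):
        for prev in (0, 1):
            if best[t][prev] == NEG:
                continue
            for state in (0, 1):
                rev = best[t][prev]
                if state == 1:
                    rev += prices[t] * power_kw / 1000.0  # price in EUR/MWh
                    if prev == 0:
                        rev -= start_cost                 # wear cost per start
                if rev > best[t + 1][state]:
                    best[t + 1][state] = rev
                    choice[t][state] = prev
    # Backtrack the optimal on/off schedule
    state = 0 if best[n][0] >= best[n][1] else 1
    schedule = []
    for t in range(n - 1, -1, -1):
        schedule.append(state)
        state = choice[t][state]
    return schedule[::-1], max(best[n])

prices = [30, 28, 25, 40, 90, 120, 95, 60]  # hypothetical EUR/MWh prices
sched, revenue = plan_chp(prices)
print(sched, round(revenue, 2))
```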
Publication: Optimizing the development of seed-parent lines in hybrid rye breeding (2001). Tomerius, Alexandra-Maria; Geiger, Hartwig H.

In hybrid rye breeding, seed-parent and pollinator lines are developed from two divergent gene pools. Line development comprises selection for line performance per se, followed by selection for combining ability with the opposite gene pool. Cytoplasmic-genic male sterility (CMS) is employed as the hybridizing mechanism. This study deals with model calculations aiming to optimize and compare alternative schemes of seed-parent line development in hybrid rye breeding on the basis of their expected selection gain per year in an index comprising the most important breeding objectives. The prediction of selection gains rests on current estimates of quantitative-genetic and economic parameters. The schemes are optimized for the number of candidates, the number of testers used to assess testcross performance, and the numbers of test locations and replicates at the individual selection stages. The optimization is carried out assuming a fixed annual budget. Five schemes are investigated, which differ in the basic genetic material assumed, in the type of test units and the number of selection stages for line and testcross selection, and in their length. The standard scheme employs second-cycle material: first, S2 lines are evaluated per se; selection for combining ability is then carried out at two stages, employing testcross progenies of the CMS analogues of the candidate lines in backcross generations BC1 and BC2, respectively. The first alternative scheme employs an additional stage of BC1L-testcross selection. Another scheme is suited for developing seed-parent lines from broader-based population material. In addition to these 'conventional' methods, a scheme using doubled haploid lines is investigated, as well as a scheme in which testcross progenies are produced by means of a gametocide instead of CMS. The optimum dimensioning and relative efficiency of the schemes are investigated for various genetic and economic situations.
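For orientation, the expected selection gain per year in such model calculations follows, in its simplest single-stage form, the breeder's equation; a minimal sketch with illustrative inputs (the study's index-based, multi-stage calculation is more elaborate):

```python
# Simplified illustration of expected selection gain per year (breeder's
# equation); the thesis uses an index over several traits and stages.
import math
from statistics import NormalDist

def selection_intensity(p):
    """Standardized selection intensity i for a selected fraction p under
    truncation selection on a normally distributed trait."""
    x = NormalDist().inv_cdf(1 - p)                     # truncation point
    z = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # N(0,1) density at x
    return z / p

def gain_per_year(p, h2, sigma_p, cycle_years):
    """Delta G per year = i * h^2 * sigma_P / L (all inputs illustrative)."""
    return selection_intensity(p) * h2 * sigma_p / cycle_years

print(round(gain_per_year(p=0.05, h2=0.4, sigma_p=5.0, cycle_years=4), 2))
```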
Publication: Optimum schemes for hybrid maize breeding with doubled haploids (2011). Wegenast, Thilo; Melchinger, Albrecht E.

In hybrid maize breeding, the doubled haploid (DH) technique is increasingly replacing conventional recurrent selfing for the development of new lines. In addition, novel statistical methods have become available as a result of enhanced computing facilities. This has opened up many avenues for developing more efficient breeding schemes and selection strategies to maximize progress from selection. The overall aim of the present study was to compare the selection progress of different breeding schemes and selection strategies. Two breeding schemes were considered, each involving selection in two stages: (i) developing DH lines from S0 plants, evaluating their testcrosses in stage one and the testcrosses of the promising DH lines in stage two (DHTC), and (ii) early testing for testcross performance of S1 families before producing DH lines from superior S1 families and then evaluating their testcrosses in the second stage (S1TC-DHTC). For both breeding schemes, we examined different selection strategies in which variance components and budgets varied, the cross and family structure was considered or ignored, and best linear unbiased prediction (BLUP) of testcross performance was employed. The specific objectives were to (1) maximize the progress from selection through optimum allocation of test resources, using the selection gain (ΔG) or the probability of selecting superior genotypes (P(q)), as well as their standard deviations, as criteria, (2) investigate the effects of parental selection and of varying variance components and budgets on the optimum allocation of test resources, (3) assess the optimum filial generation (S0 or S1) for DH production, (4) compare various selection strategies (sequential selection considering or ignoring the cross and family structure) for maximizing progress from selection, (5) examine the effect of producing a larger number of candidates within promising crosses and S1 families on the progress from selection, and (6) determine the effect of BLUP, where information from genetically related candidates is integrated into the selection criteria, on the progress from selection. For both breeding schemes, the best strategy was to select among all S1 families and/or DH lines ignoring the cross structure.

Further, in breeding scheme S1TC-DHTC, the progress from selection increased with variable sizes of crosses and S1 families, i.e., with larger numbers of DH lines devoted to superior crosses and S1 families. Parental cross selection strongly influenced the optimum allocation of test resources and, consequently, the selection gain ΔG in both breeding schemes. With an increasing correlation between the mean testcross performance of the parental lines and that of their progenies, the superiority in progress from selection compared with randomly chosen parents increased markedly, whereas the optimum number of parental crosses decreased in favour of an increased number of test candidates within crosses. With BLUP, information from genetically related test candidates resulted in more precise estimates of their genotypic values, and the progress from selection increased slightly for both optimization criteria, ΔG and P(q), compared with conventional phenotypic selection. Analytical solutions were developed to enable fast calculation of the optimum allocation of test resources; this analytical approach superseded the matrix inversions required for solving the mixed model equations. In breeding scheme S1TC-DHTC, the optimum allocation of test resources involved (1) 10 or more test locations at both stages, (2) 10 or fewer parental crosses, each with 100 to 300 S1 families at the first stage, and (3) 500 or more DH lines within a small number of parental crosses and S1 families at the second stage. In breeding scheme DHTC, the optimum number of test candidates at the first stage was 5 to 10 times larger, whereas the number of test locations at the first stage and the number of DH lines at the second stage were strongly reduced compared with S1TC-DHTC. The possibility of reducing the number of parental crosses by selection among parental lines is of utmost importance for optimizing the allocation of test resources and maximizing the progress from selection. Further, the optimum allocation of test resources is crucial for maximizing the progress from selection under given economic and quantitative-genetic parameters. By using marker information and BLUP-based genomic selection, more efficient selection strategies could be developed for hybrid maize breeding.
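A minimal sketch of the idea of optimally allocating a fixed budget between candidates and test locations, using a crude single-stage gain proxy with illustrative cost and variance parameters (not the study's analytical two-stage solution):

```python
# Toy sketch: grid-search the number of stage-one candidates N1 and test
# locations L1 under a fixed budget, maximising a crude selection-gain proxy.
# Costs and variance components are illustrative, not the study's estimates.
import math
from statistics import NormalDist

def intensity(p):
    x = NormalDist().inv_cdf(1 - p)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi) / p

BUDGET = 10000          # total plot-equivalents available
N2, L2 = 50, 10         # fixed stage-two dimensioning, for simplicity
VG, VE = 1.0, 4.0       # genotypic and plot-error variance (illustrative)

best = None
for N1 in range(200, 3001, 100):
    for L1 in range(1, 11):
        if N1 * L1 + N2 * L2 > BUDGET:
            continue                      # allocation exceeds the budget
        h1 = VG / (VG + VE / L1)          # heritability on entry means
        gain = intensity(N2 / N1) * math.sqrt(h1 * VG)  # single-stage proxy
        if best is None or gain > best[0]:
            best = (gain, N1, L1)
print(best)  # (proxy gain, optimum N1, optimum L1)
```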
Publication: Process, structure and function relationship in ground meat (2023). Berger, Lisa Marie; Weiss, Jochen

Ground beef enjoys high popularity with consumers because it is convenient to use and facilitates the rapid preparation of a large variety of meals. In the production of ground meat, the particle size of the meat is systematically reduced and the cell structures are partially disintegrated. Ideally, the original cellular meat or fat structure is preserved as much as possible so that important quality attributes are optimized. However, the effect of varying conditions and parameters in modern processes on the quality of ground meat has not yet been investigated in detail. According to the current German "Leitsätze für Fleisch und Fleischerzeugnisse", hamburgers must not contain more than 20 Vol.% of non-intact cell structures to be sold without further declaration. This work therefore aimed to identify process, structure and function relationships in ground meat production in order to facilitate gentler processing, in particular of hamburgers.

To investigate these effects systematically, a standardized production method for hamburgers was developed and a pilot-plant-scale meat grinder was set up with the possibility of recording process-relevant data. The relationship between the structure and functionality of ground meat was investigated using a model system with increasing amounts of added meat batter to simulate changes in meat structure due to cell disintegration. A new term, the amount of non-intact cells (ANIC), was introduced to quantify the amount of meat cells disintegrated during processing. It was shown that changes in structure due to a higher or lower ANIC resulted in altered physicochemical and functional properties of the ground meat system.

The effect of frozen meat content and temperature on the structure and function of hamburgers was investigated to verify the above correlation in an application-relevant setting. As the specific cutting resistance is significantly higher in frozen than in chilled meat, it was assumed that the impact on the structure and function of the ground meat would differ accordingly; this could indeed be verified. In hamburger manufacturing, it is common practice to re-feed imperfectly molded patties, e.g. in a frozen, coarsely crushed state. In contrast to the findings above, the use of up to 20 % re-fed material in hamburger manufacturing did not result in any noticeable differences, as neither the specific mechanical energy input (SME) nor the ANIC changed significantly. It was thus demonstrated that some raw material variations can affect both the structure and function of hamburgers. In particular, temperature effects and the associated changes in the cutting resistance of the raw material had the strongest influence on the structure and function of ground meat. However, where structural differences were found, they were not sufficient to manifest themselves in differences in sensory evaluation; consumer perception, and thus the quality of the hamburger, was not influenced.

The process parameters and their impact on the structure and function of hamburgers were studied by investigating the four main processing steps: pre-grinding, mixing, grinding and forming. An increased ANIC was determined with progressive processing, with the grinding steps accounting for the strongest increase; mixing and forming were of minor importance for structural and functional changes. By varying the cutting set parameters, the influence of the cutting set composition on the structure and function of hamburgers was assessed. The SME and the ANIC increased when more cutting levels were used, due to the higher shear stress applied to the meat, whereas the hole plate properties caused no or only negligible changes in the ANIC and SME. Although an impact of the cutting set composition on the structure was found, no or only marginal effects on the function and the sensory and optical quality of the hamburgers were observed. It can therefore be concluded that the shear forces acting on the meat during grinding have the strongest influence on the structure and function of beef; by reducing these shear forces, grinding can be made gentler, resulting in a lower ANIC. Despite the influence on the process-control parameters (SME, pressure, torque) and the structural parameter ANIC, it must be emphasized that the influence on the function and quality of the hamburgers is small in application-relevant ranges.
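A small sketch of how the specific mechanical energy input (SME) mentioned above can be computed from recorded torque and shaft speed; the signal values are placeholders, not data from the study:

```python
# Sketch: specific mechanical energy (SME) during grinding, computed from
# sampled torque and shaft-speed signals. All signal values are placeholders.
import math

def sme_kj_per_kg(torque_nm, speed_rpm, dt_s, mass_kg):
    """SME = integral(torque * angular speed) dt / processed mass, in kJ/kg."""
    energy_j = sum(t * (2 * math.pi * n / 60.0) * dt_s
                   for t, n in zip(torque_nm, speed_rpm))
    return energy_j / mass_kg / 1000.0

torque = [12.0, 14.5, 15.2, 13.8]   # N*m, sampled once per second (placeholder)
speed = [180, 180, 182, 179]        # rpm (placeholder)
print(round(sme_kj_per_kg(torque, speed, dt_s=1.0, mass_kg=2.5), 3))
```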
Comparable results were found for raw material variations, which only partially caused structural, functional and quality effects in the hamburgers. This in turn means that changes in structure cannot always be linked to a shift in perceived quality. For an integrated evaluation of the product, structural parameters and quality parameters must therefore be defined, assessed separately, and merged into a combined overall sample assessment.