Search results for: ellipse fitting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 357

87 Development of Intake System for Improvement of Performance of Compressed Natural Gas Spark Ignition Engine

Authors: Mardani Ali Serah, Yuriadi Kusuma, Chandrasa Soekardi

Abstract:

The improvement of flow strategy was implemented in the intake system of the engine to produce better Compressed Natural Gas engine performance. Three components were studied, designed, simulated, developed, tested, and validated in this research: the mixer, the swirl device, and the fuel cooler device. The three components were installed to produce pressurised turbulent flow with a higher fuel volume in the intake system, which is the ideal condition for a Compressed Natural Gas (CNG) fuelled engine. A combination of experimental work and simulation was carried out. The work included design and fabrication of the engine test rig; the CNG fuel cooling system; and fitting of the instrumentation and measurement system for performance testing in both gasoline and CNG modes. Simulation was used to design an appropriate mixer and swirl device. A flow test rig, known as the steady state flow rig (SSFR), was constructed to validate the simulation results. The effect of these components on CNG engine performance was then investigated. A venturi-inlet-holes mixer with three variables was studied: the number of inlet holes (8, 12, and 16), the inlet angles (30°, 40°, 50°, and 60°), and the outlet angles (20°, 30°, 40°, and 50°). A swirl device with the number of revolutions and the plane angle as variables was also studied. The CNG fuel cooling system, with the ability to control water flow rate and coolant temperature, was installed. The mixer and swirl device were found to improve the swirl ratio and the pressure condition inside the intake manifold. Installation of the mixer, swirl device, and CNG fuel cooling system increased CNG engine performance by 5.5%, 5%, and 3%, respectively, compared to the existing operating condition. The overall results show the high potential of this mixer and swirl device method for increasing CNG engine performance. The overall improvement in engine power and torque was about 11% and 13%, respectively, compared to the original mixer.

Keywords: intake system, Compressed Natural Gas, volumetric efficiency, engine performance

Procedia PDF Downloads 319
86 Magnetic Properties of Nickel Oxide Nanoparticles in Superparamagnetic State

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Superparamagnetism is an interesting phenomenon observed in small particles of magnetic materials. It arises from a reduction in particle size. In the superparamagnetic state, as the thermal energy overcomes the magnetic anisotropy energy, the magnetic moment vectors of the particles flip between states of minimum energy. Superparamagnetic nanoparticles have attracted researchers due to many applications such as information storage, magnetic resonance imaging, biomedical applications, and sensors. For information storage, thermal fluctuations lead to loss of data, so nanoparticles should have a high blocking temperature; to achieve this, they should have a higher magnetic moment and magnetic anisotropy constant. In this work, the magnetic anisotropy constant of an antiferromagnetic nanoparticle system is determined. Magnetic studies on nanoparticles of NiO (nickel oxide) are well reported. This antiferromagnetic nanoparticle system has a high blocking temperature and a magnetic anisotropy constant of order 10⁵ J/m³. The magnetic study of NiO nanoparticles in the superparamagnetic region is presented. NiO particles of two different sizes, i.e., 6 and 8 nm, were synthesized using the chemical route. These particles were characterized by an X-ray diffractometer, transmission electron microscope, and superconducting quantum interference device magnetometry. The magnetization vs. applied magnetic field and temperature data for both samples confirm their superparamagnetic nature. The blocking temperature for the 6 and 8 nm particles is found to be 200 and 172 K, respectively. The magnetization vs. applied magnetic field data of NiO are fitted to an appropriate magnetic expression using a non-linear least squares fit method. The role of the particle size distribution and magnetic anisotropy is taken into account in the magnetization expression. The source code is written in the Python programming language. This fitting provides the magnetic anisotropy constant for NiO and the other magnetic fit parameters. The estimated particle size distribution matches well with the transmission electron micrographs. The magnetic anisotropy constants for the 6 and 8 nm particles are found to be 1.42 × 10⁵ and 1.20 × 10⁵ J/m³, respectively. The obtained magnetic fit parameters are verified using the Néel model. It is concluded that the effect of magnetic anisotropy should not be ignored while studying the magnetization process of nanoparticles.
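The abstract notes that the M(H) data were fitted in Python with a non-linear least-squares method. A minimal sketch of such a fit, using a bare Langevin function (i.e., omitting the particle size distribution and anisotropy terms the study actually includes; all numerical values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # temperature, K

def langevin(H, Ms, mu):
    # Simple Langevin magnetization: M = Ms * (coth(x) - 1/x), x = mu*H/(kB*T)
    x = mu * H / (kB * T)
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

# Synthetic M(H) data (Ms in arbitrary units, mu in J/T); values illustrative
H = np.linspace(0.1, 5.0, 50)   # applied field, tesla (avoid H = 0 singularity)
rng = np.random.default_rng(0)
M = langevin(H, 1.0, 5e-21) + rng.normal(0, 0.002, H.size)

popt, pcov = curve_fit(langevin, H, M, p0=[0.8, 1e-21])
Ms_fit, mu_fit = popt
```

The diagonal of `pcov` gives the variance of each fitted parameter, which is how uncertainty on the recovered moment (and, in the full model, the anisotropy constant) would be reported.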

Keywords: anisotropy, superparamagnetic, nanoparticle, magnetization

Procedia PDF Downloads 102
85 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes of about 70 km to several hundred km above the ground, and it is composed of ions and electrons, called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. However, these stations use multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET. The performance of the single-frequency GPS measurements was compared with the results of dual-frequency measurement.
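The least-squares step described above, representing the TEC distribution as a polynomial in latitude and longitude, can be sketched directly. This toy version fits a first-order surface with a cross term to synthetic vertical TEC samples; the coordinates and coefficients are hypothetical, and the pseudorange processing and slant-to-vertical mapping are omitted:

```python
import numpy as np

# Synthetic vertical TEC samples over a small latitude/longitude patch
# (coordinates roughly around Mandalay; all coefficients are hypothetical)
rng = np.random.default_rng(1)
lat = rng.uniform(20.0, 23.0, 200)    # degrees N
lon = rng.uniform(94.5, 97.5, 200)    # degrees E
dlat, dlon = lat - 21.5, lon - 96.0   # center the coordinates
vtec_true = 25.0 + 1.5*dlat - 0.8*dlon + 0.3*dlat*dlon
vtec = vtec_true + rng.normal(0, 0.2, lat.size)   # TEC units, with noise

# First-order polynomial surface with a cross term, fitted by least squares
A = np.column_stack([np.ones_like(dlat), dlat, dlon, dlat*dlon])
coef, res, rank, sv = np.linalg.lstsq(A, vtec, rcond=None)
```

In the real method the observations are slant TEC values derived from pseudoranges, so each row of the design matrix would additionally carry a mapping-function factor for the satellite's elevation.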

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 107
84 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for biological control of pests, so it is important to determine the optimal conditions for producing the highest amount of conidiospores of Trichoderma harzianum. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized using Artificial Neural Networks (ANNs). To gather data on this process, 30 experiments were carried out, varying the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75, and 80 percent), with the number of conidiospores per gram of dry mass as the response. The experimental results were used in an iterative algorithm that created 1,110 ANNs with different configurations, from one to three hidden layers, each hidden layer having from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The ANN with the best performance was chosen to simulate the process and maximize conidiospore production. The best-performing ANN has two inputs and one output, with three hidden layers of 3, 10, and 10 neurons, respectively. Its performance shows an R² value of 0.9900, and its root mean squared error is 1.2020. This ANN predicted that a maximum of 644,175,467 conidiospores per gram of dry mass is obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R² value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
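The architecture sweep described above can be sketched in a few lines. Note the substitutions: scikit-learn has no Levenberg-Marquardt trainer, so the L-BFGS solver is used instead; the search covers only three of the 1,110 configurations; and the training data below are synthetic, merely shaped to peak near the reported optimum (117 h, 77% humidity). Every numerical value is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for the culture data: yield (in units of 1e8 spores/g)
# peaks near 117 h and 77 % humidity, mimicking the reported optimum
rng = np.random.default_rng(0)
hours = rng.uniform(48, 136, 150)
humidity = rng.uniform(70, 80, 150)
y = 6.4 - 0.001*(hours - 117)**2 - 0.04*(humidity - 77)**2
y += rng.normal(0, 0.05, hours.size)

X = np.column_stack([(hours - 48)/88, (humidity - 70)/10])  # scale to [0, 1]

# Tiny architecture search (the study swept 1,110 configurations)
best_r2, best_layers = -np.inf, None
for layers in [(3,), (5, 5), (3, 10, 10)]:
    net = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                       max_iter=5000, random_state=0).fit(X, y)
    r2 = r2_score(y, net.predict(X))
    if r2 > best_r2:
        best_r2, best_layers = r2, layers
```

The winning network can then be evaluated over a dense grid of (hours, humidity) pairs to locate the predicted maximum, which is how the study arrived at its optimum.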

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 122
83 The Research on Decentralization Supervision Mechanism of Town and Village Culture Based On Authenticity Evaluation

Authors: Chao Ma

Abstract:

In this paper, the evaluation criteria of an authenticity evaluation system model are taken as the foundation for discussing the establishment of a decentralized supervision system and mechanism for historical and cultural towns and villages. Towns and villages are filtered for authenticity through the levels, characteristic indices, and authenticity assessment of the evaluation model; a coordinating organization of supervising stakeholders can then serve as the thread at the management level. In this way, the supervision mechanism for the cultural inheritance of towns and villages can be organized, and a cultural inheritance management system and mechanism suitable for historical and cultural Chinese towns and villages can be provided. As settlements with strong self-organizing characteristics, towns and villages do not internalize management systems as deeply as cities do. It is therefore necessary to establish a town and village cultural evaluation system based on authenticity criteria. In this paper, the authenticity evaluation system is established by taking the village's value evaluation criteria and protection as its core. Classifying participating options helps distribute limited local resources, protect sites hierarchically in accordance with local town and village character, build an evaluation system that runs through the whole process of cultural inheritance, provide abundant information resources, and establish clear value judgment criteria, so that supervision and management can be strengthened to guard against risk effectively. Through this judgement and filtering of participating options, a management object with clear functions and a supervision and coordination organization are established; the managerial logic of stakeholder decentralization can thus be clarified, the evaluation system established, and a more targeted decentralized supervision system and mechanism for historical and cultural villages ultimately built. Taking this method as a foundation, the cultural protection of towns and villages can not only be promoted in the mass media but can also cultivate a sense of identity among indigenous people returning to historical and cultural villages, and resist the replacement of local culture by city culture.

Keywords: authenticity, rural culture, inheritance, supervision

Procedia PDF Downloads 319
82 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitalization of all kinds of data was probably one of the major advances in all fields of study. This paves the way for analysing data even from disciplines with no initial computational tradition. Linguistics, especially, has a rather manual tradition; still, studies involving the history of language families show striking similarities to bioinformatics (phylogenetic) approaches. Word alignment is a fairly well-studied example of the application of bioinformatics methods to historical linguistics. In this paper we consider not only alignments of strings, i.e., words, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily selects consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter ‘good’ alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond well to each other across languages. The syntax alignments are then filtered for meaningful scores: ‘good’ scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training are performed until the scoring model saturates, i.e., barely changes anymore. An evaluation of the trained scoring model, and of how well it captures evolutionarily meaningful information, is given, together with an assessment of sentence alignment compared to possible phrase structure. The method described here may have flaws due to limited prior information; however, it offers a good starting point for studying languages where little prior knowledge is available and a detailed, unbiased study is needed.
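The consonant-first pre-alignment described above can be sketched as a standard global (Needleman-Wunsch) alignment with a substitution score that weights consonant matches more heavily. The scores below are illustrative placeholders, not the trained model:

```python
# Global alignment of two cognate words with a consonant-weighted score,
# mirroring the pre-alignment step (scores are illustrative, not trained)
CONSONANTS = set("bcdfghjklmnpqrstvwxz")

def score(p, q):
    if p == q:
        return 2 if p in CONSONANTS else 1   # matching consonants weigh more
    return -1

def align(x, y, gap=-1):
    n, m = len(x), len(y)
    # Fill the Needleman-Wunsch dynamic-programming table
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i-1][j-1] + score(x[i-1], y[j-1]),
                          D[i-1][j] + gap,
                          D[i][j-1] + gap)
    # Trace back from the bottom-right corner to recover the alignment
    i, j, ax, ay = n, m, [], []
    while i or j:
        if i and j and D[i][j] == D[i-1][j-1] + score(x[i-1], y[j-1]):
            ax.append(x[i-1]); ay.append(y[j-1]); i -= 1; j -= 1
        elif i and D[i][j] == D[i-1][j] + gap:
            ax.append(x[i-1]); ay.append("-"); i -= 1
        else:
            ax.append("-"); ay.append(y[j-1]); j -= 1
    return "".join(reversed(ax)), "".join(reversed(ay))

a, b = align("nacht", "night")   # German/English cognate pair
```

For this pair the optimal path is the gap-free diagonal, so the two words come back aligned position by position; insertions would appear as "-" characters. Iteratively re-estimating `score` from filtered alignments is the training loop the abstract describes.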

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 127
81 Qualitative Needs Assessment for Development of a Smart Thumb Prosthetic

Authors: Syena Moltaji, Stephanie Posa, Sander Hitzig, Amanda Mayo, Heather Baltzer

Abstract:

Purpose: To critically assess deficits following thumb amputation and delineate elements of an ideal thumb prosthesis from the end-user perspective. Methods: This was a qualitative study based on grounded theory. End-user stakeholder groups of thumb amputees and prosthetists were interviewed. Transcripts were first reviewed whole for familiarity. Data coding was then performed independently by two authors. Coded units were grouped by similarity and reviewed to reach a consensus. Codes were then analyzed for emergent themes by each author, and a consensus meeting was held with all authors to finalize themes. Results: Three patients with traumatic thumb amputation and eight prosthetists were interviewed. Seven themes emerged. The first was the significant impact of losing a thumb, which included codes of functional, mental, and occupational impact. The second theme was the unique nature of each thumb amputee, including goals, readiness for a prosthesis, nature of the injury, and insurance. The third was cost, covering government funding, insurability, and prosthetic pricing. The fourth was patient frustration, which included mismatches between prosthetic expectations and realities, activity limitations, and causes of device abandonment. Themes five and six covered the strengths and weaknesses of current prosthetics, respectively. Theme seven was the ideal design for a thumb prosthetic, including abilities, suspension, and materials. Conclusions: Representative data from stakeholders mapped the current status of thumb prosthetics. Preferences for an ideal thumb prosthetic emerged, with suggestions for a simple, durable design. The abilities to oppose, grasp, and sense pressure were reported as functional priorities. Feasible cost and easy fitting emerged as systemic objectives. These data will be utilized in the development of a sensate thumb prosthetic.

Keywords: smart thumb, thumb prosthetic, sensate prosthetic, amputation

Procedia PDF Downloads 91
80 Perceptions and Experiences of Learners on the Banning of Corporal Punishment in South African Schools

Authors: Londeka Ngubane

Abstract:

The use of corporal punishment is not a new phenomenon in the South African education system, as it was, for a long time, recognised as a fitting form of punishment for ill-disciplined and disobedient children. The growing recognition that corporal punishment is an act of violence against children has resulted in the abolishment of this form of punishment in society, and particularly in schools. However, despite the criminalisation of corporal punishment, it appears to be a disciplinary measure that is persistently used by some educators. Historically and currently, the intimate connection between corporal punishment and discipline has not merely been a convention of human thinking; the practice is given recognition in various dictionary definitions, where ‘to discipline’ is habitually stated to mean ‘to punish’. The notion of ‘disciplining children’ also comes from entrenched common conceptions about children and their relationship with adults. Corporal punishment has long been associated with the rearing and education of children, and the practice thus pervades schooling across nations. In many societies, punishment is a term closely linked with the self-perception of teachers, who feel that they must be ‘in control’ and have ‘the upper hand’ in order to be respected. This impression of control is evident in the widespread conception of education as ‘socializing’ children in ‘desirable ways’: sitting in a formal classroom, behaving in school, following the teacher’s instructions, talking only when asked to, and finishing tasks on time. It was against this backdrop that a comprehensive review of relevant literature was undertaken and individual interviews were conducted with fifty learners from four schools (two junior secondary and two senior secondary) in a selected township area in KwaZulu-Natal Province. The main aim of the study was to explore and understand learners’ views on the administration of corporal punishment despite its legal abolition. It was envisaged that the interviews with the learners would elicit rich data that would enhance the researcher’s insight into their perceptions of the persistent use of corporal punishment as a disciplinary measure in their schools. The study was thus premised on the assumption, strengthened by anecdotal and media evidence, that corporal punishment was still administered in some schools in South Africa, and in schools in the study area in particular.

Keywords: corporal punishment, ban, school learners, South Africa

Procedia PDF Downloads 124
79 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal, multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in motor laminations after manufacturing processes such as punching, housing shrink fitting, and winding. Moreover, in production, considerable attention is given to measuring the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of factors such as elevated temperature, applied and residual stress, and arbitrary magnetization on the magnetic properties of different grades of non-oriented steel. Measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each of the applied measurements were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the need for advanced characterization of magnetic materials used in electric drives and automotive applications.
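As a rough sketch of how hysteresis and eddy current contributions can be separated from loss measurements at several frequencies, the classical two-term loss model P = k_h·f·B² + k_e·f²·B² becomes linear in f after dividing by f·B², so a straight-line fit recovers both coefficients. The frequencies and coefficients below are invented for illustration and are not Brockhaus data; the excess-loss term is omitted for brevity:

```python
import numpy as np

# Total specific loss model: P = k_h*f*B^2 (hysteresis) + k_e*f^2*B^2 (eddy)
# Dividing by f*B^2 gives a line in f: intercept = k_h, slope = k_e
f = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # Hz
B = 1.0                                            # peak polarization, T
k_h_true, k_e_true = 0.02, 4e-5                    # illustrative coefficients
P = k_h_true * f * B**2 + k_e_true * f**2 * B**2   # W/kg
P += np.random.default_rng(5).normal(0, 1e-3, f.size)  # measurement noise

k_e_fit, k_h_fit = np.polyfit(f, P / (f * B**2), 1)   # slope, intercept
```

Repeating the fit on samples measured before and after punching or shrink fitting would expose the stress-induced change in each loss component separately.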

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 122
78 An Investigation of Allied Health and Medical Clinician’s Viewpoint on Prosthetic Rehabilitation and Cognition

Authors: Erinn Dawes, Vida Bliokas, Lyndel Hewitt, Val Wilson

Abstract:

Background: Adapting to a new device post-surgery often poses significant challenges. This study aimed to explore the factors that influence clinicians (occupational therapists, physiotherapists, vascular surgeons, and rehabilitation medicine physicians) when prescribing prosthetic rehabilitation, and to gain insight into clinicians’ perspectives on the role of patient cognition in prosthetic rehabilitation. Method: This research constitutes one segment of a broader action research study. A combination of group and individual interviews, as well as surveys, was used to engage key clinicians involved in the amputation and prosthetic rehabilitation pathway within a local health district in Australia. Major findings: Several factors emerged as essential considerations when prescribing prosthetic rehabilitation, including the patient’s goals, medical history, quality of life, cognitive abilities, and the support available on discharge. Cognition has a far-reaching impact on prosthetic rehabilitation and should be considered at every stage of the amputation journey, from obtaining pre-operative consent to fitting prosthetics, ensuring patient safety upon discharge, and ongoing rehabilitation. The study also revealed variations in opinion among disciplines concerning prosthetic rehabilitation. The largest variance was between vascular surgeons and allied health clinicians on the appropriateness of prosthetic prescription: vascular surgeons believed most patients should not receive prosthetics, whereas allied health clinicians believed most should attempt to use one. Conclusion: This complex area of care, and the clinician’s journey through it, has been made more approachable by the identification of key considerations when prescribing prosthetic rehabilitation. Should clinicians wish, these could be made into a framework to guide pertinent conversations regarding prosthetic rehabilitation, closely linked with the patient’s cognition. While discipline-specific differences existed on the appropriateness of prosthetic rehabilitation, there was a desire to build consensus around a shared approach to identification for patients and clinicians.

Keywords: aging, cognition, multidisciplinary, prosthetic rehabilitation

Procedia PDF Downloads 36
77 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine

Authors: Shrawan Baghel, Helen Cathcart, Biall J. O'Reilly

Abstract:

Amorphous drug formulations have great potential to enhance the solubility, and thus the bioavailability, of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive the system towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) were selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on the extrapolation of configurational entropy to zero (m_D,CE) and on the heating rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs was established, and the relevance of these parameters to the crystallization of amorphous drugs was assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions was studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant ‘n’, which provides insight into the mechanism of crystallization. To probe the crystallization mechanism further, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of a model-free kinetic approach was established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation was predicted. The calculated fragility, glass forming ability (GFA), and crystallization kinetics are found to correlate well with the stability prediction of amorphous solid dispersions. Thus, this research work takes a multidisciplinary approach to establish fragility, GFA, and crystallization kinetics as stability predictors for amorphous drug formulations.
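The isothermal JMA analysis mentioned above fits the crystallized fraction X(t) = 1 − exp(−k·tⁿ) and reads the crystallization mechanism off the Avrami exponent n. A minimal sketch on synthetic data (the rate constant, exponent, and time scale are illustrative, not DPM or CNZ values):

```python
import numpy as np
from scipy.optimize import curve_fit

def jma(t, k, n):
    # Johnson-Mehl-Avrami: crystallized fraction under isothermal conditions
    return 1.0 - np.exp(-k * t**n)

t = np.linspace(0.1, 60.0, 80)            # time, minutes (illustrative)
true_k, true_n = 2e-4, 2.5                # illustrative kinetic parameters
X = jma(t, true_k, true_n)
X_noisy = X + np.random.default_rng(3).normal(0, 0.005, t.size)

(k_fit, n_fit), pcov = curve_fit(jma, t, X_noisy, p0=[1e-3, 2.0])
```

The fitted exponent is then compared against tabulated JMA values (e.g., n near 3 to 4 suggesting three-dimensional growth) to infer the nucleation and growth mechanism.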

Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability

Procedia PDF Downloads 323
76 Role of Web Graphics and Interface in Creating Visitor Trust

Authors: Pramika J. Muthya

Abstract:

This paper investigates the impact of web graphics and interface design on building visitor trust in websites. A quantitative survey approach was used to examine how aesthetic and usability elements of website design influence user perceptions of trustworthiness. A total of 133 participants aged 18-25, who live in urban Bangalore and engage in online transactions, were recruited via convenience sampling. Data were collected through an online survey measuring trust levels based on website design, using validated constructs such as the Visual Aesthetics of Websites Inventory (VisAWI). Statistical analysis, including ordinal regression, was conducted to analyze the results. The findings show a statistically significant relationship between web graphics and interface design and the level of trust visitors place in a website. The goodness-of-fit statistics and highly significant model fitting information provide strong evidence for rejecting the null hypothesis of no relationship. Well-designed visual aesthetics, such as simplicity, diversity, colorfulness, and craftsmanship, are key drivers of perceived credibility. Intuitive navigation and usability also increase trust. The results emphasize the strategic importance for companies of investing in appealing graphic design, consistent with existing theoretical frameworks. There are also implications for taking a user-centric approach to web design and acknowledging the reciprocal link between pre-existing user trust and the perception of visuals. While the findings are broadly generalizable, limitations include possible sampling and self-report biases. Further research can build on these findings to deepen understanding of the nuanced cultural and temporal factors influencing online trust. Overall, this study makes a significant contribution by providing empirical evidence that reinforces the crucial impact of thoughtful graphic design in fostering lasting user trust in websites.

Keywords: web graphics, interface design, visitor trust, website design, aesthetics, user experience, online trust, visual design, graphic design, user perceptions, user expectations

Procedia PDF Downloads 23
75 Corridor Densification Option as a Means for Restructuring South African Cities

Authors: T. J. B. van Niekerk, J. Viviers, E. J. Cilliers

Abstract:

Substantial efforts have been made in South Africa, stemming from the historic political change of 1994, to remedy the inequality and injustice resulting from a dispensation in which spatial patterns were largely based on racial segregation. These spatially distorted patterns predominantly originated with colonialism at the beginning of the twentieth century, leaving a physical imprint on South African cities in terms of architecture, urban layout, and planning that frequently reflected European norms and standards. As a consequence of physical and land use barriers and well-established dual cities, attempts to address spatial injustices have, apart from limited occurrences in metropolitan areas, gravely failed. Meanwhile, incessant segregated growth combined with urban sprawl is becoming increasingly evident. Intervention is a prerequisite to duly address the impact of colonial planning and its legacy, still prevalent in most urban areas. In 1998, the National Department of Transport prepared the “Moving South Africa” strategy, presenting the Corridor Densification Option Model for the first time, as it was deemed better fitted to existing South African urban tenure patterns than more familiar planning approaches. Urban planners are progressively contemplating the Corridor Densification Option Model and its attributes, beyond its transportation emphasis, as an alternative approach to addressing spatial imbalances and attaining the physical integration of contemporary urban forms. To attain a clearer understanding of the Corridor Densification Option Model, its rationale was analysed in greater detail. This research further investigated provisional applications of the model in spatially segregated cities and illustrated that viable options exist to employ it effectively. The research revealed, however, that application of the model will depend on the presence of specific characteristics in spatially segregated cities to warrant its use.

Keywords: corridor densification option model, spatially segregated settlements, integration, urban restructuring

Procedia PDF Downloads 189
74 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved management strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of MSPE (RMSPE), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves than for atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best RMSPE values were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were obtained with the 7D interval for both typical and atypical curves. These results represent a first approach to defining an adequate recording regime for dairy sheep in Latin America and showed the best fit of the Wood model with a 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save costs for dairy sheep producers.
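The Wood model referred to above is the incomplete gamma curve y(t) = a * t^b * exp(-c*t). The abstract's fits were done with R's “minpack.lm”; the sketch below shows the same idea in Python with SciPy, applied to synthetic weekly (7D) test-day records for a hypothetical ewe. All parameter values are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c*t)."""
    return a * t**b * np.exp(-c * t)

# Synthetic weekly test-day records (7D interval); parameters are illustrative
t = np.arange(7.0, 127.0, 7.0)                    # days in milk
rng = np.random.default_rng(0)
y = wood(t, 1.2, 0.25, 0.02) + rng.normal(0.0, 0.02, t.size)

(a, b, c), _ = curve_fit(wood, t, y, p0=[1.0, 0.2, 0.01], maxfev=5000)
t_peak = b / c                                     # time of peak yield (days)
y_peak = wood(t_peak, a, b, c)                     # peak daily yield
tmy = wood(np.arange(1.0, 136.0), a, b, c).sum()   # estimated TMY, days 1-135
```

Refitting after thinning `t` and `y` to every second or fourth record reproduces, in spirit, the 14D/28D comparison made in the abstract.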

Keywords: gamma incomplete, ewes, shape curves, modeling

Procedia PDF Downloads 41
73 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code applies. Several methods can be used to estimate such reliability levels, and many of them require an explicit limit state function (LSF). When the LSF is not available as a closed-form expression, simulation techniques are often employed, but simulation methods are compute-intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of mapping the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach thus mixes the PEM with another classic reliability method, the first-order reliability method (FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
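As a concrete illustration of the moment-based idea, Rosenblueth's two-point estimate evaluates the LSF at the 2^n combinations of mean plus/minus one standard deviation and averages the results; pairing the resulting moments with a Cornell/FORM-style reliability index gives a simulation-free estimate. The toy limit state g = R - S below is hypothetical and is not the bridge model of the study.

```python
import itertools
import math

def pem_moments(g, means, stds):
    """Rosenblueth's 2**n point estimate of the mean and std of g(X)
    for uncorrelated, symmetrically distributed random variables."""
    n = len(means)
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(g(*x))
    m1 = sum(vals) / len(vals)
    m2 = sum(v * v for v in vals) / len(vals)
    return m1, math.sqrt(max(m2 - m1 * m1, 0.0))

# Toy limit state g = R - S (resistance minus load), both assumed normal
# and uncorrelated; values are illustrative only
g = lambda R, S: R - S
mu_g, sigma_g = pem_moments(g, [300.0, 200.0], [30.0, 40.0])
beta = mu_g / sigma_g   # Cornell-type reliability index from the PEM moments
```

For this linear g the PEM moments are exact (mean 100, std 50, so beta = 2.0); for the nonlinear, FEM-based LSFs of the study, the moments are approximations that are then mapped onto an assumed distribution.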

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 327
72 Methods for Material and Process Monitoring by Characterization of (Second and Third Order) Elastic Properties with Lamb Waves

Authors: R. Meier, M. Pander

Abstract:

In accordance with the Industry 4.0 concept, manufacturing process steps, as well as the materials themselves, are going to be increasingly digitalized over the next years. The “digital twin”, representing the simulated and measured dataset of the (semi-finished) product, can be used to control and optimize the individual processing steps and helps to reduce costs and development time in product development, manufacturing, and recycling. In the present work, two material characterization methods based on Lamb waves were evaluated and compared. For demonstration purposes, both methods were applied to a standard industrial product: copper ribbons, often used in photovoltaic modules as well as in high-current microelectronic devices. By numerically fitting the Rayleigh-Lamb dispersion model to measured phase velocities, the second-order elastic constants (Young's modulus, Poisson's ratio) were determined. Furthermore, the effective third-order elastic constants were evaluated by applying elastic, “non-destructive” mechanical stress to the samples. In this way, small microstructural variations due to mechanical preconditioning could be detected for the first time. Both methods were compared with respect to precision and inline application capability. The microstructure of the samples was systematically varied by mechanical loading and annealing, and changes in the elastic ultrasound transport properties were correlated with results from microstructural analysis and mechanical testing. In summary, monitoring the elastic material properties of plate-like structures using Lamb waves is valuable for inline, non-destructive material characterization and manufacturing process control. Second-order elastic constant analysis is robust over a wide range of environmental and sample conditions, whereas the effective third-order elastic constants greatly increase the sensitivity to small microstructural changes. Both Lamb-wave-based characterization methods fit well within the Industry 4.0 concept.
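In the paper, the second-order constants are obtained by fitting the Rayleigh-Lamb dispersion model to measured phase velocities. A much simpler, closely related calculation recovers Young's modulus and Poisson's ratio from the bulk longitudinal and shear wave speeds; the isotropic-elasticity formulas below are standard, while the copper values used are textbook approximations rather than data from this work.

```python
def elastic_constants(c_l, c_t, rho):
    """Isotropic Young's modulus E (Pa) and Poisson's ratio nu from bulk
    longitudinal (c_l) and shear (c_t) wave speeds (m/s) and density rho
    (kg/m^3), via the standard isotropic-elasticity relations."""
    nu = (c_l**2 - 2.0 * c_t**2) / (2.0 * (c_l**2 - c_t**2))
    E = rho * c_t**2 * (3.0 * c_l**2 - 4.0 * c_t**2) / (c_l**2 - c_t**2)
    return E, nu

# Approximate textbook values for polycrystalline copper (assumed, illustrative)
E, nu = elastic_constants(c_l=4760.0, c_t=2325.0, rho=8960.0)
```

This yields roughly E of 130 GPa and nu of 0.34, in the range expected for copper; the Lamb-wave fit in the paper extracts the same two constants from the plate's guided-wave dispersion instead of bulk speeds.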

Keywords: lamb waves, industry 4.0, process control, elasticity, acoustoelasticity, microstructure

Procedia PDF Downloads 201
71 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, optical coherence tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy due to multiplicative speckle noise, so simple edge detection algorithms are unsuitable for detecting retinal layer boundaries. Intensity fluctuations, motion artefacts, and the presence of blood vessels further decrease OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume, thereby segmenting the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and a curve is fitted to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter produces an optimal estimate of the current state of the system by updating its previous state using the available measurements, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of a retinal layer boundary does not work well where the boundary splits into two, e.g., at the optic nerve head. This may be resolved by using a different representation of the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
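A minimal sketch of the tracking idea: each B-scan contributes a noisy measurement of the boundary-curve coefficients, and a Kalman filter with an identity transition model (the boundary changes slowly from slice to slice) smooths them through the volume. The quadratic boundary, noise levels, and tuning constants below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def kalman_track(coeff_measurements, q=1e-4, r=1e-2):
    """Track boundary-curve coefficients across B-scan slices.
    State transition is identity: the boundary changes slowly slice to slice."""
    n = coeff_measurements.shape[1]
    x = coeff_measurements[0].copy()       # initial state: first slice's fit
    P = np.eye(n)
    Q, R, H = q * np.eye(n), r * np.eye(n), np.eye(n)
    smoothed = [x.copy()]
    for z in coeff_measurements[1:]:
        P = P + Q                                         # predict (F = I)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (z - H @ x)                           # update with slice fit
        P = (np.eye(n) - K @ H) @ P
        smoothed.append(x.copy())
    return np.array(smoothed)

# Synthetic volume: constant true boundary y = 2 + 0.5x - 0.01x^2,
# with noisy per-slice polynomial fits standing in for the measurements
rng = np.random.default_rng(1)
true = np.array([2.0, 0.5, -0.01])
meas = true + rng.normal(0.0, 0.05, (40, 3))
est = kalman_track(meas)
```

The filtered coefficient trajectories are substantially less noisy than the raw per-slice fits, which is the property the paper exploits to propagate a boundary through the volume.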

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 454
70 Design and Developing the Infrared Sensor for Detection and Measuring Mass Flow Rate in Seed Drills

Authors: Bahram Besharti, Hossein Navid, Hadi Karimi, Hossein Behfar, Iraj Eskandari

Abstract:

Multiple or missed sowing by seed drills is a common problem on the farm. It causes overuse of seed, wasted energy, higher crop treatment costs, and reduced crop yield at harvest. To detect such faults and monitor the performance of seed drills during sowing, a seed sensor that measures the seed mass flow rate in the delivery tube is essential. In this research, an infrared seed sensor was developed to estimate the seed mass flow rate in seed drills. The developed sensor comprises a pair of spaced-apart circuits, one acting as an IR transmitter and the other as an IR receiver. Optical coverage of the sensing section was obtained by setting IR LEDs and photodiodes directly on opposite sides. Passing seeds interrupted the radiation beams to the photodiodes, which changed the output voltages. The voltage differences of the sensing units were summed by a microcontroller and converted to an analog value by a DAC chip. The sensor was tested using a roller seed metering device with three types of seed: chickpea, wheat, and alfalfa (representing large, medium, and fine seed, respectively). The results revealed a good fit between the voltage received from the seed sensor and the mass flow of seeds in the delivery tube. A linear trend line was fitted to the data collected for the three seeds as a model of seed mass flow. A final mass flow model was then developed for seeds of various sizes based on the voltages received from the seed sensor, the thousand-seed weight, and the equivalent diameter of the seeds. The developed infrared seed sensor, besides monitoring the mass flow of seeds in field operations, can be used to assess the performance of mechanical planter seed metering units in the laboratory and provides an easy calibration method for seed drills before planting in the field.
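The linear trend line between summed sensor voltage and seed mass flow can be reproduced with an ordinary least-squares fit; the calibration pairs below are invented for illustration, since the actual voltages and flow rates of the developed sensor are not reported in the abstract.

```python
import numpy as np

# Hypothetical calibration data: summed sensor voltage vs. bench-metered mass flow
voltage = np.array([0.2, 0.5, 0.9, 1.4, 1.8, 2.3])      # V
mass_flow = np.array([1.1, 2.4, 4.3, 6.8, 8.7, 11.2])   # g/s

slope, intercept = np.polyfit(voltage, mass_flow, 1)     # linear trend line
predicted = slope * voltage + intercept
r = np.corrcoef(mass_flow, predicted)[0, 1]              # goodness of fit
```

In the study, a per-species fit like this is then generalized into a single model by adding thousand-seed weight and equivalent seed diameter as predictors.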

Keywords: seed flow, infrared, seed sensor, seed drills

Procedia PDF Downloads 332
69 Development of Micelle-Mediated Sr(II) Fluorescent Analysis System

Authors: K. Akutsu, S. Mori, T. Hanashima

Abstract:

Fluorescent probes are useful for the selective detection of trace amounts of ions and for biomolecular imaging in living cells. Various kinds of metal-ion-selective fluorescent compounds have been developed, and some have been applied as effective metal-ion-selective fluorescent probes. However, because competition between the ligand and water molecules for the metal ion is a major contribution to the stability of a complex in aqueous solution, it is difficult to develop a highly sensitive, selective, and stable fluorescent probe in aqueous solution. Micelles, which form in aqueous surfactant solutions, provide a unique hydrophobic nano-environment for stabilizing metal-organic complexes in aqueous solution. We therefore focused on the unique properties of micelles to develop a new fluorescence analysis system. We have developed a fluorescence analysis system for Sr(II) using a Sr(II) fluorescent sensor, N-(2-hydroxy-3-(1H-benzimidazol-2-yl)-phenyl)-1-aza-18-crown-6-ether (BIC), and studied its complexation behavior with Sr(II) in micellar solution. We revealed that the stability constant of the Sr(II)-BIC complex was 10 times higher than that in aqueous solution. In addition, the detection limit was improved by up to 300 times in this system. However, the mechanisms of these phenomena have remained obscure. In this study, we investigated the structure of the Sr(II)-BIC complex in aqueous micellar solution by the combined use of extended X-ray absorption fine structure (EXAFS) and neutron reflectivity (NR) methods, to understand the unique properties of the fluorescence analysis system from the viewpoint of structural chemistry. EXAFS and NR experiments were performed on BL-27B at KEK-PF and on BL17 SHARAKU at J-PARC MLF, respectively. The obtained EXAFS spectra and their fitting results indicated that Sr(II) and BIC form a Sr(18-crown-6-ether)-like complex in aqueous micellar solution. The EXAFS results also indicated that the hydrophilic head group of the surfactant molecule directly coordinates Sr(II). In addition, the NR results indicated that the Sr(II)-BIC complex interacts with the surface of the micelles. We therefore concluded that Sr(II), BIC, and the surfactant molecule form a ternary complex in aqueous micellar solution and, at least, that the improvement of the stability constant in micellar solution results from the formation of the Sr(BIC)(surfactant) complex.

Keywords: micelle, fluorescent probe, neutron reflectivity, EXAFS

Procedia PDF Downloads 158
68 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)

Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh

Abstract:

Climate change and global warming are very important issues today. The process of climate change, especially changes in temperature and precipitation, is among the most important issues in the environmental sciences. Climate change means a change in the long-run averages. Iran is located in arid and semi-arid regions due to its proximity to the equator and its location in the subtropical high-pressure zone. In this respect, the Hyrcanian forest is a green necklace between the Caspian Sea and the southern slopes of the Alborz mountain range; at the forty-third session of UNESCO, it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrially significant species of the Hyrcanian forests. In this research, dendroclimatology was applied using tree-ring widths together with temperature and precipitation data from the Shanderman meteorological station located in the study area. The non-parametric Mann-Kendall statistical method was used to investigate the trend of climate change over a 202-year time series of growth rings, and the Pearson correlation method was used to relate the growth-ring widths of beech trees to climatic variables in the region. The results obtained from the time series of beech growth rings showed that the changes in ring width had a downward, negative trend, significant at the 5% level, indicating that climate change has occurred. The average minimum, mean, and maximum temperatures and evaporation in the growing season had an increasing trend, and the annual precipitation had a decreasing trend. In the Pearson correlation analysis of ring width with climate, the correlation with the mean temperatures of July, August, and September was negative; the correlation with the mean maximum temperature of February was positive and significant at the 95% level; and the correlation with June precipitation was positive and significant at the 95% level.
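The Mann-Kendall test used for the ring-width series counts concordant minus discordant pairs and converts the resulting S statistic to a standard normal Z score. Below is a minimal no-ties implementation, applied to a synthetic declining series standing in for the 202-year beech record (the synthetic trend and variability are illustrative only).

```python
import math

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no-ties variance).
    Returns the S statistic and the standardized Z score."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Synthetic 202-year ring-width series with a gradual decline plus variability
widths = [3.0 - 0.01 * t + 0.2 * math.sin(t) for t in range(202)]
s, z = mann_kendall(widths)
```

A Z score below -1.96 corresponds to a significant downward trend at the 5% level, which is the criterion reported for the beech ring widths.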

Keywords: climate change, dendroclimatology, hyrcanian forest, beech

Procedia PDF Downloads 75
67 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase “hydrogen safety” was mostly used in the context of NPP safety. With the rise of interest in “green” and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, the industrial production of hydrogen is meant to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden outburst of combustible gas into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels, and fitting frames. Even ignition of 2,100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than the velocity and pressure at the Chapman-Jouguet condition and do not exceed 80 m/s and 6 kPa, respectively. However, blockage of the space, significant changes of channel diameter along the flame propagation path, and the presence of gas suspensions lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative. Yet conducting deflagration and detonation experiments for each specific industrial facility project, in order to determine safe placement of infrastructure units, does not seem feasible due to the high cost and hazard, while numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work presents the results of this development, together with a comparison of numerical simulation results against experimental series on flame propagation in shock tubes with orifice plates.
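For context on the Chapman-Jouguet state mentioned above, the classical strong-detonation approximations give the CJ speed and pressure from the heat release and the specific-heat ratio of the products. All numbers below are generic textbook assumptions for a stoichiometric hydrogen-air mixture, not values from this work.

```python
import math

# Assumed values for a stoichiometric hydrogen-air mixture (generic textbook
# approximations, not data from this study)
gamma = 1.2      # effective ratio of specific heats of the combustion products
q = 3.4e6        # heat release per unit mass of mixture, J/kg (approximate)
rho1 = 0.85      # initial mixture density, kg/m^3 (approximate)

# Strong-detonation Chapman-Jouguet approximations
D_cj = math.sqrt(2.0 * (gamma**2 - 1.0) * q)   # CJ detonation speed, m/s
p_cj = rho1 * D_cj**2 / (gamma + 1.0)          # CJ pressure, Pa
```

The resulting speed on the order of kilometers per second and pressure on the order of megapascals illustrate why the open-space figures quoted above (80 m/s, 6 kPa) are far below the CJ condition, and why deflagration-to-detonation transition in blocked geometries is the hazardous regime.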

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 58
66 Fast Transient Workflow for External Automotive Aerodynamic Simulations

Authors: Christina Peristeri, Tobias Berg, Domenico Caridi, Paul Hutcheson, Robert Winstanley

Abstract:

In recent years, the demand for rapid innovation in the automotive industry has led to the need for accelerated simulation procedures that retain a detailed representation of the simulated phenomena. The project's aim is to create a fast transient workflow for external aerodynamic CFD simulations of road vehicles. The geometry used was the SAE Notchback Closed Cooling DrivAer model, and the simulation results were compared with data from wind tunnel tests. The meshes generated for this study were of two types: one was a mix of polyhedral cells near the surface and hexahedral cells away from the surface; the other was an octree hex mesh with a rapid method of fitting to the surface. Three grid refinement levels were used for each mesh type, with the largest total cell count, for the octree mesh, close to 1 billion. A series of steady-state solutions was obtained on the three grid levels using a pseudo-transient coupled solver and a k-omega-based RANS turbulence model. A mesh-independent solution was found in all cases at the medium level of refinement, with 200 million cells. Stress-Blended Eddy Simulation (SBES), which uses a shielding function to explicitly switch between RANS and LES modes, was chosen for the transient simulations. A converged pseudo-transient steady-state solution was used to initialize the transient SBES run, which was set up with the SIMPLEC pressure-velocity coupling scheme to reach the fastest solution (on both CPU and GPU solvers). An important part of this project was the use of FLUENT's multi-GPU solver: a Tesla A100 GPU has been shown to be 8x faster than an Intel 48-core Skylake CPU system, leading to significant simulation speed-up compared to the traditional CPU solver. The current study used 4 Tesla A100 GPUs and 192 CPU cores. The combination of rapid octree meshing and GPU computing shows significant promise in reducing time and hardware costs for industrial-strength aerodynamic simulations.

Keywords: CFD, DrivAer, LES, Multi-GPU solver, octree mesh, RANS

Procedia PDF Downloads 85
65 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 9
64 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface-enhanced Raman spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even in aqueous solutions, which is its advantage over IR. The technique can be used in pharmacy (to check the purity of products), in forensics (to determine whether illegal substances were present at a crime scene), or in medicine (serving as a medical test), and much more. Because of the high potential of this technique and its increasing popularity in analytical laboratories, and given the simultaneous absence of appropriate signal-enhancing platforms, which are crucial to observing the Raman effect at low analyte concentrations in solution (1 ppm), we decided to develop our own SERS platforms. As the enhancing layer we chose gold and silver nanoparticles, because these two have the best SERS properties and each has an affinity for different kinds of analytes, which increases the range of research capabilities. The next step was commercialization, which resulted in the creation of the company ‘SERSitive.eu’, focusing on the production of highly sensitive (EF = 10⁵-10⁶), homogeneous, and reproducible (70-80%) substrates. SERSitive SERS substrates are made using electrodeposition of silver or silver-gold nanoparticles. Thanks to a very detailed analysis of studies optimizing parameters such as deposition time, reaction solution temperature, applied potential, reducer, and reagent concentrations, using a standard compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we developed a high-performance process for depositing noble metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we obtained very good results for analytes less suited to SERS measurements. We successfully obtained intense and, more importantly, highly repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, as well as for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 126
63 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to obtain its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work, numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to obtain the final sand mold. The sand flow is modelled with the discrete element method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests are used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, the rolling friction coefficient, the sliding friction coefficient, and the coefficient of restitution (COR). The Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin contact model. The main focus is on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile is compared to the sand pile predicted by the DEM for different values of the rolling and sliding friction coefficients. Once the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are determined. Here the sliding coefficient is found from experiments, and the rolling resistance is investigated by comparing observations of how the green sand interacts with the chamber wall during experiments with the DEM simulations, which are calibrated accordingly. The coefficient of restitution is tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation is investigated in these simulations for different particle sizes and coefficients of restitution, and scaling laws are considered to relate the energy dissipation to these parameters. Finally, the parameter values found are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
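The Hertz-Mindlin model mentioned above uses the measured Young's modulus and Poisson's ratio through an effective contact modulus; the normal-force law for two identical spheres is sketched below. The grain parameters are illustrative assumptions, not calibrated green-sand values.

```python
import math

def hertz_normal_force(E, nu, R, overlap):
    """Hertzian normal contact force (N) between two identical spheres,
    as used in the normal direction of the Hertz-Mindlin DEM contact model.
    E: Young's modulus (Pa), nu: Poisson's ratio, R: particle radius (m),
    overlap: normal overlap delta (m)."""
    E_star = E / (2.0 * (1.0 - nu**2))   # effective modulus, identical materials
    R_star = R / 2.0                     # effective radius, identical spheres
    return (4.0 / 3.0) * E_star * math.sqrt(R_star) * overlap**1.5

# Assumed illustrative grain parameters (not calibrated green-sand values)
F = hertz_normal_force(E=5e7, nu=0.3, R=1e-4, overlap=1e-6)
```

The nonlinear delta^1.5 stiffness is why the compression-test E and nu feed directly into the contact forces, while the friction and restitution coefficients are calibrated separately against pile profiles and video footage.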

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 457
62 Estimates of (Co)Variance Components and Genetic Parameters for Body Weights and Growth Efficiency Traits in the New Zealand White Rabbits

Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi

Abstract:

The genetic parameters of growth traits in the New Zealand White rabbits maintained at Sheep Breeding and Research Station, Sandynallah, The Nilgiris, India were estimated by partitioning the variance and covariance components. The (co)variance components of body weights at weaning (W42), post-weaning (W70) and marketing (W135) age and growth efficiency traits viz., average daily gain (ADG), relative growth rate (RGR) and Kleiber ratio (KR) estimated on a daily basis at different age intervals (1=42 to 70 days; 2=70 to 135 days and 3=42 to 135 days) from weaning to marketing were estimated by restricted maximum likelihood, fitting six animal models with various combinations of direct and maternal effects. Data were collected over a period of 15 years (1998 to 2012). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for W42, W70 and W135 were 0.42 ± 0.07, 0.40 ± 0.08 and 0.27 ± 0.07, respectively. Heritability estimates of growth efficiency traits were moderate to high (0.18 to 0.42). Of the total phenotypic variation, maternal genetic effect contributed 14 to 32% for early body weight traits (W42 and W70) and ADG1. The contribution of maternal permanent environmental effect varied from 6 to 18% for W42 and for all the growth efficiency traits except for KR2. Maternal permanent environmental effect on most of the growth efficiency traits was a carryover effect of maternal care during weaning. Direct maternal genetic correlations, for the traits in which maternal genetic effect was significant, were moderate to high in magnitude and negative in direction. Maternal effect declined as the age of the animal increased. The estimates of total heritability and maternal across year repeatability for growth traits were moderate and an optimum rate of genetic progress seems possible in the herd by mass selection. 
The estimates of genetic and phenotypic correlations among body weight traits were moderate to high and positive; among growth efficiency traits were low to high with varying directions; between body weights and growth efficiency traits were very low to high in magnitude and mostly negative in direction. Moderate to high heritability and higher genetic correlation in body weight traits promise good scope for genetic improvement provided measures are taken to keep the inbreeding at the lowest level.
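The variance-component ratios behind these estimates are simple to compute once REML has partitioned the phenotypic variance: direct heritability is the additive variance over the total, and the maternal-genetic ratio is defined analogously. The component values below are illustrative round numbers chosen to match the reported W42 estimates, not the study's actual REML output.

```python
def heritability(var_a, var_m, var_pe, var_e):
    """Direct heritability h2 and maternal-genetic ratio m2 from animal-model
    (co)variance components: additive (var_a), maternal genetic (var_m),
    maternal permanent environment (var_pe), residual (var_e)."""
    var_p = var_a + var_m + var_pe + var_e   # total phenotypic variance
    return var_a / var_p, var_m / var_p

# Illustrative components roughly consistent with W42 (h2 ~ 0.42, m2 ~ 0.20)
h2, m2 = heritability(var_a=420.0, var_m=200.0, var_pe=80.0, var_e=300.0)
```

Dropping or adding terms in the denominator is exactly what distinguishes the six animal models compared in the abstract, which is why the log-likelihood ratio test is needed to pick the appropriate one per trait.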

Keywords: genetic parameters, growth traits, maternal effects, rabbit genetics

Procedia PDF Downloads 428
61 Modeling the Impact of Aquaculture in Wetland Ecosystems Using an Integrated Ecosystem Approach: Case Study of Setiu Wetlands, Malaysia

Authors: Roseliza Mat Alipiah, David Raffaelli, J. C. R. Smart

Abstract:

This research takes a new approach, integrating information from both the environmental and social sciences to inform effective management of the wetlands. A three-stage research framework was developed for modelling the drivers and pressures imposed on the wetlands and their impacts on the ecosystem and the local communities. Firstly, a Bayesian Belief Network (BBN) was used to predict the probability of anthropogenic activities affecting the delivery of different key wetland ecosystem services under different management scenarios. Secondly, Choice Experiments (CEs) were used to quantify the relative preferences which a key wetland stakeholder group (aquaculturists) held for the delivery of different levels of these key ecosystem services. Thirdly, a Multi-Criteria Decision Analysis (MCDA) was applied to produce an ordinal ranking of the alternative management scenarios, accounting for their impacts upon ecosystem service delivery as perceived through the preferences of the aquaculturists. This integrated ecosystem management approach was applied to a wetland ecosystem in Setiu, Terengganu, Malaysia, which currently supports a significant level of aquaculture activity. This research has produced clear guidelines to inform policy makers considering alternative wetland management scenarios: Intensive Aquaculture, Conservation or Ecotourism, in addition to the Status Quo. The findings are as follows. Firstly, the BBN revealed that current aquaculture activity is likely to have significant impacts on water column nutrient enrichment, but trivial impacts on caged fish biomass, especially under the Intensive Aquaculture scenario. Secondly, the best-fitting CE models identified several stakeholder sub-groups among the aquaculturists, each with distinct sets of preferences for the delivery of key ecosystem services. Thirdly, the MCDA identified Conservation as the most desirable scenario overall, based on its ordinal ranking in the eyes of most of the stakeholder sub-groups. 
Ecotourism and the Status Quo were the next most preferred scenarios, and Intensive Aquaculture was the least desirable. The methodologies developed through this research provide an opportunity for improving the planning and decision-making processes that aim to deliver sustainable management of wetland ecosystems in Malaysia.
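A weighted-sum MCDA ranking of the kind used above can be sketched as below. The scenario scores and the criterion weights, standing in for the CE-derived preferences of one stakeholder sub-group, are purely illustrative:

```python
# Hypothetical scenario scores on three ecosystem-service criteria (0..1 scale)
scores = {
    "Conservation":          {"water_quality": 0.9, "fish_biomass": 0.6, "income": 0.3},
    "Ecotourism":            {"water_quality": 0.7, "fish_biomass": 0.5, "income": 0.6},
    "Status Quo":            {"water_quality": 0.5, "fish_biomass": 0.5, "income": 0.5},
    "Intensive Aquaculture": {"water_quality": 0.2, "fish_biomass": 0.4, "income": 0.9},
}
# Illustrative weights for one stakeholder sub-group (would come from the CE models)
weights = {"water_quality": 0.5, "fish_biomass": 0.3, "income": 0.2}

def rank(scores, weights):
    """Ordinal ranking of scenarios by their weighted-sum score, best first."""
    totals = {name: sum(weights[c] * v for c, v in crit.items())
              for name, crit in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

ranking = rank(scores, weights)
# With these illustrative numbers, Conservation ranks first and
# Intensive Aquaculture last, mirroring the ordering reported above.
```

In the actual study each stakeholder sub-group would contribute its own weight vector, and the ordinal rankings would be compared across sub-groups.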

Keywords: Bayesian belief network (BBN), choice experiments (CE), multi-criteria decision analysis (MCDA), aquaculture

Procedia PDF Downloads 264
60 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods

Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke

Abstract:

Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat-treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but below -6 °C it undergoes gelation, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was performed for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Before testing, samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours). Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105 °C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, a reduction of about 60%. The effect of freezing time on these values was significant: even the enthalpy of samples stored frozen for 1 day was significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. 
The denaturation temperature and dry matter content did not change significantly with either the frozen storage period or the thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins underwent aggregation, denaturation or other conversions regardless of how they were thawed.
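The enthalpy evaluation described above, integrating the heat-flow peak after subtracting a linear baseline, can be sketched as follows. The sampling values and the sample mass are illustrative; a real DSC trace would of course be much denser:

```python
def denaturation_enthalpy(time_s, heat_flow_mw, sample_mass_mg):
    """Peak area above a linear baseline drawn between the first and last
    points of the transition region, integrated by the trapezoidal rule.
    With heat flow in mW and time in s, the result is in mJ/mg, i.e. J/g."""
    t0, t1 = time_s[0], time_s[-1]
    h0, h1 = heat_flow_mw[0], heat_flow_mw[-1]
    baseline = [h0 + (h1 - h0) * (t - t0) / (t1 - t0) for t in time_s]
    excess = [h - b for h, b in zip(heat_flow_mw, baseline)]
    area = sum((excess[i] + excess[i + 1]) / 2.0 * (time_s[i + 1] - time_s[i])
               for i in range(len(time_s) - 1))
    return area / sample_mass_mg

# Illustrative endothermic peak over a flat baseline, 10 mg sample
t = [0, 1, 2, 3, 4, 5, 6, 7, 8]
hf = [0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0]
dh = denaturation_enthalpy(t, hf, 10.0)  # → 0.9 J/g
```

A shrinking peak area under this integration, with the peak temperature unchanged, is exactly the pattern reported above: less denaturable protein, same transition temperature.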

Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing

Procedia PDF Downloads 99
59 A Versatile Standing Cum Sitting Device for Rehabilitation and Standing Aid for Paraplegic Patients

Authors: Sasibhushan Yengala, Nelson Muthu, Subramani Kanagaraj

Abstract:

The abstract reports on the design of a modular and affordable standing cum sitting device to meet the requirements of paraplegic patients of different physiques. Paraplegic patients need external support for the lower limbs and trunk to adopt the correct posture while standing against gravity. This support can come from a tilt table or a standing frame, which the patient can use to stay in a vertical posture. Standing frames are devices fitted to support a person in a weight-bearing posture. Commonly, these devices support and lift the end-user in shifting from a sitting to a standing position. The merits of standing for a paraplegic patient with a spinal injury are numerous. Even when there is limited control of the muscles that ordinarily support the user in a vertical position, the standing stance improves blood pressure, increases bone density, improves resilience and range of motion, and improves the user's sense of well-being. One limitation of standing frames is that they typically serve a single function and cannot be used for different purposes. Therefore, users are often compelled to purchase more than one of these devices, each purpose-built for a specific activity. Another frequent concern with standing frames is manoeuvrability; it is crucial to provide a convenient adjustment range for all users. Thus, there is a need for a standing frame with multiple uses that is economical for a larger population, as well as for additional adjustment mechanisms that lessen shear and accommodate a broad range of users. The proposed Versatile Standing cum Sitting Device (VSD) is designed to change from standing to a comfortable sitting position using a series of mechanisms. First, a locking mechanism is provided to lock the VSD in a standing stance. 
Second, a damping mechanism ensures that the VSD shifts gradually from a standing to a sitting position when the locking mechanism is disengaged. The height of the headrest can be adjusted via lock knobs. Owing to its modular adjustments, this device can be used in clinics for rehabilitation purposes irrespective of the patient's anthropometry. It can facilitate the patient's daily routine while in therapy, giving the patient the comfort of sitting when tired. The device also makes rehabilitation accessible to the general population.

Keywords: paraplegic, rehabilitation, spinal cord injury, standing frame

Procedia PDF Downloads 175
58 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information: for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete records is rather small, so most of the available information is neglected; moreover, the complete records may themselves be a strongly distorted subsample. In addition, the reason that data is missing might itself contain information, which such an approach ignores. An interesting question is, therefore, whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small share of complete data (baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all records, the distortion of the initial training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. 
After finding the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Moreover, demand estimates derived from the whole data set are much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
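As a sketch of the final step, one might fit a log-normal distribution to the prices searched for in one property segment and read the survival function as the probability that a searcher's willingness to pay exceeds a candidate asking price. The distributional form and the synthetic prices below are assumptions for illustration, not the paper's actual specification:

```python
import math
import statistics

def fit_lognormal(prices):
    """Fit a log-normal by the mean and standard deviation of log-prices."""
    logs = [math.log(p) for p in prices]
    return statistics.fmean(logs), statistics.stdev(logs)

def survival(price, mu, sigma):
    """P(willingness to pay >= price) under the fitted log-normal:
    the standard normal survival function at (ln price - mu) / sigma."""
    z = (math.log(price) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Synthetic target prices from search subscriptions in one segment
prices = [math.exp(x) for x in (13.0, 13.4, 13.8, 14.2, 14.6)]
mu, sigma = fit_lognormal(prices)
# At the median of the fitted distribution the selling probability is 0.5,
# and it decreases monotonically as the asking price rises.
```

Comparing such survival curves across data sets (baseline versus imputed) is then what reveals whether the complete-case sample is representative.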

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 259