Search results for: toxicity testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3976


286 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant. Because sewage sludge contains a large amount of substances harmful to human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the disposal methods being tested is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of alternative fuel production from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology covers the entire logistics chain, from sludge extraction through mechanical moisture reduction to about 40%, transport to the pelletizing line, drying for pelleting, and pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting. The analysis of economic efficiency is based on simulating the cash flows associated with implementing the project over its lifetime. For a given required return on invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project over its lifetime is zero. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value. The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.), and the inputs work with market prices. At the same time, the opportunity cost principle is respected; the use of waste for alternative fuel is credited with the avoided costs of waste disposal. The methodology also accounts for the emission allowances saved by displacing coal with alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce pellets of sufficiently high quality from sludge. A mixture of sludge and paper waste has proved to be a more suitable material for pelleting. Preliminary results of the economic analysis also show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
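
The price-setting step described above amounts to a root-finding problem: find the product price at which the project NPV equals zero for the chosen discount rate. A minimal sketch of this search is given below; the cash-flow composition (investment, operating costs, avoided disposal costs, emission-allowance credits) and all numbers are illustrative assumptions, not values from the study.

```python
from scipy.optimize import brentq

def npv(price_eur_per_gj, discount_rate=0.08, lifetime_years=10,
        investment=2_000_000, annual_output_gj=150_000,
        annual_costs=900_000, avoided_disposal=300_000, co2_credit=50_000):
    """NPV of the pelletizing project for a given product price (all inputs illustrative)."""
    annual_cash_flow = (price_eur_per_gj * annual_output_gj
                        + avoided_disposal + co2_credit - annual_costs)
    return -investment + sum(annual_cash_flow / (1 + discount_rate) ** t
                             for t in range(1, lifetime_years + 1))

# Price (EUR/GJ) at which NPV = 0, i.e. the investor earns exactly the discount rate.
breakeven_price = brentq(lambda p: npv(p), 0.0, 50.0)
print(f"Break-even pellet price: {breakeven_price:.2f} EUR/GJ")
```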

Keywords: Alternative fuel, Economic analysis, Pelleting, Sewage sludge

Procedia PDF Downloads 139
285 Applying Miniaturized Near-Infrared Technology for Commingled and Microplastic Waste Analysis

Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero

Abstract:

Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest explored peaks to the deepest points in the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a validated analytical technique to sample, analyze, and quantify MPs is still in the development and testing stages. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization promises to revolutionize the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through multivariate modelling. Principal Components Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA, a supervised classification tool was applied to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared according to a designed ternary composition plot. After the exploratory PCA, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures in the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated, and coloured or not, with no sample pre-treatment. The technique can be applied to larger sample volumes and even allows on-site evaluation, thus satisfying the need for a high-throughput strategy.
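
As a rough illustration of the PCA/PLSR workflow described above, the sketch below fits a PLS regression to NIR spectra of ternary PE/PP/PS mixtures and predicts the composition of a held-out test set. The spectra here are randomly generated stand-ins; the 42/21 calibration/test split mirrors the abstract, but the channel count, component number, and preprocessing are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((63, 125))          # 63 MicroNIR spectra (125 wavelength channels, assumed)
Y = rng.dirichlet(np.ones(3), 63)  # ternary PE/PP/PS fractions summing to 1

# Exploratory PCA to look for trends in the spectra
scores = PCA(n_components=2).fit_transform(X)
print("PCA scores shape:", scores.shape)

# Calibrate PLSR on 42 mixtures, predict the remaining 21
X_cal, X_test, Y_cal, Y_test = train_test_split(X, Y, train_size=42, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, Y_cal)
Y_pred = pls.predict(X_test)
rmsep = np.sqrt(((Y_pred - Y_test) ** 2).mean(axis=0))
print("RMSEP per polymer (PE, PP, PS):", rmsep)
```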

Keywords: chemometrics, microNIR, microplastics, urban plastic waste

Procedia PDF Downloads 167
284 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks

Authors: Shaleeza Sohail

Abstract:

Self- and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations give students an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students' contributions in a group environment based on self and peer evaluations. All these systems lack the progressive aspect of assessment and feedback, which is the most crucial factor for ongoing improvement and lifelong learning. In addition, many assignments and projects are designed so that smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may give students insight into their performance as a group member for a particular component after submission. Ideally, it should also create an opportunity to improve in the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual's contribution score on an ongoing basis. Hence, students see the change in their own contribution scores over the complete project, based on the smaller assessment components. The Self-Assessment Factor is calculated as an indicator of how close a student's perception of their own contribution is to the contribution perceived by the other members of the group. The Peer-Assessment Factor compares the perceived contribution of one student to the average value for the group. Our system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback lets students and teachers visualize the consistency of members' contributions as perceived within the group. Teachers can use these factors to judge the individual contributions of group members in combined tasks and allocate marks/grades accordingly. This factor is shown to students across all groups undertaking the same assessment, so group members can compare the efficiency of their group with that of other groups. Our system gives instructors the flexibility to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members' contributions on a scale from significantly higher to significantly lower. Preliminary testing of the prototype system was done with a set of predefined cases to explicitly show the relation of the system's feedback factors to the case studies. The results show that such progressive feedback can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of group dynamics in order to improve team unity and the fair division of team tasks.
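
The abstract does not give formulas for the three factors; a plausible minimal implementation, treating each as a comparison between self-rated and peer-rated contribution scores, might look as follows. The exact definitions used by the system are assumptions here.

```python
import numpy as np

# ratings[i][j] = contribution of member j as rated by member i (0-10 scale, assumed)
ratings = np.array([[8, 6, 7],
                    [7, 7, 6],
                    [9, 5, 7]], dtype=float)

def self_assessment_factor(ratings, j):
    """How close member j's self-rating is to the mean rating by the other members."""
    peers = np.delete(ratings[:, j], j)
    return ratings[j, j] - peers.mean()   # > 0 means self-perception exceeds peers' view

def peer_assessment_factor(ratings, j):
    """Member j's peer-rated contribution relative to the group average."""
    peers = np.delete(ratings[:, j], j)
    return peers.mean() - ratings.mean()

def group_coherence_factor(ratings):
    """Low spread of peer-rated contributions suggests a coherent group."""
    peer_means = [np.delete(ratings[:, j], j).mean() for j in range(ratings.shape[1])]
    return float(np.std(peer_means))

for j in range(3):
    print(j, self_assessment_factor(ratings, j), peer_assessment_factor(ratings, j))
print("coherence:", group_coherence_factor(ratings))
```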

Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system

Procedia PDF Downloads 191
283 Model Tests on Geogrid-Reinforced Sand-Filled Embankments with a Cover Layer under Cyclic Loading

Authors: Ma Yuan, Zhang Mengxi, Akbar Javadi, Chen Longqing

Abstract:

In a sand-filled embankment with a cover layer, the outside of the sand fill is covered with lime-modified clay, and a geotextile is placed between the fill and the clay. The fill is usually river sand, and the improved clay protects the sand core against rainwater erosion. This type of embankment poses practical problems such as high fill heights, construction restrictions, and steep slopes. Reinforcement can be applied to the sand-filled embankment with a cover layer to solve complicated problems such as irregular settlement caused by poor embankment stability. At present, research on sand-filled embankments with cover layers mainly focuses on sand properties, construction technology, and slope stability; experimental studies are few, and the deformation characteristics and stability of reinforced sand-filled embankments need further study. In addition, experimental research that considers cyclic loading is relatively rare. A subgrade structure of geogrid-reinforced sand-filled embankment with a cover layer was proposed. The mechanical characteristics, deformation properties, reinforcement behavior, and ultimate bearing capacity of the embankment structure under cyclic loading were studied. In this structure, the geogrids in the sand core and the cover soil pass through the geotextile, which is arranged in continuous sections so that the geogrids can cross it horizontally. The Unsaturated/Saturated Soil Triaxial Test System of Geotechnical Consulting and Testing Systems (GCTS), USA, was modified to form the loading device for this test, and a strain collector was used to measure the deformation and earth pressure of the embankment. A series of cyclic loading model tests was conducted on the geogrid-reinforced sand-filled embankment with a cover layer for different numbers of reinforcement layers, reinforcement lengths, and cover layer thicknesses. The settlement of the embankment, the normal cumulative deformation of the slope, and the earth pressure were studied under these different conditions. In addition to the cyclic loading model tests, model experiments on embankments subjected to combined cyclic-static loading were carried out to analyze the ultimate bearing capacity under different loads. The experimental results showed that the vertical cumulative settlement under long-term cyclic loading increases as the number of reinforcement layers, the length of the reinforcement, and the thickness of the cover soil decrease. These three factors likewise influence the normal deformation of the embankment slope. The earth pressure around the loading point is significantly affected by placing geogrid in a model embankment. After cyclic loading, the decline in the ultimate bearing capacity of the reinforced embankment is effectively reduced, in contrast to the unreinforced embankment.

Keywords: cyclic load, geogrid, reinforcement behavior, cumulative deformation, earth pressure

Procedia PDF Downloads 123
282 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car

Authors: Erik Vassøy Olsen, Hirpa G. Lemu

Abstract:

Inspired by the Formula 1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions in which university and college students worldwide compete with a single-seat race car they have designed and built. The design of the chassis or frame is a key component of the competition because its weight and stiffness properties are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight directly influences the design of other components in the car; among other things, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands reducing the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned chassis requirements, the monocoque design should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the monocoque design requirements in the competition rules, several tests were conducted, including a 3-point bending test, a perimeter shear test, and a test for absorbed energy. The test panels were produced in-house with dimensions equivalent to the side impact zone of the monocoque, i.e., 275 mm x 500 mm, so that the test results would be representative. Different lay-ups of the test panels with identical core material and the same number of carbon fibre layers were tested and compared, and the influence of core material thickness was also studied. Furthermore, analytical calculations and numerical analysis were conducted to check compliance with the stated rules for Structural Equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite material selected for the monocoque design have structural properties equivalent to those of a welded steel frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different lay-ups and compositions.
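
A first-pass analytical check of structural equivalency can be made by comparing the flexural rigidity of a sandwich panel against the steel baseline. The sketch below uses the standard thin-face sandwich approximation; all dimensions and moduli are illustrative assumptions, not the team's actual lay-up.

```python
# Flexural rigidity per unit width (N*mm^2 per mm) of a sandwich panel vs. a steel sheet.
E_face = 60_000.0   # carbon-fibre laminate modulus in MPa (assumed)
t_face = 1.0        # face sheet thickness in mm (assumed)
t_core = 15.0       # core thickness in mm (assumed)
d = t_core + t_face # distance between face-sheet centroids

# Thin-face approximation: the faces carry bending, the core carries shear.
D_sandwich = E_face * t_face * d ** 2 / 2

E_steel = 210_000.0  # MPa, SAE/AISI 1010 baseline
t_steel = 1.2        # mm, equivalent steel sheet thickness (assumed)
D_steel = E_steel * t_steel ** 3 / 12

print(f"sandwich/steel flexural rigidity ratio: {D_sandwich / D_steel:.1f}")
```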

Keywords: composite material, Formula student, ION racing, monocoque design, structural equivalence

Procedia PDF Downloads 505
281 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung to the fungus Aspergillus fumigatus may range from invasive aspergillosis in the immunocompromised, to a fungal ball or infection within a lung cavity in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase diagnostic specificity. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia (> 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE, and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years ± SD 15.8) selected for analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). 13 patients fulfilled the Modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L, respectively (non-significant), while median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, significance level p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible, and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L, respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001). This was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would show whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ in this respect from the Gm3-negative group. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.
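
The group comparison reported above can be reproduced generically with a one-way ANOVA followed by a cut-off screen. The sketch below uses scipy on synthetic Gm3 values drawn around the reported group means; the data are stand-ins (SDs for two groups are assumed), not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic specific IgG (Gm3, mgA/L) centred on the reported group means
definite = rng.normal(125.17, 54.84, 13)
possible = rng.normal(18.61, 10.0, 15)   # SD assumed
not_abpa = rng.normal(30.05, 15.0, 52)   # SD assumed

f_stat, p_val = stats.f_oneway(definite, possible, not_abpa)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Screening with the proposed cut-off
cutoff = 80.0  # mgA/L
sensitivity = (definite >= cutoff).mean()
specificity = (np.concatenate([possible, not_abpa]) < cutoff).mean()
print(f"Gm3 >= {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```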

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 212
280 ChatGPT Performs at the Level of a Third-Year Orthopaedic Surgery Resident on the Orthopaedic In-training Examination

Authors: Diane Ghanem, Oscar Covarrubias, Michael Raad, Dawn LaPorte, Babar Shafiq

Abstract:

Introduction: Standardized exams have long been considered a cornerstone in measuring cognitive competency and academic achievement. Their fixed nature and predetermined scoring methods offer a consistent yardstick for gauging intellectual acumen across diverse demographics. Consequently, the performance of artificial intelligence (AI) in this context presents a rich yet unexplored terrain for quantifying AI's understanding of complex cognitive tasks and simulating human-like problem-solving skills. Publicly available AI language models such as ChatGPT have demonstrated utility in text generation and even problem-solving when provided with clear instructions. Amidst this transformative shift, the aim of this study is to assess ChatGPT's performance on the Orthopaedic In-Training Examination (OITE). Methods: All 213 OITE 2021 web-based questions were retrieved from the AAOS-ResStudy website. Two independent reviewers copied and pasted the questions and response options into ChatGPT Plus (version 4.0) and recorded the generated answers. All media-containing questions were flagged and carefully examined. Twelve OITE media-containing questions that relied purely on images (clinical pictures, radiographs, MRIs, CT scans) and could not be rationalized from the clinical presentation were excluded. Cohen's Kappa coefficient was used to examine the agreement of ChatGPT-generated responses between reviewers. Descriptive statistics were used to summarize the performance (% correct) of ChatGPT Plus. The 2021 norm table was used to compare ChatGPT Plus' performance on the OITE to that of national orthopaedic surgery residents in the same year. Results: A total of 201 questions were evaluated by ChatGPT Plus. Excellent agreement was observed between raters for the 201 ChatGPT-generated responses, with a Cohen's Kappa coefficient of 0.947. 45.8% (92/201) were media-containing questions. ChatGPT had an average overall score of 61.2% (123/201); its score was 64.2% (70/109) on non-media questions. When compared to the performance of all national orthopaedic surgery residents in 2021, ChatGPT Plus performed at the level of an average PGY3. Discussion: ChatGPT Plus is able to pass the OITE with a satisfactory overall score of 61.2%, ranking at the level of third-year orthopaedic surgery residents. More importantly, it provided logical reasoning and justifications that may help residents grasp evidence-based information and improve their understanding of OITE cases and general orthopaedic principles. With further improvements, AI language models such as ChatGPT may become valuable interactive learning tools in resident education, although further studies are still needed to examine their efficacy and impact on long-term learning and OITE/ABOS performance.
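
Inter-rater agreement of the kind reported (kappa = 0.947) can be computed directly from the two reviewers' recorded answers; a minimal sketch, with placeholder answer lists, is shown below.

```python
from sklearn.metrics import cohen_kappa_score

# Answers ('A'-'D') recorded by each reviewer for the same questions (placeholders)
reviewer_1 = ["A", "C", "B", "D", "A", "B", "C", "C"]
reviewer_2 = ["A", "C", "B", "D", "A", "B", "C", "A"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa: {kappa:.3f}")  # 1.0 = perfect agreement
```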

Keywords: artificial intelligence, ChatGPT, orthopaedic in-training examination, OITE, orthopedic surgery, standardized testing

Procedia PDF Downloads 94
279 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Incorporating these complex structures and material properties into numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. In total, 49 slices were obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices were performed by programming in Python. A digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements are generated for plane stress or plane strain models, depending on the option chosen. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated in ABAQUS by importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on the load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to determine the shape and content of aggregates in the concrete. This may be further compared with test results from concrete cores and can be used as an important tool for the strength evaluation of concrete.
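
The Python image-processing step described above (RGB to grayscale conversion, then mapping pixel intensities to the 0-9 relative-strength scale for voids, ITZ, mortar, and aggregates) could be sketched as below. The intensity thresholds are assumptions, since the actual values depend on the CT scanner calibration.

```python
import numpy as np

def classify_slice(rgb):
    """Map an RGB CT slice (H x W x 3, uint8) to the 0-9 mesoscale code used in the study."""
    gray = rgb.mean(axis=2)                       # RGB -> grayscale (BW)
    code = np.empty(gray.shape, dtype=np.uint8)
    code[gray < 30] = 0                           # voids (threshold assumed)
    code[(gray >= 30) & (gray < 80)] = 2          # 1-3: ITZ boundary band (assumed)
    code[(gray >= 80) & (gray < 170)] = 5         # 4-6: mortar (assumed)
    code[gray >= 170] = 8                         # 7-9: aggregates (assumed)
    return code

slice_rgb = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)  # stand-in CT slice
mesoscale = classify_slice(slice_rgb)
print({v: int((mesoscale == v).sum()) for v in (0, 2, 5, 8)})     # pixel counts per phase
```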

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 242
278 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on the nondestructive testing method of analyzing the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials, and lubricants, based on data on wear processes, in order to increase the service life and ensure the safe operation of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken, and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. These parameters include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio, and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. Such morphological analysis of wear particles has been introduced into the procedure for condition monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the wear type characteristic of the central drive and the drive box is surface fatigue wear; the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-assisted classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the engine drive box over its operating time, the kinetics of the wear processes were studied. From the results of this study, and by comparing the series of tribodiagnostic criteria, wear state ratings, and statistics of the morphological analysis results, norms for the normal operating regime were developed. The study allowed the development of wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of the occurrence of an increased wear mode and, accordingly, for preventing engine failures in flight.
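
Part of the computer-assisted counting step can be illustrated by a simple size/shape classifier over detected particles; the size bands follow the abstract (spherical up to 10 μm, flocculent 20-200 μm), while the particle descriptors and the circularity threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class WearParticle:
    diameter_um: float   # equivalent diameter from image analysis
    circularity: float   # 1.0 = perfect circle (assumed descriptor)

def classify(p: WearParticle) -> str:
    """Rough classification following the size bands quoted in the abstract."""
    if p.diameter_um <= 10 and p.circularity > 0.8:
        return "spherical (early surface fatigue)"
    if 20 <= p.diameter_um <= 200:
        return "flocculent (developed fatigue wear)"
    return "other"

sample = [WearParticle(6, 0.92), WearParticle(85, 0.35), WearParticle(15, 0.6)]
counts = {}
for p in sample:
    label = classify(p)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```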

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 264
277 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is beneficial for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep-learning-based face recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a purpose-designed landmark detection method. The system also includes the function of grading and correcting the form of clients during exercise. Experienced coaches and personal trainers can tell a client's limits from their facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection to capture subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as distal mobility and its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch, Fitbit, etc., for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.
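
A bare-bones version of the landmark-detection building block might use an off-the-shelf detector such as dlib's 68-point facial shape predictor; this is one possible realization, not necessarily the detector designed in the study, and the model and image file paths are assumptions.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point model (file path assumed; downloadable from dlib's model zoo)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("client_frame.jpg")  # one frame from a training session
for face in detector(img):
    shape = predictor(img, face)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    # Changes in landmark positions across frames could feed the effort-level classifier
    print(len(landmarks), "landmarks detected")
```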

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 135
276 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks; for instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast, which requires identifying the single best forecast; we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights for combining the different forecast models. Third, we use these ensembles, compare them with several existing models from the literature for forecasting storm surge levels, and investigate whether developing a complex ensemble model is indeed needed. To this end, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
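
The correlation- and standard-deviation-weighted ensembles proposed above can be written down compactly; the sketch below forms both weightings plus the simple-average benchmark and compares them by RMSE. The synthetic forecasts stand in for the NYHOPS model outputs, and the particular inverse-spread weighting is one simple choice, not necessarily the paper's formula.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 6, 200)) * 1.5                    # observed surge (stand-in)
models = np.stack([obs + rng.normal(0, s, 200) for s in (0.2, 0.4, 0.8)])  # 3 forecasts

def rmse(pred):
    return np.sqrt(np.mean((pred - obs) ** 2))

# Weights from agreement with observations over a training window (here: full record)
corr_w = np.array([np.corrcoef(m, obs)[0, 1] for m in models])
corr_w /= corr_w.sum()
std_w = 1.0 / models.std(axis=1)      # inverse spread (one simple weighting choice)
std_w /= std_w.sum()

ensembles = {
    "simple average": models.mean(axis=0),
    "correlation-weighted": corr_w @ models,
    "std-weighted": std_w @ models,
}
for name, pred in ensembles.items():
    print(f"{name}: RMSE = {rmse(pred):.3f}")
```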

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 310
275 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the time constraints imposed on quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique reduces the time needed to establish test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
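
The prediction task (mapping a test step specification to one or more automation components) and the 'Subset Accuracy' criterion can be sketched with standard multi-label tooling; the vectorizer, classifier, and toy data below are assumptions, since the paper's actual pipeline is not specified.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import accuracy_score

steps = ["open ignition", "press brake pedal", "check warning lamp",
         "open ignition and check lamp"]
# Binary indicator matrix: which automation components implement each step (toy labels)
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 1]])

X = TfidfVectorizer().fit_transform(steps)
clf = MultiOutputClassifier(RandomForestClassifier(random_state=0)).fit(X, labels)

pred = clf.predict(X)
# For multi-label data, sklearn's accuracy_score is exactly Subset Accuracy:
# a prediction only counts if the full component set matches.
print("Subset accuracy:", accuracy_score(labels, pred))
```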

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 134
274 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: The Assimilation of Industry Certifications within a Business Program Curriculum

Authors: Thomas J. Bell III

Abstract:

This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information System program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value first as developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study will examine the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators. The research methodology will consist of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factors contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to determine whether a correlation exists between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification credentialing success. And should these industry-certified students land industry-related jobs that reflect the value of their certification credentials, stakeholder value has arguably been realized.

Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation

Procedia PDF Downloads 109
273 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana

Authors: Francis Gyapong, Denis Yar

Abstract:

The cell phone has become an indispensable tool in hospital settings, yet cell phones are used in hospitals without restriction, regardless of their unknown microbial load. The indiscriminate use of mobile devices, especially at health facilities, can act as a vehicle for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and are also a potential health hazard for the users themselves as well as their family members, a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma and to increased vaccine uptake via SMS. Nevertheless, cell phones can be a significant potential source of nosocomial infections. Many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities; however, few such studies have been reported in our region. This study assessed microbial contamination of the cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize the bacterial microflora on cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples of cell phones of HCWs at the Laboratory, Dental Unit, Children's Ward, Theatre, and Male Ward were randomly collected for laboratory examination. Suspensions of the swab samples were each streaked on blood and MacConkey agar and incubated at 37 °C for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates. Cell phones from the Male Ward, Dental Unit, Laboratory, Theatre, and Children's Ward each yielded at least three different bacterial isolates: 85.7%, 71.4%, 57.1%, and 28.6% (the last for both the Theatre and the Children's Ward), respectively. The bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%), and Klebsiella spp. (6%). Except for the Children's Ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was isolated only at the Children's Ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%), and ciprofloxacin (67%), and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on the cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with patients and phones should be enforced to reduce the risk of nosocomial infections.

Keywords: mobile phones, bacterial contamination, patients, MMGH

Procedia PDF Downloads 106
272 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf

Authors: Simran Gill, Samuel Bartels

Abstract:

Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clovers (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed, but their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient-management methods of weed control. The experiment had two phases. The first was conducted in plastic containers under controlled greenhouse conditions on a blend of cool-season turfgrasses comprising perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis), and creeping red fescue (Festuca rubra). It involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, giving a total of 30 experimental units. The parameters assessed during weekly data collection included a visual quality rating of weeds (0-9 scale), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings. The first experiment ran for 12 weeks, with three applications at intervals of 4 weeks. The results indicated that the combination of ammonium sulphate and iron sulphate was the most effective at halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the treatments were comparable in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.

Keywords: broadleaf weeds, nitrogen, iron, turfgrass

Procedia PDF Downloads 77
271 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains

Authors: Neha Farid

Abstract:

Due to the overuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is a growing public health concern on a global scale. MDR infections can penetrate the food chain, posing a serious risk to both consumers and animals. Foodborne pathogens are biological agents that tend to cause disease in the host upon ingestion. The major reservoirs of foodborne pathogens are food-producing animals such as cows, pigs, goats, sheep, and deer, whose intestines are heavily colonized with several different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the cases of food-related illness reported each year are caused by bacterial food pathogens. When ingested, these pathogens survive and reproduce, or form various toxins inside host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis, or gastroenteritis, which induces fever, vomiting, and severe diarrhea in the affected person. Campylobacter jejuni is a gram-negative, curved-rod-shaped bacterium causing foodborne illness. Its major sources are livestock and poultry; chicken in particular is highly colonized with Campylobacter jejuni. The widespread growth of antibiotic-resistant bacteria and the slowdown in the discovery of new classes of medicines are serious public health concerns. The objective of this study is to evaluate the antibacterial activity of certain broad-range antibiotics and of bacteriocins from specific Enterococcus faecium strains, with a view to blocking microbial contamination pathways and thereby safeguarding food by reducing deterioration, contamination, and foodborne illness. The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by gram staining and biochemical testing. They were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. Campylobacter strains showed resistance to all the antibiotics, whereas Listeria was found to be resistant only to nalidixic acid and erythromycin. The strains were then tested against the two bacteriocins isolated from Enterococcus faecium. The bacteriocins showed better antimicrobial activity against the food pathogens and could be used as potential antimicrobials for food preservation. Thus, the study concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problem of food spoilage and severe foodborne disease.

Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins

Procedia PDF Downloads 73
270 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation

Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma

Abstract:

Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. For estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of a few trees within the field, is still generally used to perform yield estimation, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study proposes a methodology based on combining unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate a point cloud and a digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into individual fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated, and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can cover only a limited number of trees, so drone data were integrated in order to estimate the yield over a whole orchard. To achieve this, features derived from the drone digital surface model were linked to the laser-scanner yield estimates of selected trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data in citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The agreement between the results obtained by the proposed methodology and the yield estimates of the conventional method varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and the possibility of extension to other fruit trees.
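
The processing chain (clustering fruit points from the TLS cloud, then regressing drone-derived features onto scanner-based yields) can be outlined as follows; the DBSCAN parameters, the per-fruit weight relation, and the DSM features are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# 1) Cluster fruit-classified TLS points into individual fruits (eps in metres, assumed)
fruit_points = rng.random((500, 3)) * 0.5
clusters = DBSCAN(eps=0.03, min_samples=10).fit_predict(fruit_points)
n_fruits = len(set(clusters)) - (1 if -1 in clusters else 0)

# 2) Per-fruit geometry -> weight (relation assumed), summed to a per-tree yield
# ... omitted: fit spheres/ellipsoids per cluster, map volume to weight ...

# 3) Regress drone DSM features onto TLS-based yields for the scanned trees
dsm_features = rng.random((30, 3))            # e.g. crown area, height, volume (assumed)
tls_yield_kg = dsm_features @ [40, 25, 60] + rng.normal(0, 2, 30)
reg = LinearRegression().fit(dsm_features, tls_yield_kg)

# 4) Predict yield for every tree in the orchard from drone data alone
orchard_features = rng.random((200, 3))
orchard_yield = reg.predict(orchard_features)
print(n_fruits, "fruits in sample tree; orchard total ~", round(orchard_yield.sum()), "kg")
```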

Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling

Procedia PDF Downloads 144
269 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies are particularly challenging to manage due to the high degree of uncertainty surrounding them. Results coming directly out of a research lab tend to be at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them then turn out to be fruitless. High stakes naturally call for better results, which makes the patenting decision harder. A good and well-protected patent goes a long way toward commercialization of a technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions virtually non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure for valuing a technology economically and making the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors for the patenting decision using survey ratings; the ratings then helped us generate sound relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut-off schedule, a schedule of realistic practices that has yet to be seen in current research. Although a technology office may choose to deviate from our cut-offs, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by the practitioners on our expert panel and the university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instituting a heuristically simple yet theoretically solid method for the patenting decision. Our findings identify the top decision factors, decision processes, and decision thresholds of key parameters, offering a more practical perspective that further completes extant knowledge. Our results could be affected by our sample size and may be somewhat biased by our focus on the Silicon Valley area. Future research, blessed with larger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze the decision factors for different industries.
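
The scoring-and-weighted-averaging step behind the patenting index can be expressed in a few lines; the factor names, weights, and decision threshold below are placeholders, since the paper's actual values come from its survey and expert panel.

```python
# Weighted-average patenting index (factor names, weights, and cut-off are placeholders)
weights = {
    "market_potential": 0.30,
    "technical_maturity": 0.25,
    "novelty": 0.25,
    "enforceability": 0.20,
}

def patenting_index(scores: dict) -> float:
    """Scores on a 1-10 scale; returns the weighted average."""
    return sum(weights[f] * scores[f] for f in weights)

candidate = {"market_potential": 8, "technical_maturity": 4,
             "novelty": 7, "enforceability": 6}
index = patenting_index(candidate)
print(f"index = {index:.2f} ->", "file patent" if index >= 6.0 else "hold / re-evaluate")
```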

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 344
268 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership

Authors: Markéta Chmelíková, David Ullrich, Iva Burešová

Abstract:

The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly in their ability to manage extreme situations effectively, yet existing diagnostic tools are often criticized mainly for their low predictive power. Every modern army now focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and the performance of military commanders. The context of military leadership is well known for its life-threatening nature; it is therefore crucial to research stress load in this specific context so that human failure in managing extreme situations can be anticipated. The aim of the submitted pilot study, using a 24-hour experiment, is to verify whether a specific combination of psychodiagnostic methods can predict which people are well equipped to cope with increased stress load. In our pilot study, we conducted a 24-hour experiment with an experimental group (N=13) in a bomb shelter and a control group (N=11) in a classroom. Both groups consisted of military leadership students (N=11) and psychology students (N=13) and were equalized in terms of study type and gender. Participants were administered the following test battery of personality characteristics: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). This battery was administered only once, at the beginning of the experiment. Alongside this, a battery consisting of the Test of Attention (d2) and the Bourdon test was administered four times in total, at 6-hour intervals. To better simulate an extreme situation, we tried to induce sleep deprivation: participants were required to try not to fall asleep throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would manifest in impaired cognitive performance, this expectation was significantly confirmed in only one measurement, which can be interpreted as marginal in the context of multiple testing. This finding is a fundamental insight into the issue of stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter, together with sleep deprivation, does not simulate sufficient stress for an individual to be reflected in the level of cognitive performance. In light of these findings, it would be interesting in future work to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.
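The multiple-testing caveat can be made concrete: with two cognitive tests at four time points, eight comparisons are in play, and a single nominally significant result rarely survives correction. The sketch below uses invented p-values purely for illustration; it is not the study's data.

```python
# Illustration of the multiple-testing point: eight repeated comparisons
# (4 time points x 2 attention tests) corrected with the Holm method.
# The p-values here are invented for illustration only.
from statsmodels.stats.multitest import multipletests

p_values = [0.04, 0.21, 0.48, 0.33, 0.12, 0.75, 0.09, 0.61]
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(p_adj)      # the nominal 0.04 becomes 0.32 after Holm correction
print(reject)     # -> all False: no finding survives correction
```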

Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic

Procedia PDF Downloads 92
267 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the country's growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk, and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey contact method with 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variations were observed in the type of involvement experienced by each religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics. Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process out of worry about making unsuitable food choices. Research implications are outlined and potential avenues for further exploration are identified.
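SPSS's Two-Step procedure has no exact open-source counterpart; a commonly used analogue is a Gaussian mixture model with the cluster count chosen by BIC, which Two-Step also uses internally. The sketch below illustrates that analogue on synthetic stand-ins for the six inputs; it is an assumption-laden illustration, not the authors' SPSS workflow.

```python
# Hedged Python analogue of Two-Step clustering: Gaussian mixtures over a
# range of cluster counts, with BIC selecting the solution. The data are
# synthetic placeholders for the six standardized survey inputs.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(141, 6))            # placeholder for the survey data
X = StandardScaler().fit_transform(X)

models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
segments = models[best_k].predict(X)
shares = np.bincount(segments) / len(segments)
print(best_k, np.round(shares, 3))       # segment shares, cf. 39.7% etc.
```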

Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK

Procedia PDF Downloads 58
266 Hybrid Living: Emerging Out of the Crises and Divisions

Authors: Yiorgos Hadjichristou

Abstract:

The paper will focus on the hybrid living typologies brought about by the global crisis. Mixing generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essential sustainable housing approach and the respective urban development. The thematic will be based on methodologies developed in the academic and educational environment, including students' research, and on the practical side of architecture, including case studies executed by the author on the island of Cyprus. Both strands of the research deal with an explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities involves understanding new ways of living that include, among others: re-introduction of natural phenomena, accommodation of work and services in the living realm, interchange of public and private, and injections of communal events into individual living territories. The issues and binary questions raised by what is natural and what artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the prominent scar of the dividing ‘Green Line’ and the ‘ghost city’ of Famagusta waiting to be resurrected, the conventional way of understanding the limits and definitions of property is irreversibly shaken. The situation is further aggravated by the unprecedented crisis on the island. These observations set the premises for re-examining urban development and the respective sustainable housing in a synergy where their characteristics exchange positions, merge into each other, simultaneously emerge and vanish, and change from permanent to ephemeral. This fluidity of conditions attempts to render a future of the built and unbuilt realm whose main focus is redirected to the human and the social. Weather and social ritual scenographies, together with ‘spontaneous urban landscapes’ of ‘momentary relationships’, will suggest a recipe for emerging urban environments and sustainable living. Thus, the paper aims to open a discourse on the future of sustainable living merged with sustainable urban development in relation to the imminent solution of the division of the island, where the issue of property has become the main obstacle to be overcome. At the same time, it attempts to link this approach to the global need for a sustainable evolution of the urban and living realms.

Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena

Procedia PDF Downloads 265
265 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to the interaction mechanisms between external devices and the tissue. Determination of the mechanical properties of the stomach is thus crucial both for understanding gastric pathologies and for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances in medical devices interacting with gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed the dominant fiber orientations: from each stomach, four samples were longitudinally oriented and four were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25% strain, with 200 s rest periods at each step, was performed, followed by a ramp test to 25% strain at three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was of the same order as the variation among samples within the same stomach. For samples from the same stomach, the mean deviation percentage across all 20 parameters was 21% and 18% for longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, which showed that the 20 material parameters from each of the six stomachs came from the same distribution (P > 0.05). Direction-dependency was also examined: the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human and porcine stomach.
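The nonparametric one-way ANOVA reported above corresponds to a Kruskal-Wallis test. A minimal sketch for a single material parameter, with placeholder values standing in for the measured stiffness coefficients, might look as follows.

```python
# Sketch of the nonparametric one-way ANOVA (Kruskal-Wallis) used to test
# whether a material parameter differs across the six stomachs.
# The values are placeholders; eight samples per stomach (n = 48).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
stiffness_by_stomach = [rng.normal(1.0, 0.2, 8) for _ in range(6)]

h_stat, p_value = stats.kruskal(*stiffness_by_stomach)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
# P > 0.05 would indicate the six stomachs share one distribution,
# matching the heterogeneity result reported above.
```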

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 434
264 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are highly complex process plants with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages before the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut points of distillates in the crude distillation unit are crucial for the efficiency of the downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are distinguished, after separation, by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN six to ten, and kerosene sixteen to twenty-two. Physical properties of the three crude distillation unit products were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years, during which several different crude oil grades were processed. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
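A minimal sketch of the Savitzky-Golay-plus-PLS part of such a pipeline is shown below (EMSC and GILS are omitted for brevity). The data, filter settings, and component count are placeholders rather than the study's settings.

```python
# Sketch of an SG-preprocessed PLS calibration with cross-validation.
# X rows are spectra; y is an ASTM reference property. All synthetic.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3001))   # 10000-4000 cm-1 at 2 cm-1 spacing
y = rng.normal(size=400)           # placeholder reference values

# First-derivative Savitzky-Golay filter along the wavenumber axis.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.3f}")    # judged against ASTM reproducibility
```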

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 134
263 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts

Authors: Marco Sewald

Abstract:

Due to the current enforcement of extraterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents, and other non-resident aliens on their worldwide income, based on domestic U.S. tax laws. To enforce these policies, the U.S. has implemented ‘saving clauses’ in all tax treaties and introduced several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediaries Agreements (QI), and Intergovernmental Agreements (IGA) that require Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules is creating additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers, and FFIs in Europe have reacted by increasingly denying specific financial services to this population. The number of U.S. citizens renouncing citizenship has increased dramatically in recent years. A case study was chosen as the methodology and research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, where the boundaries between phenomenon and context are not clearly evident and multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether it is in accordance with desirable moral, political, and economic aims, or whether it may serve other causes. The research critically evaluates the financial and non-financial consequences and develops and discusses strategies to avoid the undesired consequences of extraterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover; (2) pay U.S. double/surcharge taxes and tax preparation fees and accept the imposed product limitations; and (3) renounce U.S. citizenship and pay possible exit taxes, tax preparation fees, and the requested $2,350 fee to renounce. While the first strategy is unlawful and therefore unsuitable, the second is only suitable if the U.S. citizen residing abroad plans to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit exposure to U.S. double and surcharge taxation and the limitations on financial products. The results are believed to add a perspective to the current academic discourse on U.S. citizenship-based taxation, currently dominated by U.S. scholars, while providing practical strategies for the affected population at the same time.

Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship

Procedia PDF Downloads 203
262 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis

Authors: Mohamed Ali Abdennadher

Abstract:

Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
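A hedged sketch of the particle-isolation step (thresholding, distance transform, marker-based watershed) of the kind commonly scripted in Python for CT grain data is given below. The synthetic volume and all parameters are illustrative, not the study's actual script.

```python
# Sketch of CT particle separation: Otsu threshold, Euclidean distance
# transform, then marker-based watershed. Volume and settings are
# placeholders for real CT data.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

volume = np.random.rand(64, 64, 64)           # placeholder for CT data
binary = volume > threshold_otsu(volume)      # solid vs. pore space

distance = ndimage.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=3, labels=binary.astype(int))
markers = np.zeros(binary.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-distance, markers, mask=binary)  # one label per grain
print("particles found:", labels.max())
```

Per-grain shape statistics (e.g., equivalent diameters for the size distribution) could then be read off the labeled volume with skimage.measure.regionprops.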

Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology

Procedia PDF Downloads 40
261 A Clinical Audit on Screening Women with Subfertility Using Transvaginal Scan and Hysterosalpingo Contrast Sonography

Authors: Aarti M. Shetty, Estela Davoodi, Subrata Gangooly, Anita Rao-Coppisetty

Abstract:

Background: Testing the patency of the Fallopian tubes is one of several protocols for investigating subfertile couples. Both hysterosalpingogram (HSG) and laparoscopy and dye test have been used as tubal patency tests for several years, with well-known limitations. Hysterosalpingo Contrast Sonography (HyCoSy) can be used as an alternative tool to HSG to screen the patency of the Fallopian tubes, with the advantages of being non-ionising and of using transvaginal scanning to diagnose pelvic pathology. Aim: To determine the indications for and analyse the performance of transvaginal scan and HyCoSy in Broomfield Hospital. Methods: We retrospectively analysed the fertility workup of 282 women who attended the HyCoSy clinic at our institution from January 2015 to June 2016. An audit proforma was designed to aid data collection. Data were collected from patient notes and electronic records and included patient demographics (age, parity, type of subfertility (primary or secondary), duration of subfertility, past medical history) and baseline investigations (hormone profile and semen analysis). Findings of the transvaginal scan, HyCoSy, and laparoscopy were also noted. Results: The most common indication for referral was primary fertility workup of couples who had failed to conceive despite a year of intercourse; other indications were recurrent miscarriage, history of ectopic pregnancy, post-reversal of sterilization (vasectomy and tuboplasty), post-gynaecological surgery (loop excision, cone biopsy), and amenorrhea. The basic fertility workup showed that 34% of the men had an abnormal semen analysis. HyCoSy was successfully completed in 270 (95%) women using ExEm foam and transvaginal scanning. In these 270 patients, 535 tubes were examined in total: 495/535 (93%) tubes were reported as patent and 40/535 (7.5%) as blocked. A total of 17 (6.3%) patients required a laparoscopy and dye test after HyCoSy. In these 17 patients, 32 tubes were examined under laparoscopy, and 21 tubes had findings concordant with HyCoSy, a concordance rate of 65%. In addition, 41 patients had some form of pelvic pathology (endometrial polyp, cervical polyp, fibroid, bicornuate uterus) detected during the transvaginal scan and were referred for corrective surgery after attending the HyCoSy clinic. Conclusion: Our audit shows that HyCoSy and transvaginal scanning can be a reliable screening test for low-risk women. Furthermore, HyCoSy has diagnostic accuracy comparable to HSG in identifying tubal patency, with the additional advantage of screening for pelvic pathology. With the addition of 3D scanning, pulsed Doppler, and other non-invasive imaging modalities, HyCoSy may potentially replace laparoscopy and chromopertubation in the near future.

Keywords: hysterosalpingo contrast sonography (HyCoSy), transvaginal scan, tubal infertility, tubal patency test

Procedia PDF Downloads 252
260 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach

Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere

Abstract:

Considering environmental issues, interest in applying sustainable materials in industry is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced polylactic acid (PLA) is gaining popularity; however, these thermoplastic-based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase, using injection-molded plate structures. A first important parameter is the fiber volume fraction, which directly affects the adhesion characteristics of the surface; this parameter is varied between 0 (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection-molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability and thus adhesion; therefore, this work considers three different roughness conditions, with different mechanical treatments yielding roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy cured at 23 °C for 48 hours. In order to ensure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double lap tests are considered, since these are well documented and straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog-bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates with different fiber volume fractions. Numerical modelling (non-linear FEA) relates the effects of the considered parameters to the stiffness and strength of the different joints obtained through the abovementioned tests, and ongoing work deals with developing dedicated numerical models incorporating the different adhesion parameters. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already apparent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed: given the adhesive type (and viscosity), the surface energy increases roughly in proportion to the surface roughness, to some extent, and this becomes more pronounced as the fiber volume fraction increases. Secondly, the ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.
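For context, single-lap results are typically reduced to an apparent shear strength, i.e., failure load divided by bonded area. The short sketch below applies that standard reduction to the stated 25 mm width and 10 mm overlap; the failure loads are invented placeholders, not measured values.

```python
# Apparent shear strength of a single-lap joint: tau = F / (w * L).
# Geometry follows the stated specimen; loads are hypothetical.
WIDTH_MM, OVERLAP_MM = 25.0, 10.0
BOND_AREA_MM2 = WIDTH_MM * OVERLAP_MM               # 250 mm^2

failure_loads_n = {0: 900.0, 15: 1100.0, 30: 1300.0}  # by % fiber fraction

for vf, load in failure_loads_n.items():
    tau = load / BOND_AREA_MM2                      # N/mm^2 = MPa
    print(f"fiber fraction {vf:2d}%: tau = {tau:.2f} MPa")
```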

Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint

Procedia PDF Downloads 129
259 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression

Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld

Abstract:

Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are that mental space is the primary organizing principle in the mind and that all psychological issues can be treated by first locating, and then relocating, the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). This work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer, and higher in the visual field, and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems to be consistently associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor; however, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter, and more transparent in order to identify the problem, or the cause of the depression, which lies behind the darkness. It was hypothesized that the darkness is a subjective side-effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing the client to work on it and thereby reduce the feelings of depression; this makes repression of the issue obsolete. Results: Clients could easily access their 'sadness' when asked to do so, and finding the location of the dark zones proved straightforward as well. In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental-spatial approach to depression can be proven effective, this would be very good news; the Society of Mental Space Psychology is now seeking sponsorship for a scaled-up experiment. Conclusions: For spatial cognition and research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beyond pure scientific interest, this discovery has a clinical implication when darkness can be connected to depression; darkness also seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.

Keywords: depression, spatial cognition, spatial imagery, social panorama

Procedia PDF Downloads 171
258 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda-setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized character of online communication and digital news creates more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong: one tends to move away from Zip codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits a common agenda shared by others who think as they do and who communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding which issues are more relevant. Consequently, the individualized focus of media choices shapes how audiences perceive political news coverage and what they know about political issues. The project uses recent data from The American Trends Panel survey conducted by the Pew Research Center to explore the nuanced nature of agenda setting at the individual level amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research explores the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media within a larger agenda-setting process; it thereby encourages agenda-setting scholars to further examine effects at the individual, rather than the aggregate, level. In addition to its theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience in light of changes in market share due to the spread of digital and social media.
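As a hedged illustration of the parametric procedures mentioned (regression and path analysis), the sketch below fits a simple two-equation path model with synthetic variables. The variable names, effect sizes, and mediation structure are placeholders for exposition, not Pew panel items or the study's actual model.

```python
# Illustrative path model: ideology -> selective media use -> perceived
# issue salience, estimated as sequential OLS regressions. All synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
ideology = rng.normal(size=500)                       # left-right scale
media_mix = 0.6 * ideology + rng.normal(size=500)     # selective exposure
salience = 0.5 * media_mix + rng.normal(size=500)     # issue salience

m1 = sm.OLS(media_mix, sm.add_constant(ideology)).fit()
m2 = sm.OLS(salience,
            sm.add_constant(np.column_stack([media_mix, ideology]))).fit()

indirect = m1.params[1] * m2.params[1]   # a*b indirect (mediated) effect
print(f"indirect effect of ideology via media use: {indirect:.3f}")
```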

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 61
257 The Social Ecology of Serratia entomophila: Pathogen of Costelytra giveni

Authors: C. Watson, T. Glare, M. O'Callaghan, M. Hurst

Abstract:

The endemic New Zealand grass grub (Costelytra giveni, Coleoptera: Scarabaeidae) is an economically significant grassland pest in New Zealand. Due to its impact on production within the agricultural sector, one of New Zealand's primary industries, several methods are being used to either control or prevent the establishment of new grass grub populations in pasture. One such method involves a biopesticide based on the bacterium Serratia entomophila. This species is one of the causative agents of amber disease, a chronic disease of the larvae that results in death by septicaemia after approximately 2 to 3 months. The ability of S. entomophila to cause amber disease is dependent upon the presence of the amber disease associated plasmid (pADAP), which encodes the key virulence determinants required for the establishment and maintenance of the disease. Following the collapse of grass grub populations within the soil, resulting either from natural population build-up or from application of the bacteria, non-pathogenic plasmid-free Serratia strains begin to predominate in the soil. Whilst the interactions between S. entomophila and grass grub larvae are well studied, less is known about the interactions between plasmid-bearing and plasmid-free strains, particularly the potential impact of these interactions on the efficacy of an applied biopesticide. Using a range of constructed strains with antibiotic tags, in vitro (broth culture) and in vivo (soil and larvae) experiments were conducted with inoculants comprising differing ratios of isogenic pathogenic and non-pathogenic Serratia strains, enabling the relative growth of pADAP+ and pADAP- strains under competition to be assessed. In nutrient-rich broth, the non-pathogenic pADAP- strain outgrew the pathogenic pADAP+ strain by day 3 when the two were inoculated in equal quantities, and by day 5 when applied as the minority inoculant; however, there was an overall gradual decline in the number of viable bacteria of both strains over the 7-day period. Similar results were obtained in additional experiments using the same strains in continuous broth cultures re-inoculated at 24-hour intervals, although in these cultures the viable cell count did not diminish over the 7-day period. When the same ratios were assessed in soil microcosms with limited available nutrients, the strains remained relatively stable over a 2-month period. Additionally, in vivo grass grub co-infection assays using the same ratios of tagged Serratia strains gave results similar to those observed in soil, but there was also evidence of horizontal transfer of pADAP from the pathogenic to the non-pathogenic strain within the larval gut after 4 days. Whilst the influence of competition is more apparent in broth cultures than within soil or larvae, further testing is required to determine whether this competition between pathogenic and non-pathogenic Serratia strains influences efficacy and disease progression, and how this may affect the ability of S. entomophila to cause amber disease in grass grub larvae when applied as a biopesticide.

Keywords: biological control, entomopathogen, microbial ecology, New Zealand

Procedia PDF Downloads 157