Search results for: dependent
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2567

407 Development of a Multi-Variate Model for Matching Plant Nitrogen Requirements with Supply for Reducing Losses in Dairy Systems

Authors: Iris Vogeler, Rogerio Cichota, Armin Werner

Abstract:

Dairy farms are under pressure to increase productivity while reducing environmental impacts. Effective fertiliser management practices are critical to achieving this. Determining optimum nitrogen (N) fertilisation rates which maximise pasture growth and minimise N losses is challenging due to variability in plant requirements and in the likely near-future supply of N by the soil. Remote sensing can be used to map the N nutrition status of plants and to rapidly assess the spatial variability within a field. However, an algorithm is lacking which relates the N status of the plants to the expected yield response to additions of N. The aims of this simulation study were to (i) develop a multi-variate model for determining the N fertilisation rate for a target percentage of the maximum achievable yield based on the pasture N concentration, (ii) use this algorithm for guiding fertilisation rates, and (iii) evaluate the model regarding pasture yield and N losses, including N leaching, denitrification and volatilisation. The simulations were carried out using the Agricultural Production Systems Simulator (APSIM) for an irrigated ryegrass pasture in the Canterbury region of New Zealand. A multi-variate model was developed and used to determine monthly required N fertilisation rates based on pasture N content prior to fertilisation and targets of 50, 75, 90 and 100% of the potential monthly yield. These monthly optimised fertilisation rules were evaluated by running APSIM for a ten-year period to provide yield and N loss estimates from both non-urine and urine-affected areas. Typical fertilisation rates of 150 and 400 kg N/ha/year were included for comparison. Assessment of pasture yield and leaching from fertiliser and urine patches indicated a large reduction in N losses when N fertilisation rates were controlled by the multi-variate model.
However, the reduction in leaching losses was much smaller when the effects of urine patches were taken into account. The proposed approach, which uses biophysical modelling to develop a multi-variate model for determining optimum N fertilisation rates dependent on pasture N content, is very promising. Further analysis under different environmental conditions, as well as validation, is required before the approach can be used to help adjust fertiliser management practices to temporal and spatial N demand based on the nitrogen status of the pasture.
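The fertilisation-rate lookup described above can be sketched as a simple function. This is a hypothetical illustration of the idea only: the response-surface shape, the 4.5% "non-limiting" N concentration, and all coefficients are invented, not the surface fitted to APSIM output in the study.

```python
# Illustrative sketch of a fertilisation-rate lookup from pasture N
# concentration and a yield target. All coefficients are hypothetical;
# the study fits its surface to APSIM simulation output.

def n_rate(pasture_n_pct, yield_target):
    """Monthly N fertilisation rate (kg N/ha) for a target fraction
    (0-1) of the potential monthly yield, given the pasture N
    concentration (%) measured before fertilisation."""
    # Deficit below a hypothetical 'non-limiting' N concentration of 4.5%
    deficit = max(0.0, 4.5 - pasture_n_pct)
    # Hypothetical surface: the rate grows with the N deficit and with
    # the yield target, with a quadratic term in the target.
    rate = deficit * (20.0 * yield_target + 40.0 * yield_target ** 2)
    return round(rate, 1)

# Rates for the four yield targets used in the abstract (50-100%),
# at an example pasture N concentration of 3.0%.
monthly_rates = {t: n_rate(3.0, t) for t in (0.5, 0.75, 0.9, 1.0)}
```

A pasture already at the non-limiting concentration receives no fertiliser, and the required rate rises with the yield target, mirroring the qualitative behaviour described in the abstract.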

Keywords: APSIM modelling, optimum N fertilization rate, pasture N content, ryegrass pasture, three-dimensional surface response function.

Procedia PDF Downloads 128
406 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs

Authors: Johannes Mertl

Abstract:

Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms or plants also float through the air: so-called bioaerosols. The group of bioaerosols covers a broad spectrum of particle sizes and includes fungi, bacteria, viruses, spores, and tree, flower and grass pollen of high relevance for allergy sufferers. Depending on the environmental climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled via the respiratory tract, with a potential for inflammatory diseases of the airways, such as asthma or allergic rhinitis. As a consequence, air filter systems of ventilation and air conditioning devices are required to meet very high standards to prevent, or at least lower, the number of allergens and airborne germs entering the indoor air. Still, filter systems are merely classified by their separation rates on well-defined mineral test dust, while no sufficiently standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency for particularly fine and respirable particles (< 10 microns) depends not only on shape and particle diameter but also on density and physicochemical properties. For this reason, the OFI developed a test method which directly enables testing of filters and filter media for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree or grass pollen, allergenic proteins bound to particulates, allergenic fungal spores (e.g. Cladosporium cladosporioides) or bacteria can be used to classify filters regarding their separation rates.
Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow the detection of even a single spore passing the filter. The laboratory-scale test procedure was furthermore validated against real-life conditions by upscaling to full air conditioning devices, which showed strong agreement in separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air conditioning filters from the car industry have been tested, showing significant differences in their separation rates.
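The separation-rate and classification idea can be illustrated with a minimal sketch. The particle counts and the class bands below are invented placeholders, not the OFI method's actual thresholds or measurement technique (which uses ELISA or microbiological counts).

```python
# Minimal sketch of a filter separation-rate calculation and a
# (hypothetical) classification by efficiency band.

def separation_efficiency(upstream_count, downstream_count):
    """Fraction of the challenge (particles or allergen mass) retained."""
    if upstream_count <= 0:
        raise ValueError("upstream challenge must be positive")
    return 1.0 - downstream_count / upstream_count

def classify(efficiency, bands=((0.99, "A"), (0.95, "B"), (0.90, "C"))):
    """Assign an invented filter class from the efficiency; bands are
    checked from strictest to loosest."""
    for threshold, label in bands:
        if efficiency >= threshold:
            return label
    return "unclassified"

# Example: 10,000 spores challenged upstream, 250 detected downstream.
eff = separation_efficiency(upstream_count=10_000, downstream_count=250)
grade = classify(eff)
```

The same arithmetic applies whether the downstream quantity is a spore count or an allergen concentration from an immunoassay.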

Keywords: airborne germs, allergens, classification of filters, fine dust

Procedia PDF Downloads 252
405 Long-Term Foam Roll Intervention Study of the Effects on Muscle Performance and Flexibility

Authors: T. Poppendieker

Abstract:

The foam roll is an innovative tool for self-myofascial release that is widely and increasingly used among athletes of various sports. Its application is suggested to improve muscle performance and flexibility. Studies over the past ten years have examined acute and, to some extent, long-term effects, but the results on muscle performance have been inconsistent. It has been suggested that regular use over a long period results in a different outcome that improves muscle performance. This study examines the long-term effects of regular foam rolling combined with a short plyometric routine versus the same plyometric routine alone on muscle performance and flexibility over a period of six weeks. Results of the counter movement jump (CMJ), squat jump (SJ), and isometric maximal force (IMF) of a 90° horizontal squat in a leg press will serve as parameters for muscle performance. The range of motion (ROM) in the sit-and-reach test will serve as the parameter for flexibility. Muscle activation will be measured throughout all tests. Twenty male and twenty female members of a Frankfurt area fitness center chain (7.11), with an average age of 25 years, will be recruited. Women and men will be randomly assigned to a foam roll (FR) group and a control group. All participants will practice their assigned routine three times a week for six weeks. Tests of CMJ, SJ, IMF, and ROM will be taken before and after the intervention period. The statistical software SPSS 22 will be used to analyze the CMJ, SJ, IMF, and ROM data, under consideration of muscle activation, by a 2 x 2 x 2 (time of measurement x gender x group) analysis of variance with repeated measures and a dependent t-test of pre- and post-test scores. The alpha level for statistical significance will be set at p ≤ 0.05. It is hypothesized that a significant gender-based difference in outcome will be observed in all four tests.
It is further hypothesized that both groups may show significant improvements in CMJ and SJ performance after the six-week period, with the FR group achieving the greater improvement in the two jump tests. Moreover, the FR group may increase IMF as well as flexibility, whereas the control group may not show comparable progress. The results of this study are relevant for understanding the long-term effects of regular foam roll application. The collected information may help to motivate the incorporation of foam rolling into training routines in order to improve athletic performance.
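The dependent (paired) t-test named in the analysis plan can be sketched as follows; the pre/post counter-movement-jump heights below are invented example data, not study measurements.

```python
import math

# Sketch of a dependent (paired) t-test on pre- vs post-test scores,
# as planned for the CMJ, SJ, IMF and ROM data. Data are invented.

def paired_t(pre, post):
    """Return (t statistic, degrees of freedom) for paired samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)  # mean difference over its standard error
    return t, n - 1

# Invented CMJ heights (cm) for 5 participants, pre and post intervention.
pre = [30.1, 28.4, 33.0, 29.5, 31.2]
post = [31.0, 29.1, 33.8, 30.4, 31.9]
t_stat, df = paired_t(pre, post)
```

In practice the t statistic would be compared against the t distribution with `df` degrees of freedom at the planned alpha of 0.05 (e.g. via SPSS or `scipy.stats.ttest_rel`).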

Keywords: counter movement jump, foam rolling, isometric maximal force, long term effects, self-myofascial release, squat jump

Procedia PDF Downloads 285
404 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, driven by the demand for higher accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. Even when lumped models are used for flood forecasting, a proper gauge network can significantly improve the results. The existing rainfall network in Johor must therefore be optimized and redesigned to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three semivariogram models (spherical, Gaussian and exponential) were used and their performances compared. Cross-validation was applied to compute the errors, and the exponential model proved to be the best semivariogram. The proposed method yielded a network of 64 rain gauges with the minimum estimated variance; 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored relocating all 84 existing stations to new locations to determine the optimal positions. In both cases, relocation reduced the estimated variance, confirming that station location plays an important role in determining the optimal network.
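The variance-reduction-with-simulated-annealing idea can be sketched as a toy subset-selection problem under an exponential semivariogram. Everything numeric below is invented (coordinates, sill, range, cooling schedule), and the "estimated variance" is a crude nearest-gauge proxy rather than the full geostatistical kriging variance used in the study.

```python
import math, random

# Toy sketch: pick k gauge sites from a candidate set so that a
# semivariogram-based estimation-variance proxy is minimised, using
# simulated annealing with a linear cooling schedule.

def exp_semivariogram(h, sill=1.0, rng=30.0):
    """Exponential model: gamma(h) = sill * (1 - exp(-h / range))."""
    return sill * (1.0 - math.exp(-h / rng))

def est_variance(sites, grid):
    """Crude proxy for areal estimation variance: mean semivariance
    from each grid point to its nearest gauge."""
    total = 0.0
    for gx, gy in grid:
        d = min(math.hypot(gx - sx, gy - sy) for sx, sy in sites)
        total += exp_semivariogram(d)
    return total / len(grid)

def anneal(candidates, k, grid, steps=500, t0=1.0, seed=1):
    r = random.Random(seed)
    current = r.sample(candidates, k)
    v = est_variance(current, grid)
    best, best_v = current[:], v
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9  # linear cooling
        trial = current[:]
        trial[r.randrange(k)] = r.choice(candidates)  # move one gauge
        tv = est_variance(trial, grid)
        # Accept improvements always; worse moves with Boltzmann probability.
        if tv < v or r.random() < math.exp((v - tv) / temp):
            current, v = trial, tv
            if v < best_v:
                best, best_v = current[:], v
    return best, best_v

# Invented 100 x 100 km study area, 10 km estimation grid, 40 candidates.
grid = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
rnd = random.Random(0)
candidates = [(rnd.uniform(0, 100), rnd.uniform(0, 100)) for _ in range(40)]
best_sites, best_var = anneal(candidates, k=10, grid=grid)
```

Swapping `exp_semivariogram` for a spherical or Gaussian model and comparing the resulting variances mirrors, in miniature, the model comparison the study performs via cross-validation.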

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 301
403 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

"Purpose": Global environmental issues are now being raised on a global scale. By predicting, with algorithms, sprawl phenomena that exceed the limits of nature, we can expect to keep our social life within those limits. A sustainable state of the planet consists in maintaining a balance between the capacity of nature and the demands of our social life. The amount of water on earth is finite, so sustainability depends strongly on water capacity. A certain amount of water is stored in forests and green space through planting, and water capacity can therefore be considered in relation to green area. CO2 is likewise absorbed by green plants. "Possible measurements and methods": The concept of the solar community has been introduced in technical papers at many international conferences, and is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by green vegetation. In addition, we calculated and compared the CO2 emissions of the solar community with the CO2 reduction achieved by greening. Based on these trial calculations, we simulate a sustainable state of the earth as an algorithmic result. We also consider composing groups of solar communities using digital technology as a control technology. "Conclusion": We regard the solar community as a prototype for finding sustainable conditions for the planet. The role of water is very important, as the supply capacity of water is limited, yet the circulation of our social life is not constructed according to the mechanisms of nature. The simulation trial calculation is explained using the total water supply volume as an example.
In this process, the algorithmic calculation relates the total capacity of the water supply to the population the area can sustain. Green vegetated land is very important for retaining enough water, and also for maintaining the CO2 balance; a trial calculation follows from the relationship between the CO2 emissions of the solar community and the CO2 reduction due to greening. To find this overall balance and the sustainable conditions, the algorithmic simulation takes into account green vegetation and the total water supply. The search for sustainable conditions is carried out by simulating an algorithmic model of the solar community as a prototype; in this one prototype example, the system is balanced. The activities of our social life must take place within the permissible limits of natural mechanisms. We also aim for a more ideal balance by using auxiliary digital control technology such as AI.
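The two balances described (water-limited population and the CO2 budget) can be sketched as back-of-the-envelope arithmetic. Every figure below is an invented placeholder, not study data.

```python
# Sketch of the algorithmic balance: population limited by a finite
# water supply, and net CO2 between community emissions and uptake by
# green area. All numbers are invented placeholders.

def supportable_population(total_supply_m3_per_year, per_capita_m3_per_year):
    """People a finite annual water supply can sustain."""
    return int(total_supply_m3_per_year // per_capita_m3_per_year)

def co2_balance(emissions_t_per_year, green_area_ha, uptake_t_per_ha_year):
    """Net CO2 (tonnes/year); a negative value means the green area
    absorbs more than the community emits."""
    return emissions_t_per_year - green_area_ha * uptake_t_per_ha_year

pop = supportable_population(1_500_000, 150)
net = co2_balance(emissions_t_per_year=12_000,
                  green_area_ha=1_000,
                  uptake_t_per_ha_year=10)
```

A sustainable configuration in this toy framing is one where the planned population stays at or below `pop` and `net` is at or below zero.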

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 59
402 The Impact of CSR Satisfaction on Employee Commitment

Authors: Silke Bustamante, Andrea Pelzeter, Andreas Deckmann, Rudi Ehlscheidt, Franziska Freudenberger

Abstract:

Many companies increasingly seek to enhance their attractiveness as an employer in order to retain their employees. At the same time, corporate responsibility for social and ecological issues is becoming a more important part of an attractive employer brand. It enables the company to match the values and expectations of its members, to signal fairness towards them, and to increase its brand potential for positive psychological identification on the employees' side. In the last decade, several empirical studies have focused on this relationship, confirming a positive effect of employees' CSR perception on their affective organizational commitment. The current paper takes a slightly different view by analyzing the impact of another factor on commitment: the employee's weighted satisfaction with the employer's CSR. It is assumed that commitment levels are largely a result of the fulfilment or disappointment of expectations. Hence, instead of merely asking how CSR perception affects commitment, a more complex independent variable is used: a weighted satisfaction construct that combines two factors. The individual level of commitment contingent on CSR is conceptualized as a function of two psychological processes: (1) the individual significance that an employee ascribes to specific employer attributes, and (2) the individual satisfaction based on the fulfilment of expectations that rely on preceding perceptions of employer attributes. The results presented are based on a quantitative survey among employees of the German service sector. Conceptually, a five-dimensional CSR construct (ecology, employees, marketplace, society and corporate governance) and a two-dimensional non-CSR construct (company and workplace) were applied to differentiate employer characteristics. (1) Respondents were asked to indicate the importance of different facets of CSR-related and non-CSR-related employer attributes.
By means of a conjoint analysis, the relative importance of each employer attribute was calculated from the data. (2) In addition, participants stated their level of satisfaction with specific employer attributes. Both indications were merged into individually weighted satisfaction indices across the seven dimensions of employer characteristics. The affective organizational commitment of employees (the dependent variable) was measured using the established 15-item Organizational Commitment Questionnaire (OCQ). The findings on the relationship between satisfaction and commitment will be presented. Furthermore, the question will be addressed of how important satisfaction with CSR is, relative to satisfaction with other attributes of the company, in the creation of commitment. Practical as well as scientific implications will be discussed, especially with reference to previous results that focused on CSR perception as a commitment driver.
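The weighted satisfaction construct (conjoint-derived importance weights applied to stated satisfaction) can be sketched as follows. The seven dimension names follow the abstract; all numeric weights and scores are invented.

```python
# Sketch of an individually weighted satisfaction index: conjoint
# importances (summing to 1) weight the stated satisfaction scores.
# All numbers are invented illustration data.

def weighted_satisfaction(importance, satisfaction):
    """Sum of importance-weighted satisfaction scores per dimension."""
    assert importance.keys() == satisfaction.keys()
    return sum(importance[k] * satisfaction[k] for k in importance)

importance = {   # relative importance from conjoint analysis (sums to 1)
    "ecology": 0.10, "employees": 0.20, "marketplace": 0.05,
    "society": 0.05, "governance": 0.10,
    "company": 0.25, "workplace": 0.25,
}
satisfaction = {  # stated satisfaction, e.g. on a 1-5 scale
    "ecology": 3.0, "employees": 4.0, "marketplace": 3.5,
    "society": 3.0, "governance": 3.5,
    "company": 4.5, "workplace": 4.0,
}
index = weighted_satisfaction(importance, satisfaction)
```

The index for each respondent would then serve as the independent variable against the OCQ commitment score.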

Keywords: corporate social responsibility, organizational commitment, employee attitudes/satisfaction, employee expectations, employer brand

Procedia PDF Downloads 265
401 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that directly or indirectly affect SPP values, of which fluid rheology and well hydraulics are among the most important. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), one of the input independent parameters, which is calculated in this paper using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and the hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already dimensionless (radians). Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to determine the relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells.
Although linear regression model 2 (with four coefficients) is more complex and contains an additional term compared to linear regression model 1 (with three coefficients), it did not add significant improvement except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate, owing to its relatively higher accuracy for most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results from all of them, the multivariable polynomial quadratic regression model giving the best and most accurate results. These models are useful not only for monitoring and predicting SPP values with accuracy but also for early control and checking of the integrity of the well hydraulics, and for taking corrective action should unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
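The Herschel-Bulkley constitutive law underlying the frictional-pressure input Pf can be sketched directly. The parameter values below are illustrative only, not field data, and the sketch stops at the shear-stress law rather than the full pressure-loss integration along drillstring and annulus.

```python
# Sketch of the Herschel-Bulkley rheological model used to build the
# frictional-pressure input: tau = tau_y + K * (shear rate)^n.
# Parameter values are illustrative, not field data.

def herschel_bulkley_stress(shear_rate, tau_y, k, n):
    """Shear stress for a Herschel-Bulkley fluid.
    tau_y: yield point, k: consistency index, n: flow (behaviour) index.
    n < 1 gives shear-thinning behaviour; tau_y = 0 reduces the model
    to a power-law fluid, and n = 1 to a Bingham plastic."""
    return tau_y + k * shear_rate ** n

# Example: shear rate 100 1/s, yield point 5, K = 0.8, n = 0.6
# (consistent units assumed throughout).
tau = herschel_bulkley_stress(shear_rate=100.0, tau_y=5.0, k=0.8, n=0.6)
```

In the paper's workflow, stresses like this feed the frictional pressure Pf, which then enters the regression in SPP alongside the dimensionless RPMd, TRQd and dogleg groups.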

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 82
400 Governing Ecosystem Services for Poverty Reduction: Empirical Evidences from Purulia District, India

Authors: Soma Sarkar

Abstract:

A number of authors have recently argued that there are strong links between ecosystem services and sustainable development, particularly development efforts that aim to reduce rural poverty. We see two distinct routes by which the science of ecosystem services can contribute to both nature conservation and sustainable development. First, a thorough accounting of ecosystem services, and a better understanding of how and at what rates ecosystems produce these services, can be used to motivate payment for nature conservation. At least part of the generated funds can be used to compensate people who give up economic opportunities to protect these services. For example, if the rural poor are asked to take actions that reduce farm productivity to protect and regulate water supply, those farmers could be compensated for the reduced productivity they experience. When the benefits of natural ecosystems are explicitly quantified, those benefits are more valued both by the people who directly interact with the ecosystems and by the governmental and other agencies that would have to pay for substitute sources of these services if the ecosystems became impaired. Appreciating the value of ecosystem services can motivate increased conservation investment to avoid having to pay for substitutes later. This approach could be characterized as a ‘‘government investment’’ approach because the payments will generally come from beneficiaries outside of the local area, and a governmental or other agency is typically responsible for collecting and redistributing the funds. Second, a focus on the conservation of ecosystem services could improve the success of projects that attempt to both conserve nature and improve the welfare of the rural poor by fostering markets for the goods and services that local people produce or extract from ecosystems.
These projects could be characterized as more ‘‘community based’’ because the goal is to foster the more organic, or grassroots, development of cottage industries, such as ecotourism or the production of non-timber forest products, that are enhanced by better protection of local ecosystems. Using this framework, we discuss the factors that may have contributed to failure or success for several projects in the district of Purulia, one of the most underdeveloped districts of India, inhabited largely by indigenous groups. A large majority of people in this district depend on environment-based incomes for their sustenance. The erosion of the natural resource base, owing to poor governance in the district, has reduced the household incomes of these people. The scale of our analysis is local, or project level. The plight of the poor has little to do with the aggregate production functions of ecosystem services; but for the rural poor, at the local level, the status of ecosystem services can make a big difference in their daily lives.

Keywords: ecosystem services, governance, rural poor, community based natural resource management

Procedia PDF Downloads 372
399 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region- Ethiopia

Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew

Abstract:

Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of under-five deaths in Sub-Saharan African countries, which underlines the need for consistent and timely information to support evidence-based decisions that save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remain critical and challenging, especially in remote areas. The Ararso district was selected based on the hypothesis that there is a difference in consistency between reported and recounted immunization data. Data quality depends on several factors, among them organizational, behavioral, technical and contextual ones. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district, using the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio, availability and timeliness of reports. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers and the district health office. A quality index (QI), availability and timeliness of reports were assessed. The accuracy ratios formulated were: the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+, and the first dose of measles-containing vaccines (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed when computing the accuracy ratio of health-post registrations to the reports found at health centers for almost all antigens verified.
The quality index (QI) of all facilities was also poor. Most of the verified immunization data accuracy ratios were relatively better than the quality index and the timeliness of reports. Attention should therefore be given to improving staff capacity, the timeliness of reports, and the quality of the monitoring system components, namely recording, reporting, archiving, data analysis and using information for decisions at all levels, especially in remote areas.
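The accuracy-ratio and timeliness indicators can be sketched as follows. The dose counts and report counts are invented examples, not the district's actual figures, and the ratio convention (recounted over reported) is one common DQS-style formulation.

```python
# Sketch of DQS-style verification indicators: the accuracy ratio of
# recounted (register/tally) doses to reported doses, and report
# timeliness. All figures are invented examples.

def accuracy_ratio(recounted, reported):
    """Ratio > 1 indicates under-reporting, < 1 over-reporting,
    and exactly 1 perfect agreement."""
    if reported == 0:
        raise ValueError("no reported doses to verify against")
    return recounted / reported

def timeliness(reports_on_time, reports_expected):
    """Share of expected monthly reports submitted on time."""
    return reports_on_time / reports_expected

penta1 = accuracy_ratio(recounted=118, reported=130)  # < 1: over-reporting
penta3 = accuracy_ratio(recounted=95, reported=90)    # > 1: under-reporting
on_time = timeliness(reports_on_time=7, reports_expected=12)
```

Running the same calculation per antigen (Penta1, Penta3, FI, TT2+, MCV1) and per facility reproduces the structure of the verification table the study describes.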

Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia

Procedia PDF Downloads 67
398 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus

Authors: Aisling Davies, Ciara Greene

Abstract:

Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task induces the most mind-wandering episodes, and to determine whether mind-wandering related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across the four versions of the task: Standard, Medium Slow, Slow, and Very Slow. Participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout.
Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial presentation rate, and the proportion of mind-wandering related errors increased, until the Very Slow condition, where the further slowing of trial presentation no longer had an effect. Distinct reaction time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal trial duration for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART, and highlights the importance of addressing other factors that contribute to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART was that, while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.
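The pre-error reaction-time pattern analysis can be sketched as follows; the response times, error positions and window length below are invented example data, not the study's actual trial structure.

```python
# Sketch of extracting reaction times (RTs, in ms) from the trials
# preceding each commission error, the kind of "pattern around an
# error" used to characterise attention states. Data are invented.

def pre_error_rts(rts, errors, window=4):
    """Collect RTs from the `window` trials before each error trial.
    `rts` may contain None for trials with no response (e.g. the
    withheld-response no-go trials)."""
    out = []
    for i, is_err in enumerate(errors):
        if is_err:
            out.extend(rt for rt in rts[max(0, i - window):i]
                       if rt is not None)
    return out

# Ten invented trials with commission errors at positions 4 and 9.
rts =    [420, 400, 390, 380, None, 430, 410, 405, 395, None]
errors = [False] * 4 + [True] + [False] * 4 + [True]
window_rts = pre_error_rts(rts, errors)
mean_pre_error = sum(window_rts) / len(window_rts)
```

Comparing such pre-error RT means (and their variability) between probe-classified high- and low-focus periods is one way to separate the four attention states behaviourally.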

Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)

Procedia PDF Downloads 182
397 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey

Authors: Rania S. Hussein

Abstract:

Background: This study aims to determine the evolution of service provision by Egyptian travel sites and how these services changed in their level of sophistication over the ten-year period of the study. To the author's best knowledge, this is the first longitudinal study covering such an extended time frame. Additionally, the study attempts to determine the popularity of these websites through the number of links to them. Links may be viewed as the online equivalent of a referral or word of mouth. Both popularity and the nature of the services provided are used to explain the traffic on these sites. In examining the nature of services provided, the website itself is viewed as an overall service offering composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents' websites to examine whether Egyptian travel websites are static or dynamic in the services they provide, and whether those services are simple or sophisticated. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered were first examined, using a framework adapted from Kotler's (1997) 'Five levels of a product'. The target group for this study consists of companies engaged in inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents' sites were collected over a two-week period in each round. Besides data on website features, data were also collected on the popularity of these websites through a software program called Alexa, which reported the traffic rank and number of links for each site.
Regression analysis was used to test the effect of links and services (the independent variables) on traffic (the dependent variable of this study). Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, illustrated by traffic, and the popularity of those sites, shown by the number of links, increased. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it chiefly for information provision. The results of this content analysis study tap an unexplored area and provide useful insights for marketers on how they can generate more traffic to their websites: by developing distinctive content on these sites and by focusing on the visibility of their sites, thus enhancing the popularity of, or links to, their sites.
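The regression described above can be sketched in a few lines. The abstract does not publish its data, so the figures below are invented purely to illustrate the model's form: traffic regressed on the number of links and a service-sophistication score.

```python
import numpy as np

# Hypothetical data (NOT the study's): six sites with their inbound link
# counts, a 1-5 service-sophistication score, and observed traffic.
links    = np.array([10, 25, 40, 55, 80, 120], dtype=float)
services = np.array([1, 2, 2, 3, 4, 5], dtype=float)
traffic  = np.array([120, 260, 400, 560, 830, 1210], dtype=float)

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(links), links, services])
beta, *_ = np.linalg.lstsq(X, traffic, rcond=None)

# Coefficient of determination, the usual fit summary reported alongside
# such regressions.
predicted = X @ beta
r2 = 1 - np.sum((traffic - predicted) ** 2) / np.sum((traffic - traffic.mean()) ** 2)
```

A positive, significant coefficient on `links` would correspond to the abstract's finding that popularity (links) drives traffic.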

Keywords: levels of a product, popularity, travel, website evolution

Procedia PDF Downloads 320
396 Implementation of an Accessible State-Wide Trauma Education Program

Authors: Christine Lassen, Elizabeth Leonard, Matthew Oliver

Abstract:

The management of trauma is often complex, and outcomes are dependent on clinical expertise, effective teamwork, and a supported trauma system. A statewide trauma education program should be accessible to all clinicians who manage trauma, but achieving this can be challenging due to diverse individual needs, trauma service needs, and geography. The NSW Institute of Trauma and Injury Management (ITIM) is a government-funded body responsible for coordinating and supporting the NSW Trauma System. The aim of this presentation is to describe how education initiatives have been implemented across the state. Simulation: In 2006, ITIM developed a Trauma Team Training Course aimed at educating clinicians in the technical and non-technical skills required to manage trauma. The course is now independently coordinated by trauma services across the state at major trauma centres as well as in regional and rural hospitals. ITIM is currently re-evaluating and updating the Trauma Team Training Course to allow for the development of new resources and simulation scenarios. Trauma Education Evenings: In 2013, ITIM supported major trauma services to develop trauma education evenings, which allowed the provision of free education to staff within the area health service and local area. The success of these local events led to their expansion to regional hospitals. A total of 75 trauma education evenings have been conducted within NSW, with over 10,000 attendees. Web-Based Resources: Recently, ITIM commenced free live streaming of the trauma education evenings, which have now had over 3,000 live views. The Trauma App, developed in 2015, provides trauma clinicians with a centralised portal for trauma information; it works on smartphones and tablets and integrates with the ITIM website. This supports pre-hospital and bedside clinical decisions and allows trauma care to be more standardised, evidence-based, timely, and appropriate.
Online e-learning modules have been developed to assist clinicians, reduce unwarranted clinical variation, and provide up-to-date, evidence-based education. The modules incorporate clinically focused education content with summative and formative assessments. Conclusion: Since 2005, ITIM has helped to facilitate the development of trauma education programs for doctors, nurses, and pre-hospital and allied health clinicians. ITIM has been actively involved in more than 100 specialised trauma education programs, seminars, and clinical workshops, attended by over 12,000 staff. The provision of state-wide trauma education is a challenging task requiring collaboration amongst numerous agencies working towards a common goal: to provide easily accessible trauma education.

Keywords: education, simulation, team-training, trauma

Procedia PDF Downloads 184
395 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components through the formation of a “Faraday cage.” The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive. This degradation depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, the systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time (√t) dependence.
It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where the nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
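The consequence of a √t degradation law for the acceleration factor can be illustrated with a short sketch. For power-law degradation D(t) = k·tⁿ and failure at a fixed degradation threshold, the time-to-failure acceleration factor becomes the rate-constant ratio raised to 1/n, so a √t mechanism (n = 0.5) squares the factor a linear assumption would give. The activation energy, temperatures, and the use of a temperature-only Arrhenius factor (rather than the Hallberg-Peck or Eyring models named in the abstract) are assumptions chosen purely for illustration.

```python
import math

def arrhenius_rate_ratio(ea_ev, t_test_c, t_field_c):
    """Ratio of degradation-rate constants k_test / k_field (Arrhenius,
    temperature only; Ea in eV, temperatures in Celsius)."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    t_test = t_test_c + 273.15
    t_field = t_field_c + 273.15
    return math.exp(ea_ev / k_b * (1.0 / t_field - 1.0 / t_test))

def time_acceleration_factor(rate_ratio, n):
    """For D(t) = k * t**n with failure at fixed D_crit,
    t_fail = (D_crit / k)**(1/n), so the time AF is rate_ratio**(1/n)."""
    return rate_ratio ** (1.0 / n)

# Assumed illustrative values: Ea = 0.7 eV, 85 C test vs 40 C field.
rate_af = arrhenius_rate_ratio(0.7, 85.0, 40.0)
af_linear = time_acceleration_factor(rate_af, 1.0)  # linear-degradation assumption
af_sqrt   = time_acceleration_factor(rate_af, 0.5)  # sqrt-t degradation
```

Because `af_sqrt = af_linear**2`, projecting field life with the linear assumption can badly misestimate lifetime when the true mechanism is diffusion-limited, which is the abstract's central point.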

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 133
394 Effect of 8-OH-DPAT on the Behavioral Indicators of Stress and on the Number of Astrocytes after Exposure to Chronic Stress

Authors: Ivette Gonzalez-Rivera, Diana B. Paz-Trejo, Oscar Galicia-Castillo, David N. Velazquez-Martinez, Hugo Sanchez-Castillo

Abstract:

Prolonged exposure to stress can cause disorders related to dysfunction in the prefrontal cortex, such as generalized anxiety and depression. These disorders involve alterations in neurotransmitter systems; the serotonergic system, a target of the drugs commonly used to treat these disorders, is one of them. Recent studies suggest that 5-HT1A receptors play a pivotal role in the regulation of the serotonergic system and in stress responses. Likewise, there is increasing evidence that astrocytes are involved in the pathophysiology of stress. The aim of this study was to examine the effects of 8-OH-DPAT, a selective agonist of 5-HT1A receptors, on the behavioral signs of anxiety and anhedonia as well as on the number of astrocytes in the medial prefrontal cortex (mPFC) after exposure to chronic stress. Fifty male Wistar rats weighing 250-350 g were used, housed in standard laboratory conditions and treated in accordance with the ethical standards for the use and care of laboratory animals. A chronic unpredictable stress protocol was applied for 10 consecutive days, during which stressors such as movement restriction, water deprivation, and wet bedding, among others, were presented. Forty rats were subjected to the stress protocol and then divided into 4 groups of 10 rats each, which were administered 8-OH-DPAT (Tocris, USA) intraperitoneally, with saline as vehicle, at doses of 0.0, 0.3, 1.0, and 2.0 mg/kg, respectively. Another 10 rats were subjected to neither the stress protocol nor the drug. Subsequently, all rats were assessed in an open field test, a forced swimming test, a sucrose consumption test, and a zero maze test. At the end of this procedure, the animals were sacrificed, the brain was removed, and the tissue of the mPFC (Bregma: 4.20, 3.70, 2.70, 2.20) was processed with immunofluorescence staining for astrocytes (anti-GFAP antibody, an astrocyte marker; ABCAM).
Statistically significant differences were found in the behavioral tests across all groups, showing that the stressed group with saline administration had more indicators of anxiety and anhedonia than the control group and the groups with administration of 8-OH-DPAT. Also, a dose-dependent effect of 8-OH-DPAT was found on the number of astrocytes in the mPFC. The results show that 8-OH-DPAT can modulate the effect of stress at both the behavioral and anatomical levels. They also indicate that 5-HT1A receptors and astrocytes play an important role in the stress response and may modulate the therapeutic effect of serotonergic drugs, so they should be explored as a fundamental part of the treatment of the symptoms of stress and of the understanding of the mechanisms of stress responses.

Keywords: anxiety, prefrontal cortex, serotonergic system, stress

Procedia PDF Downloads 323
393 Analyzing the Impact of Bariatric Surgery in Obesity Associated Chronic Kidney Disease: A 2-Year Observational Study

Authors: Daniela Magalhaes, Jorge Pedro, Pedro Souteiro, Joao S. Neves, Sofia Castro-Oliveira, Vanessa Guerreiro, Rita Bettencourt- Silva, Maria M. Costa, Ana Varela, Joana Queiros, Paula Freitas, Davide Carvalho

Abstract:

Introduction: Obesity is an independent risk factor for renal dysfunction. Our aims were: (1) to evaluate the impact of bariatric surgery (BS) on renal function; (2) to clarify the factors determining the postoperative evolution of the glomerular filtration rate (GFR); and (3) to assess the occurrence of oxalate-mediated renal complications. Methods: We investigated a cohort of 1448 obese patients who underwent bariatric surgery. Those with a basal GFR (GFR0) < 30 mL/min or without information about the GFR 2 years post-surgery (GFR2) were excluded. Results: We included 725 patients, of whom 647 (89.2%) were women, with a median age of 41 (IQR 34-51) years, a median weight of 112.4 (IQR 103.0-125.0) kg, and a median BMI of 43.4 (IQR 40.6-46.9) kg/m2. Of these, 459 (63.3%) underwent gastric bypass (RYGB), 144 (19.9%) received an adjustable gastric band (AGB), and 122 (16.8%) underwent vertical gastrectomy (VG). At 2 years post-surgery, excess weight loss (EWL) was 60.1 (IQR 43.7-72.4)%. There was a significant improvement in metabolic and inflammatory status, as well as a significant decrease in the proportion of patients with diabetes, arterial hypertension, and dyslipidemia (p < 0.0001). At baseline, 38 (5.2%) of subjects had hyperfiltration with a GFR0 ≥ 125 mL/min/1.73m2, 492 (67.9%) had a GFR0 of 90-124 mL/min/1.73m2, 178 (24.6%) had a GFR0 of 60-89 mL/min/1.73m2, and 17 (2.3%) had a GFR0 < 60 mL/min/1.73m2. GFR decreased in 63.2% of patients with hyperfiltration (ΔGFR = -2.5±7.6), and increased in 96.6% (ΔGFR = 22.2±12.0) and 82.4% (ΔGFR = 24.3±30.0) of the subjects with GFR0 of 60-89 and < 60 mL/min/1.73m2, respectively (p < 0.0001). This trend was maintained when adjustment was made for the type of surgery performed. Of 321 patients, 10 (3.3%) had a urinary albumin excretion (UAE) > 300 mg/dL (A3), 44 (14.6%) had a UAE of 30-300 mg/dL (A2), and 247 (82.1%) had a UAE < 30 mg/dL (A1). Albuminuria decreased after surgery, and at the 2-year follow-up only 1 (0.3%) patient had A3, 17 (5.6%) had A2, and 283 (94%) had A1 (p < 0.0001).
In multivariate analysis, the variables independently associated with ΔGFR were BMI (positively) and fasting plasma glucose (negatively). During the 2-year follow-up, only 57 of the 725 patients had transient urinary excretion of calcium oxalate crystals. None had records of oxalate-mediated renal complications at our center. Conclusions: The evolution of GFR after BS seems to depend on the initial renal function, as it decreases in subjects with hyperfiltration but tends to increase in those with renal dysfunction. Our results suggest that BS is associated with an improvement in renal outcomes, without a significant increase in renal complications. Thus, apart from the clear benefits to metabolic and inflammatory status, obese adults with non-dialysis-dependent CKD should perhaps be referred for bariatric surgery evaluation.
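The GFR and albuminuria bands used throughout the abstract can be encoded as a small helper for readers following the group definitions. The cut-offs below are copied from the abstract; this is a sketch, not the authors' analysis code.

```python
def gfr_category(gfr):
    """Classify GFR (mL/min/1.73 m2) into the bands used in the abstract."""
    if gfr >= 125:
        return "hyperfiltration"
    if gfr >= 90:
        return "90-124"
    if gfr >= 60:
        return "60-89"
    return "<60"

def albuminuria_category(uae):
    """A1/A2/A3 from urinary albumin excretion, in the abstract's units."""
    if uae < 30:
        return "A1"
    if uae <= 300:
        return "A2"
    return "A3"
```

With these definitions, the abstract's baseline distribution (5.2% hyperfiltration, 67.9% at 90-124, 24.6% at 60-89, 2.3% below 60) follows directly from applying `gfr_category` to each patient's GFR0.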

Keywords: albuminuria, bariatric surgery, glomerular filtration rate, renal function

Procedia PDF Downloads 358
392 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable-population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using the cases recorded in Nigeria in 2022 for epidemiological weeks 1-52. Transitioning from qualitative to quantitative analysis, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions’ economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation.
The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
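The cost-effectiveness ratios named above follow standard definitions: ACER divides a strategy's total cost by its infections averted, while ICER compares each strategy to the next-less-effective one. The sketch below uses invented strategy costs and effects purely to show how the two ratios are computed; it is not the paper's data or code.

```python
def acer(total_cost, infections_averted):
    """Average cost-effectiveness ratio: cost per infection averted."""
    return total_cost / infections_averted

def icer_table(strategies):
    """Incremental cost-effectiveness ratios between strategies ranked by
    increasing effectiveness. `strategies` is a list of
    (name, total_cost, infections_averted) tuples."""
    ranked = sorted(strategies, key=lambda s: s[2])
    out = []
    for (_, c0, e0), (name1, c1, e1) in zip(ranked, ranked[1:]):
        out.append((name1, (c1 - c0) / (e1 - e0)))
    return out

# Hypothetical strategies A, B, C (cost in currency units, effect in
# infections averted) for illustration only.
strategies = [("A", 12000.0, 300.0), ("B", 20000.0, 520.0), ("C", 26000.0, 560.0)]
incrementals = icer_table(strategies)
```

In practice, a strategy whose ICER exceeds the decision-maker's willingness-to-pay threshold (here, C at 150 per additional infection averted versus B's roughly 36) would be judged not cost-effective relative to the cheaper alternative.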

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 85
391 Differences in Patient Satisfaction Observed between Female Japanese Breast Cancer Patients Who Receive Breast-Conserving Surgery or Total Mastectomy

Authors: Keiko Yamauchi, Motoyuki Nakao, Yoko Ishihara

Abstract:

The increase in the number of women with breast cancer in Japan has required hospitals to provide a higher quality of medicine so that patients are satisfied with the treatment they receive. However, patients’ satisfaction following breast cancer treatment has not been sufficiently studied. Hence, we investigated the factors influencing patient satisfaction following breast cancer treatment among Japanese women who underwent either breast-conserving surgery (BCS) (n = 380) or total mastectomy (TM) (n = 247). In March 2016, we conducted a cross-sectional internet survey of Japanese women with breast cancer in Japan. We assessed the following factors: socioeconomic status, cancer-related information, the role played in medical decision-making, the degree of satisfaction with the treatments received, and regret arising from the medical decision-making process. We performed logistic regression analyses with the following dependent variables: extreme satisfaction with the treatments received, and regret regarding the medical decision-making process. For both types of surgery, the odds ratio (OR) of being extremely satisfied with the cancer treatment was significantly higher among patients who did not have any regrets compared to patients who did. The OR also tended to be higher among patients who played the role they wanted in the medical decision-making process, compared with patients who did not. In the BCS group, the OR of being extremely satisfied with the treatment was higher if, at diagnosis, the patient’s youngest child was older than 19 years, compared with patients with no children. The OR was also higher if the patient considered the stage and characteristics of their cancer significant. The OR of being extremely satisfied with the treatments was lower among patients who were not employed on a full-time basis and among patients who considered second medical opinions and medical expenses to be significant.
These associations were not observed in the TM group. The OR of having regrets regarding the medical decision-making process was higher among patients who could not play the role they wanted in the decision-making process, and was also higher in patients who were employed on either a part-time or contractual basis. For both types of surgery, the OR was higher among patients who considered a second medical opinion to be significant. Regardless of surgical type, regret regarding the medical decision-making process decreased treatment satisfaction. Patients who received breast-conserving surgery were more likely to have regrets concerning the medical decision-making process if they could not play a role in the process as they preferred. In addition, factors associated with satisfaction with treatment in the BCS group but not the TM group included second medical opinions, medical expenses, employment status, and the age of the youngest child at diagnosis.
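The odds ratios reported above come from logistic regression. A minimal sketch of the two standard ways an OR is obtained follows; the counts and coefficient are hypothetical, not the study's data.

```python
import math

def odds_ratio_2x2(a, b, c, d):
    """OR from a 2x2 table: a = exposed & satisfied, b = exposed & not,
    c = unexposed & satisfied, d = unexposed & not."""
    return (a * d) / (b * c)

def or_from_logit(beta):
    """In logistic regression, exp(coefficient) is the odds ratio for a
    one-unit change in the predictor."""
    return math.exp(beta)

# Hypothetical counts: a no-regret group with 80 satisfied / 20 not, and a
# regret group with 40 satisfied / 30 not.
or_example = odds_ratio_2x2(80, 20, 40, 30)  # (80*30)/(20*40) = 3.0
```

An OR of 3.0 here would read as: patients without regret have three times the odds of being extremely satisfied, mirroring the direction of the association the abstract reports.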

Keywords: medical decision making, breast-conserving surgery, total mastectomy, Japanese

Procedia PDF Downloads 146
390 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar-absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth-angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as with the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented, and their validation is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, underlining the importance of accurate dielectric material properties for validation purposes.
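The sensitivity of RCS to permittivity tolerance can be illustrated with a deliberately crude model: a physical-optics-style specular flat-plate RCS scaled by the power reflection coefficient of a lossless air/dielectric half-space. The plate size, frequency, permittivity, and the neglect of slab thickness and loss are all assumptions for illustration; the full-wave solvers used in the paper are far more complete.

```python
import math

def reflection_coeff(eps_r):
    """Normal-incidence reflection coefficient at an air/dielectric
    half-space boundary (lossless): (1 - sqrt(eps_r)) / (1 + sqrt(eps_r))."""
    n = math.sqrt(eps_r)
    return (1.0 - n) / (1.0 + n)

def plate_rcs_estimate(area_m2, freq_hz, eps_r):
    """Rough specular RCS of a large flat plate: the PEC-plate formula
    4*pi*A^2/lambda^2 scaled by |gamma|^2 (an assumed simplification)."""
    lam = 3.0e8 / freq_hz
    return 4.0 * math.pi * area_m2 ** 2 / lam ** 2 * reflection_coeff(eps_r) ** 2

# Sensitivity to a +5% tolerance on eps_r for a 0.1 m x 0.1 m plate at 10 GHz.
nominal = plate_rcs_estimate(0.01, 10e9, 3.0)
high    = plate_rcs_estimate(0.01, 10e9, 3.0 * 1.05)
```

Even this toy model shows the abstract's point qualitatively: a few percent of permittivity tolerance shifts the predicted specular return, so material characterization errors propagate directly into the validation comparison.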

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 236
389 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper will examine the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks. The paper will draw conclusions on the possible strategic impact on battlefield outcomes in modern armed conflicts of the widespread use of dual-use UAVs. This article will contribute to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of them dual-use and widely available and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. Thus, they function as force multipliers, enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders; for example, because they can be operated without GPS navigation, they are less vulnerable to, and less dependent on, satellite communications. They can and have been used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves.
So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs. Studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that dual-use commercial UAV proliferation in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper will offer a unique analysis of small UAVs from the point of view of both opportunities and risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 101
388 Impact of Microwave and Air Velocity on Drying Kinetics and Rehydration of Potato Slices

Authors: Caiyun Liu, A. Hernandez-Manas, N. Grimi, E. Vorobiev

Abstract:

Drying is one of the most widely used methods for food preservation; it extends the shelf life of food and makes its transportation, storage, and packaging easier and more economical. The most common method is hot-air drying. However, its disadvantages are low energy efficiency and long drying times. Because of the high temperature during hot-air drying, undesirable changes in pigments, vitamins, and flavoring agents occur, which degrade the quality parameters of the product. The drying process can also cause shrinkage, case hardening, dark color, browning, loss of nutrients, and other defects. Recently, new processes were developed to avoid these problems. For example, the application of a pulsed electric field provokes cell membrane permeabilisation, which increases the drying kinetics and the moisture diffusion coefficient. Microwave drying technology also has several advantages over conventional hot-air drying, such as higher drying rates and thermal efficiency, shorter drying time, and significantly improved product quality and nutritional value. The rehydration kinetics of a dried product are a very important characteristic. Current research has indicated that the rehydration ratio and the coefficient of rehydration are dependent on the processing conditions of drying. The present study compares the efficiency of two processes (1: room-temperature air drying; 2: microwave/air drying) in terms of drying rate, product quality, and rehydration ratio. In this work, potato slices (≈2.2 g) with a thickness of 2 mm and a diameter of 33 mm were placed in the microwave chamber and dried. The drying kinetics and drying rates of the different methods were determined. The process parameters studied were inlet air velocity (1 m/s, 1.5 m/s, 2 m/s) and microwave power (50 W, 100 W, 200 W, and 250 W). The evolution of temperature during microwave drying was measured.
The drying power had a strong effect on the drying rate, and microwave/air drying resulted in a 93% decrease in the drying time when the air velocity was 2 m/s and the microwave power was 250 W. Based on the Lewis model, drying rate constants (kDR) were determined. An increase from kDR = 0.0002 s-1 for air drying at 2 m/s to kDR = 0.0032 s-1 for microwave/air drying (at 2 m/s and 250 W) was observed. The effective moisture diffusivity was calculated using Fick's law. The results show an increase in effective moisture diffusivity from 7.52×10-11 to 2.64×10-9 m2/s for air drying at 2 m/s and microwave/air drying (at 2 m/s and 250 W), respectively. The temperature of the potato slices increased with higher microwave power but decreased with higher air velocity. The rehydration ratio, defined as the weight of the sample after rehydration divided by the weight of the dried sample, was determined at different water temperatures (25°C, 50°C, 75°C). The rehydration ratio increased with the water temperature and reached its maximum at the following conditions: 200 W microwave power, 2 m/s air velocity, and 75°C water temperature. The present study shows the interest of microwave drying for food preservation.
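The Lewis-model fit and the Fick's-law diffusivity described above can be sketched as follows. The Lewis (Newton) model MR = exp(-k t) is fitted by log-linear least squares, and the diffusivity uses the first term of the Fickian slab solution with half-thickness L; the exact characteristic length and number of series terms the authors used are not stated, so this is an assumed formulation for illustration.

```python
import math

def lewis_k(times_s, moisture_ratio):
    """Fit the Lewis model MR = exp(-k t) by least squares on ln(MR);
    returns the drying rate constant k in s^-1."""
    n = len(times_s)
    xs, ys = times_s, [math.log(m) for m in moisture_ratio]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope

def effective_diffusivity(k, half_thickness_m):
    """First-term Fick slab solution MR ~ (8/pi^2) exp(-pi^2 Deff t / (4 L^2)):
    the exponential slope equals pi^2 * Deff / (4 L^2), so Deff follows from k."""
    return k * 4.0 * half_thickness_m ** 2 / math.pi ** 2

# Synthetic moisture-ratio data generated with k = 0.0032 s^-1 (the paper's
# microwave/air value) to check that the fit recovers the constant.
times = [60.0, 120.0, 180.0, 240.0]
mrs = [math.exp(-0.0032 * t) for t in times]
k_fit = lewis_k(times, mrs)
```

For a 2 mm slice dried from both faces, L = 0.001 m would be the half-thickness entering `effective_diffusivity`.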

Keywords: drying, microwave, potato, rehydration

Procedia PDF Downloads 267
387 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs

Authors: Szymon Kuczynski

Abstract:

Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has gradually become more focused on new unconventional targets, particularly the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference in the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP, Gas Initially in Place). This value depends on sorption capacity, gas saturation, and the mutual interactions between gas, water, and rock. Determining the reservoir type in the initial exploration phase brings essential knowledge, which has an impact on decisions related to production. Studying the impact of porosity on the phase-envelope shift eliminates errors and improves production profitability. Confinement affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is subject to basic considerations for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the contained fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), the effect of the wall interaction with the fluid particles becomes significant; this occurs in shale formations. The nanopore size is similar to the diameter of the fluid particles, and the region where particles flow without interacting with the pore wall is almost equal in extent to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties.
Therefore, the critical parameters (pressure and temperature) and the flow characteristics of hydrocarbons at the nanoscale are strongly influenced by the interaction of fluid particles with the pore wall. It can be concluded that the size of individual pores is crucial at the nanoscale, because the above-described effect becomes possible there. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is being conducted to explain the mechanisms of fluid flow in nanopores and of gas extraction from porous media by desorption.
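Capillary condensation in a nanopore, one of the keywords above, is commonly estimated with the Kelvin equation, which predicts the relative pressure at which a vapor condenses in a cylindrical pore of a given radius. The sketch below uses assumed, order-of-magnitude fluid properties rather than values from the study.

```python
import math

def kelvin_relative_pressure(pore_radius_m, surface_tension, molar_volume, temp_k):
    """Kelvin equation for a cylindrical pore:
    ln(p/p0) = -2 * gamma * Vm / (r * R * T), returning p/p0.
    surface_tension in N/m, molar_volume in m^3/mol."""
    R = 8.314  # gas constant, J/(mol K)
    return math.exp(-2.0 * surface_tension * molar_volume /
                    (pore_radius_m * R * temp_k))

# Assumed illustrative fluid properties (not from the study):
# gamma = 0.01 N/m, Vm = 1e-4 m^3/mol, T = 300 K.
p_5nm = kelvin_relative_pressure(5e-9, 0.01, 1e-4, 300.0)
p_2nm = kelvin_relative_pressure(2e-9, 0.01, 1e-9 * 1e5, 300.0) if False else \
        kelvin_relative_pressure(2e-9, 0.01, 1e-4, 300.0)
```

The smaller the pore, the lower the relative pressure at which condensation occurs, which is the qualitative effect that shifts the phase envelope of confined reservoir fluids relative to bulk behavior.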

Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas

Procedia PDF Downloads 336
386 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement of experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior based solely on process conditions such as feed streams and inlet temperatures and pressures.
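The hybrid Monte Carlo idea of sampling explicit branched chains can be illustrated with a toy single-chain growth loop; the per-monomer termination and branching probabilities below are invented placeholders, whereas in the actual approach they would come from the kinetic model under the given reaction conditions.

```python
import random

# Toy Monte Carlo sampling of branched chains (illustrative sketch; the
# per-monomer event probabilities are invented, not kinetic-model output).
random.seed(1)

def grow_chain(p_term=1e-3, p_scb=8e-3, p_lcb=1e-3):
    length = scb = lcb = 0
    while True:
        length += 1
        r = random.random()
        if r < p_term:
            return length, scb, lcb          # chain terminates
        elif r < p_term + p_scb:
            scb += 1    # backbiting event -> short-chain branch
        elif r < p_term + p_scb + p_lcb:
            lcb += 1    # transfer-to-polymer event -> long-chain branch

chains = [grow_chain() for _ in range(5000)]
mean_len = sum(c[0] for c in chains) / len(chains)
print(f"number-average chain length ~ {mean_len:.0f}")
```

Sampling a large ensemble of such chains yields the full microstructure distribution (chain length, branching frequencies) rather than only the averages that the kinetic model provides.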

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 123
385 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. They face numerous challenges, including product wastage, low margins, long breakeven periods and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modeling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers were identified from the literature and through semi-structured interviews conducted among managers with relevant experience in e-grocery supply chains. The experts were contacted through professional/social networks by adopting a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers was developed, and MICMAC analysis identified their driving and dependence powers. Based on driving-dependence power, the enablers were categorized into four clusters, namely independent, autonomous, dependent and linkage. The analysis found that information technology (IT) and manpower training act as key enablers towards reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness and customized applications for different stakeholders, depicting these as critical in online food/grocery supply chains. Considering the perishable nature of the products handled, the impact of the enablers on product quality was also identified. 
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers for fresh food in e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are evolving.
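The driver-dependence classification above can be sketched in code: given a final reachability matrix, driving power is a row sum, dependence power is a column sum, and the four MICMAC clusters follow from comparing each against a threshold (here half the number of enablers, a common convention). The enabler list and matrix below are invented for illustration, not the paper's data.

```python
# MICMAC classification from a final reachability matrix (illustrative
# sketch; the enablers and matrix are hypothetical, not the study's data).
def micmac(enablers, R):
    n = len(enablers)
    mid = n / 2  # conventional threshold: half the number of enablers
    out = {}
    for i, name in enumerate(enablers):
        driving = sum(R[i])                          # row sum
        dependence = sum(R[j][i] for j in range(n))  # column sum
        if driving > mid and dependence > mid:
            cluster = "linkage"
        elif driving > mid:
            cluster = "independent (driver)"
        elif dependence > mid:
            cluster = "dependent"
        else:
            cluster = "autonomous"
        out[name] = (driving, dependence, cluster)
    return out

enablers = ["IT", "training", "branding", "order processing"]
R = [[1, 1, 1, 1],   # 1 = row enabler reaches (influences) column enabler
     [1, 1, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
for name, (drv, dep, cl) in micmac(enablers, R).items():
    print(f"{name}: driving={drv}, dependence={dep} -> {cl}")
```

In this toy matrix, IT and training emerge as drivers while branding and order processing are dependent, mirroring the kind of output the paper reports for its full enabler set.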

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 202
384 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures

Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang

Abstract:

Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods, such as laser beam melting. However, the presence of geometric defects in the lattice structures is inevitable due to the manufacturing process, and these defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of the lattice structures. To do that, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured by a laser beam melting machine using a Taguchi design of experiment. Each structure is placed on the substrate with a specific position and orientation with respect to the roller direction of the deposited metal powder; the position and orientation are considered as the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model in order to perform simulations. The mechanical strengths are then defined by the homogeneous response in terms of Young's modulus and yield strength, and their distribution is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have little influence on the mechanical strengths of the BCCZ structure. 
The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also vary with their position/orientation on the manufacturing substrate. For different positions/orientations on the substrate, the mechanical responses are highly dispersed as well, showing that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the uncertainty propagation of the geometric defects on the mechanical strength of the BCC lattice structure made of Scalmalloy. To do that, we observe the distribution of mechanical strengths of the lattice according to the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build lattice structure samples, which are used in simulations to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths among structures with the same manufacturing parameters is not negligible.
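The uncertainty-propagation step can be sketched as a simple Monte Carlo loop: beam diameters are drawn from the fitted density law and pushed through a mechanical model. Here a toy bending-dominated surrogate (stiffness scaling with the fourth power of beam diameter) stands in for the paper's simulations, and the defect law parameters are invented.

```python
import random
import statistics

# Monte Carlo propagation of geometric defects (illustrative sketch; the
# defect law and the surrogate stiffness model are hypothetical stand-ins
# for the study's lattice simulations).
random.seed(0)

def sample_lattice_modulus(n_beams=8, d_nom=1.0, sigma=0.05):
    # Draw each inclined-beam diameter from the fitted density law
    diameters = [random.gauss(d_nom, sigma) for _ in range(n_beams)]
    # Toy bending-dominated surrogate: stiffness ~ mean(d^4), normalized
    return statistics.mean(d**4 for d in diameters)

moduli = [sample_lattice_modulus() for _ in range(2000)]
print(f"mean={statistics.mean(moduli):.3f}, "
      f"cov={statistics.stdev(moduli) / statistics.mean(moduli):.3%}")
```

The spread of the resulting `moduli` sample is the propagated uncertainty; comparing this dispersion across defect laws fitted for different positions/orientations reproduces the kind of comparison the study makes.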

Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation

Procedia PDF Downloads 122
383 Impact of UV on Toxicity of Zn²⁺ and ZnO Nanoparticles to Lemna minor

Authors: Gabriela Kalcikova, Gregor Marolt, Anita Jemec Kokalj, Andreja Zgajnar Gotvajn

Abstract:

Since the 90s, nanotechnology has been one of the fastest-growing fields of science. Nanomaterials are increasingly becoming part of many products and technologies, and metal oxide nanoparticles are among the most used nanomaterials. Zinc oxide nanoparticles (nZnO) are widely used due to their versatile properties; they appear in products including plastics, paints, food, batteries, solar cells and cosmetics. ZnO is also a very effective photocatalyst used for water treatment. Such expanding application of nZnO increases their possible occurrence in the environment. In aquatic ecosystems, nZnO interact with natural environmental factors such as UV radiation, and thus it is essential to evaluate possible interactions between them. In this context, the aim of our study was to evaluate the combined ecotoxicity of nZnO and Zn²⁺ on the duckweed Lemna minor in the presence or absence of UV. Inhibition of vegetative growth of Lemna minor was monitored over a period of 7 days in multi-well plates, after which the specific growth rate was determined. The ZnO nanoparticles used had a primary size of 13.6 ± 1.7 nm. The test was conducted with nominal nZnO and Zn²⁺ (in the form of ZnCl₂) concentrations of 1, 10 and 100 mg/L. The experiment was repeated in the presence of UV of natural intensity (8 h UV, 10 W/m² UVA, 0.5 W/m² UVB). The concentration of Zn during the test was determined by ICP-MS. In the regular experiment (absence of UV), the specific growth rate was slightly increased by low concentrations of nZnO and Zn²⁺ in comparison to the control. However, 10 and 100 mg/L of Zn²⁺ resulted in 45% and 68% inhibition of the specific growth rate, respectively. In the case of nZnO, both concentrations (10 and 100 mg/L) resulted in a similar ~30% inhibition, and the response was not dose-dependent. A lack of dose-response relationship is often observed for nanoparticles; a possible explanation is that physical impacts prevail over chemical ones. 
In the presence of UV, the toxicity of Zn²⁺ increased, and 100 mg/L of Zn²⁺ caused total inhibition of the specific growth rate (100%). On the other hand, 100 mg/L of nZnO resulted in lower inhibition (19%) than in the experiment without UV (30%). It is thus expected that the tested nZnO is weakly photoactive but has good UV-absorbing and/or reflective properties and thus protects duckweed against UV impacts. The measured concentration of Zn in the test suspension decreased by only about 4% after 168 h in the case of ZnCl₂, whereas the concentration of Zn in the nZnO test decreased by 80%. It is expected that nZnO partially dissolved in the medium while, at the same time, agglomeration and sedimentation of particles took place, so the concentration of Zn at the water level decreased. The results of our study indicate that UV of natural intensity does not increase the toxicity of nZnO; rather, the particles slightly protect the plant against negative UV effects. When the Zn²⁺ and nZnO results are compared, it seems that dissolved Zn plays a central role in nZnO toxicity.
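The growth-inhibition endpoint can be sketched as follows: the specific growth rate is the slope of the natural log of frond number over time, and inhibition is its relative reduction versus the control. The frond counts below are invented, not the study's data.

```python
import math

# Specific growth rate and percent inhibition for a duckweed growth test
# (illustrative sketch; the frond counts are invented, not study data).
def specific_growth_rate(n0, n_t, days):
    # mu = (ln N_t - ln N_0) / t, the standard endpoint in Lemna tests
    return (math.log(n_t) - math.log(n0)) / days

def inhibition_pct(mu_control, mu_treated):
    return 100.0 * (1.0 - mu_treated / mu_control)

mu_c = specific_growth_rate(12, 96, 7)   # control: 12 -> 96 fronds in 7 d
mu_t = specific_growth_rate(12, 24, 7)   # treated culture: 12 -> 24 fronds
print(f"inhibition = {inhibition_pct(mu_c, mu_t):.0f}%")
```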

Keywords: duckweed, environmental factors, nanoparticles, toxicity

Procedia PDF Downloads 331
382 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance of embryos. With morphocinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphocinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to derive a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups with a maximal difference in blastocyst formation rate. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. 
The synthetize value corresponds to the value of the model calculated at a time value of 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as blastocyst formation rate below the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient ‘a’, the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); it was 0.26 for regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001) and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows prediction of the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. Regression coefficient ‘a’ represents the acceleration of cell division, and regression coefficient ‘b’ represents the speed of cell division. We hypothesize that regression coefficient ‘c’ could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originated. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
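As a sketch, the quadratic fit and the synthetize value (the model's value at T = 99 h) can be reproduced with an ordinary least-squares polynomial fit; the time points and cell counts below are invented, not patient data.

```python
import numpy as np

# Quadratic fit of cell count vs. time for one embryo (illustrative
# sketch; the observation times and cell counts are made up).
def fit_embryo(times_h, n_cells):
    # Missing observations are simply absent from the arrays, so the fit
    # still works with incomplete annotation, unlike fixed-timepoint rules.
    a, b, c = np.polyfit(times_h, n_cells, deg=2)
    synthetize = a * 99**2 + b * 99 + c  # model value at T = 99 h
    return a, b, c, synthetize

times = np.array([26.0, 36.0, 44.0, 52.0, 62.0])  # hours of observation
cells = np.array([2, 4, 7, 8, 10])
a, b, c, s = fit_embryo(times, cells)
print(f"a={a:.4f}, b={b:.3f}, c={c:.2f}, synthetize={s:.2f}")
```

An embryo would then be classified by comparing each coefficient and the synthetize value against the reported cut-offs.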

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 305
381 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia

Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez

Abstract:

Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa·s), honey (350-1750 mPa·s) and pudding (>1750 mPa·s). The adequate viscosity level should be identified for every patient according to her/his impairment. Nevertheless, a recent systematic review on dysphagia diets indicated that there is no evidence of any clinically relevant transition between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters therefore need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the amount corresponding to nectar, honey and pudding consistencies in 300 mL of tap water at 18-20ºC. The mixture was stirred for about 30 s. Once it was homogeneous, it was dispensed into 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. 
Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). The penetration distance was set at 10 mm and the speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)-time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa·s) and F (g) were highly correlated and evolved similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
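Since F and viscosity were linearly correlated, viscosity can be estimated from the peak force with a simple linear fit; the paired measurements below are invented for illustration, not the study's calibration data.

```python
# Predicting viscosity from the extrusion-compression peak force via a
# linear fit (illustrative sketch; the paired values are hypothetical).
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # slope and intercept

forces_g = [20, 45, 80, 150, 260]           # peak force F (g)
viscosities = [300, 700, 1300, 2500, 4400]  # mPa·s, rotational viscometer
m, b = linear_fit(forces_g, viscosities)
predicted = m * 100 + b  # viscosity estimate for a new sample with F = 100 g
print(f"viscosity ~ {predicted:.0f} mPa·s")
```

A calibration of this kind would let the quicker texture-analyzer measurement stand in for the viscometer in routine checks of thickened liquids.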

Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener

Procedia PDF Downloads 365
380 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in a tumor's genome are functionally neutral; however, some damage critical processes and provide the tumor with a selective growth advantage (these are termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein-coding sequences. However, the exome comprises only 3% of the human genome, and thus SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored, mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed its "set of regulatory elements" (SRE)). Considering the multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach depends greatly on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and by statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequencing (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomic resources. The identification of candidate driver noncoding SMs is based on their recurrence: we searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected numerous SM "hotspots" in seven different cancer types. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target genes, using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS.
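A recurrence test against a global background rate can be sketched as a one-sided Poisson test on the SM count pooled over a gene's SRE. The SRE length, cohort size, background rate and observed count below are invented for illustration, and the actual analysis also applies local background corrections.

```python
from math import exp

# One-sided Poisson recurrence test for SMs pooled over a gene's set of
# regulatory elements (illustrative sketch; all numbers are invented).
def poisson_sf(k, lam):
    # P(X >= k) for X ~ Poisson(lam), k >= 1, accumulated from the pmf
    term, cdf = exp(-lam), exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term          # cdf now holds P(X <= k - 1)
    return 1.0 - cdf

sre_length_bp = 12_000   # total bp across all enhancers of the gene
n_samples = 2_500        # cohort size
background_rate = 2e-6   # SMs per bp per sample (global background)
expected = sre_length_bp * n_samples * background_rate
observed = 95            # SMs observed in this SRE across the cohort
p = poisson_sf(observed, expected)
print(f"expected={expected:.1f}, observed={observed}, p={p:.2e}")
```

Genes whose SREs give small p-values after multiple-testing correction would be the "hotspot" candidates taken forward to the expression-association analysis.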

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 102
379 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is a two-dimensional (2D) nanoscale allotrope of carbon. Its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate for various modern applications such as drug delivery, supercapacitors and sensors. GO has also been used in the photothermal treatment of cancers and Alzheimer's disease. The main reason for choosing GO in our work is that it is a surface-active molecule: it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl and epoxide, on its surface and in its basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We used several spectroscopic techniques in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared to its photophysics in reverse micelles without GO. We report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with those in normal reverse micelles (i.e., reverse micelles in the absence of GO) using the 7-DCCA molecule. 
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation time at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles because, in the presence of GO, the polarity of the system increases; as the polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is shorter than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
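Rotational relaxation times of this kind are typically extracted by fitting the anisotropy decay to r(t) = r₀·exp(-t/θ). A minimal sketch on synthetic single-exponential data follows; real TCSPC decays are often multi-exponential and are fitted with dedicated analysis software.

```python
import math

# Single-exponential fit of a fluorescence anisotropy decay to extract
# the rotational relaxation time theta (illustrative sketch; the decay
# below is synthetic, standing in for TCSPC data).
def fit_theta(times_ns, r_vals, r0):
    # Linearize r(t) = r0 * exp(-t / theta):  ln(r0 / r) = t / theta,
    # then take the least-squares slope through the origin and invert it.
    num = sum(t * math.log(r0 / r) for t, r in zip(times_ns, r_vals))
    den = sum(t * t for t in times_ns)
    return den / num

theta_true, r0 = 1.8, 0.35               # ns, and the initial anisotropy
ts = [0.2 * i for i in range(1, 30)]
rs = [r0 * math.exp(-t / theta_true) for t in ts]
print(f"theta ~ {fit_theta(ts, rs, r0):.2f} ns")
```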

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 155
378 Characterization of Berberine Hydrochloride Nanoparticles

Authors: Bao-Fang Wen, Meng-Na Dai, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Xun-Bao Yin, Yu-Han Zhao, Hong-Wei Sun, Wei-Fen Zhang

Abstract:

Drug-loaded nanoparticles containing berberine hydrochloride (BH/FA-CTS-NPs) were prepared, and their physicochemical characteristics and inhibitory effect on HeLa cells were investigated. Folic acid-conjugated chitosan (FA-CTS) was prepared by reaction of folic acid active ester with the amino groups of chitosan; BH/FA-CTS-NPs were then prepared using an ionic cross-linking technique with BH as the model drug. The morphology and particle size were determined by transmission electron microscopy (TEM). The average diameters and polydispersity index (PDI) were evaluated by dynamic light scattering (DLS). The interactions among the components of the nanocomplex were characterized by Fourier transform infrared spectroscopy (FT-IR). The entrapment efficiency (EE), drug loading (DL) and in vitro release were studied by UV spectrophotometry. The anti-proliferative, anti-migratory and anti-invasive effects of BH/FA-CTS-NPs were investigated using MTT assays, wound healing assays, Annexin-V-FITC single staining assays and flow cytometry. A subcutaneous HeLa xenograft tumor model in nude mice was established and treated with different drugs to observe the in vivo effect of BH/FA-CTS-NPs on HeLa tumors. The BH/FA-CTS-NPs prepared in this experiment have a regular shape and uniform particle size, with no aggregation. The DLS results showed that the mean particle size, PDI and zeta potential of BH/FA-CTS NPs were (249.2 ± 3.6) nm, 0.129 ± 0.09 and 33.6 ± 2.09, respectively, and the average diameter and PDI were stable over 90 days. The FT-IR results, via the characteristic peaks of FA-CTS and BH/FA-CTS-NPs, confirmed that FA-CTS was cross-linked successfully and BH was encapsulated in the NPs. The EE and DL were (79.3 ± 3.12)% and (7.24 ± 1.41)%, respectively. 
The in vitro release study indicated that the cumulative release of BH/FA-CTS NPs was (89.48 ± 2.81)% in phosphate-buffered saline (PBS, pH 7.4) within 48 h. The results of the MTT and wound healing assays indicated that BH/FA-CTS NPs not only inhibited the proliferation of HeLa cells in a concentration- and time-dependent manner but also induced apoptosis. The subcutaneous xenograft tumor formation rate of the human cervical cancer cell line HeLa in nude mice was 98% two weeks after inoculation. Compared with the BH group and the BH/CTS-NPs group, xenograft tumor growth in the BH/FA-CTS-NPs group was obviously slower, indicating that BH/FA-CTS-NPs significantly inhibit the growth of HeLa xenograft tumors. BH/FA-CTS NPs with a sustained release effect could thus be prepared successfully by the ionic crosslinking method. Considering these properties, which block proliferation and impair the migration of the HeLa cell line, BH/FA-CTS NPs could be an important compound for consideration in the treatment of cervical cancer.
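Entrapment efficiency and drug loading of this kind follow from standard mass-balance formulas (entrapped drug relative to total drug, and entrapped drug relative to nanoparticle mass, respectively). The sketch below uses invented masses; in practice the free-drug amount would come from the UV calibration curve.

```python
# Entrapment efficiency (EE) and drug loading (DL) from a mass balance
# (illustrative sketch; the masses are invented, not the study's data).
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    # EE% = entrapped drug / total drug added
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

def drug_loading(total_drug_mg, free_drug_mg, nanoparticle_mass_mg):
    # DL% = entrapped drug / total nanoparticle mass
    return 100.0 * (total_drug_mg - free_drug_mg) / nanoparticle_mass_mg

ee = entrapment_efficiency(10.0, 2.1)   # 7.9 mg of 10 mg BH entrapped
dl = drug_loading(10.0, 2.1, 109.0)     # drug fraction of NP mass
print(f"EE = {ee:.1f}%, DL = {dl:.2f}%")
```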

Keywords: folic-acid, chitosan, berberine hydrochloride, nanoparticles, cervical cancer

Procedia PDF Downloads 121