Search results for: Michael Levin
98 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely that a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 572
97 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare on the one hand, and the rise in the aging population and associated chronic disease on the other, are putting an increasing burden on the current health care system in many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) is very significant relative to the total health care cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. It is evident that home healthcare offers numerous advantages, notably low costs and high patients’ quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD. However, this paper focuses on patients’ safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis organization which consists of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, which includes six nephrologists and 120 other staff, i.e. nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, they have just recently developed the first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e. the machine and equipment provider), which may not consider the hygienic quality of the patient’s home. The type of machine provided to patients at home is similar to the one in the center. The model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home; however, telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 117
96 A Comparison of the Microbiology Profile for Periprosthetic Joint Infection (PJI) of Knee Arthroplasty and Lower Limb Endoprostheses in Tumour Surgery
Authors: Amirul Adlan, Robert A. McCulloch, Neil Jenkins, Michael Parry, Jonathan Stevenson, Lee Jeys
Abstract:
Background and Objectives: The current antibiotic prophylaxis for oncological patients is based upon evidence from primary arthroplasty despite significant differences in both patient group and procedure. The aim of this study was to compare the microbiology organisms responsible for PJI in patients who underwent two-stage revision for infected primary knee replacement with those of infected oncological endoprostheses of the lower limb in a single institution. This will subsequently guide decision making regarding antibiotic prophylaxis at primary implantation for oncological procedures and empirical antibiotics for infected revision procedures (where the infecting organism(s) are unknown). Patients and Methods: 118 patients were treated with two-stage revision surgery for infected knee arthroplasty and lower limb endoprostheses between 1999 and 2019. 74 patients had two-stage revision for PJI of knee arthroplasty, and 44 had two-stage revision of lower limb endoprostheses. There were 68 males and 50 females. The mean ages for the knee arthroplasty cohort and the lower limb endoprostheses cohort were 70.2 years (50-89) and 36.1 years (12-78), respectively (p<0.01). Patient host and extremity criteria were categorised according to the MSIS Host and Extremity Staging System. Patient microbiological cultures, the incidence of polymicrobial infection and multi-drug resistance (MDR) were analysed and recorded. Results: Polymicrobial infection was reported in 16% (12 patients) of knee arthroplasty PJI and 14.5% (8 patients) of endoprostheses PJI (p=0.783). There was a significantly higher incidence of MDR in endoprostheses PJI, isolated in 36.4% of cultures, compared to knee arthroplasty PJI (17.2%) (p=0.01). Gram-positive organisms were isolated in more than 80% of cultures from both cohorts. Coagulase-negative Staphylococcus (CoNS) was the commonest Gram-positive organism, and Escherichia coli was the commonest Gram-negative organism in both groups. According to the MSIS staging system, the host and extremity grades of the knee arthroplasty PJI cohort were significantly better than those of the endoprostheses PJI cohort (p<0.05). Conclusion: Empirical antibiotic management of PJI in orthopaedic oncology is based upon PJI in arthroplasty despite differences in both host and microbiology. Our results show a significant increase in MDR pathogens within the oncological group despite CoNS being the most common infective organism in both groups. Endoprosthetic patients presented with poorer host and extremity criteria. These factors should be considered when managing this complex patient group, emphasising the importance of broad-spectrum antibiotic prophylaxis and preoperative sampling to ensure appropriate perioperative antibiotic cover.
Keywords: microbiology, periprosthetic joint infection, knee arthroplasty, endoprostheses
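As a rough illustration of the group comparison reported above, the sketch below back-calculates approximate MDR counts from the quoted percentages (17.2% of 74 knee arthroplasty cases, 36.4% of 44 endoprosthesis cases) and runs a chi-squared test of independence. The abstract does not name the exact test used, so the test choice and the rounded counts are assumptions for illustration only.

```python
# Hypothetical re-check of the MDR comparison; counts are approximations
# derived from the percentages quoted in the abstract, not the raw data.
from scipy.stats import chi2_contingency

knee_total, endo_total = 74, 44
knee_mdr = round(0.172 * knee_total)   # ~13 MDR isolates (knee arthroplasty PJI)
endo_mdr = round(0.364 * endo_total)   # ~16 MDR isolates (endoprosthesis PJI)

table = [
    [knee_mdr, knee_total - knee_mdr],  # MDR vs non-MDR, knee arthroplasty
    [endo_mdr, endo_total - endo_mdr],  # MDR vs non-MDR, endoprosthesis
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```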
Procedia PDF Downloads 114
95 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance to minimize the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in the catalytic oxidative denitrogenation could be successfully elucidated. This is the first time such a pathway could be traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Hereby, the organic substrate serves as a reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur or nitrogen produce SO₂ and NOx during combustion in the engines, which are not only harmful to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH3. The lone pair of the nitrogen atom forms strong bonds to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize the desulfurization and denitrogenation effectiveness in comparison to just extraction and adsorption, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step. The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H8PV5Mo7O40 (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, whereas the sulfur-containing fuel components are oxidized after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization improves with simultaneous denitrification in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigations.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 178
94 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing scale of severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121 or RA 10121) as the framework for disaster risk reduction and management was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered to be the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements are needed in RA 10121 in order to meet the challenges of large-scale disasters. The current structure, where government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and recovery and response, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government’s tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM. The council-type arrangement for coordination has also been rendered inoperable by Typhoon Haiyan because the agency responsible for coordination does not have decision-making authority to mobilize action and resources of other agencies which are members of the council. Resources have been devolved to agencies responsible for each thematic area and there is no clear command and direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery
Procedia PDF Downloads 286
93 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance compared to shale oil reservoirs since 2009 and, with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is of importance. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less was used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin) with analysis conducted on each of the basins to provide a holistic outlook. The dataset consisted of only horizontal wells that started production from 2008 to at most 2015, with 1,835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data was analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR values from each basin was loaded into the Palisade Risk software and a log-normal distribution, typical of Barnett shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate/year) as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF-$13/MCF based on Henry Hub spot prices from 2008-2015. One of the major findings in this study was that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at all finding and development costs irrespective of the gas price in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, determine the basins that are most economic, and identify the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
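To make the workflow concrete, here is a minimal Python sketch of the Monte Carlo step described above: EUR values are drawn from a log-normal distribution, rescaled onto a simple exponential-decline production profile, and discounted at 10% per year against a single finding and development cost. The distribution parameters, decline rate and price are illustrative assumptions, not the study's fitted inputs.

```python
# Illustrative Monte Carlo NPV screen (assumed parameters, not the study's inputs).
import numpy as np

rng = np.random.default_rng(42)
eur_bcf = rng.lognormal(mean=np.log(1.5), sigma=0.6, size=1000)  # sampled EUR, Bcf

fd_cost = 2_000_000          # finding & development cost (one of the levels studied)
gas_price = 4.0              # $/MCF, within the 2-13 $/MCF range considered
months = np.arange(1, 241)   # 20-year horizon
monthly_rate = (1 + 0.10) ** (1 / 12) - 1
decline = 0.03               # monthly exponential decline (illustrative)

npvs = []
for eur in eur_bcf:
    profile = np.exp(-decline * months)
    profile *= (eur * 1e6) / profile.sum()   # rescale so volumes sum to the sampled EUR (MCF)
    cash_flow = profile * gas_price
    npvs.append(np.sum(cash_flow / (1 + monthly_rate) ** months) - fd_cost)

npvs = np.array(npvs)
print(f"mean NPV: {npvs.mean():,.0f}; share of iterations with NPV > 0: {(npvs > 0).mean():.0%}")
```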
Procedia PDF Downloads 299
92 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear if different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking if different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree=1 and disagree=-1). We firstly checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they changed their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8, will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. Also, this configuration is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
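The encoding and update behaviour described above can be sketched as follows; the influence and noise coefficients are illustrative assumptions rather than the fitted values, but the structure (a weak pull toward the displayed opinion plus comparatively large random fluctuations, with certainty roughly preserved) mirrors the observations reported in the abstract.

```python
# Sketch of the "continuous opinion" encoding and a noise-dominated update rule.
import numpy as np

rng = np.random.default_rng(0)

def encode(agree: bool, certainty: int) -> float:
    """Continuous opinion in [-10, 10]: (+1 for agree, -1 for disagree) * certainty."""
    return (1 if agree else -1) * certainty

def update(own: float, shown_agree: bool, influence=0.5, noise_sd=2.0) -> float:
    """One interaction: small step toward the shown opinion plus a larger random term."""
    target = 10.0 if shown_agree else -10.0
    social_shift = influence * np.sign(target - own)   # weak social influence
    fluctuation = rng.normal(0.0, noise_sd)            # dominant random component
    return float(np.clip(own + social_shift + fluctuation, -10.0, 10.0))

opinion = encode(agree=True, certainty=7)    # +7
print(update(opinion, shown_agree=False))    # drifts noisily; rarely jumps toward -7
```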
Procedia PDF Downloads 108
91 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention which involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included PBC instead of WBC, and if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. One of the four articles revealed that WBC had a harmful effect compared to CWI and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus in terms of WBC’s effectiveness at treating exercise-induced muscle damage following running compared to other interventions, it seems as though WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted in order to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 237
90 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of the two 'degrees of freedom' and how they impact the model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., not intended for the fitting of real-world data), it is important to understand if the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change by using different representations. The process of measuring opinions is non-trivial as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how this transformation affects opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model’s dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even on the same population. This effect may be as strong as providing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces both changes at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
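A toy demonstration of this third degree of freedom is sketched below: the same bounded-confidence interaction rule is run on identical initial opinions expressed on two different, monotonically related scales, and the number of final clusters can differ. The bounded-confidence rule and all parameter values are illustrative choices, not necessarily the two literature models used in the study.

```python
# Toy demo: one interaction rule, two monotonically related opinion scales.
import numpy as np

rng = np.random.default_rng(1)

def simulate(opinions, epsilon=0.2, mu=0.5, steps=20000):
    """Bounded-confidence dynamics: pairs interact only if their opinions are close."""
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < epsilon:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

raw = rng.uniform(0, 1, 200)     # opinions measured on one scale
rescaled = raw ** 3              # the same opinions after a non-linear rescaling

print("clusters on the raw scale:     ", n_clusters(simulate(raw)))
print("clusters on the rescaled scale:", n_clusters(simulate(rescaled)))
```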
Procedia PDF Downloads 118
89 Optimizing Weight Loss with AI (GenAISᵀᴹ): A Randomized Trial of Dietary Supplement Prescriptions in Obese Patients
Authors: Evgeny Pokushalov, Andrey Ponomarenko, John Smith, Michael Johnson, Claire Garcia, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Sevda Bayramova, Richard Miller
Abstract:
Background: Obesity is a complex, multifactorial chronic disease that poses significant health risks. Recent advancements in artificial intelligence (AI) offer the potential for more personalized and effective dietary supplement (DS) regimens to promote weight loss. This study aimed to evaluate the efficacy of AI-guided DS prescriptions compared to standard physician-guided DS prescriptions in obese patients. Methods: This randomized, parallel-group pilot study enrolled 60 individuals aged 40 to 60 years with a body mass index (BMI) of 25 or greater. Participants were randomized to receive either AI-guided DS prescriptions (n = 30) or physician-guided DS prescriptions (n = 30) for 180 days. The primary endpoints were the percentage change in body weight and the proportion of participants achieving a ≥5% weight reduction. Secondary endpoints included changes in BMI, fat mass, visceral fat rating, systolic and diastolic blood pressure, lipid profiles, fasting plasma glucose, hsCRP levels, and postprandial appetite ratings. Adverse events were monitored throughout the study. Results: Both groups were well balanced in terms of baseline characteristics. Significant weight loss was observed in the AI-guided group, with a mean reduction of -12.3% (95% CI: -13.1 to -11.5%) compared to -7.2% (95% CI: -8.1 to -6.3%) in the physician-guided group, resulting in a treatment difference of -5.1% (95% CI: -6.4 to -3.8%; p < 0.01). At day 180, 84.7% of the AI-guided group achieved a weight reduction of ≥5%, compared to 54.5% in the physician-guided group (Odds Ratio: 4.3; 95% CI: 3.1 to 5.9; p < 0.01). Significant improvements were also observed in BMI, fat mass, and visceral fat rating in the AI-guided group (p < 0.01 for all). Postprandial appetite suppression was greater in the AI-guided group, with significant reductions in hunger and prospective food consumption, and increases in fullness and satiety (p < 0.01 for all). Adverse events were generally mild-to-moderate, with higher incidences of gastrointestinal symptoms in the AI-guided group, but these were manageable and did not impact adherence. Conclusion: The AI-guided dietary supplement regimen was more effective in promoting weight loss, improving body composition, and suppressing appetite compared to the physician-guided regimen. These findings suggest that AI-guided, personalized supplement prescriptions could offer a more effective approach to managing obesity. Further research with larger sample sizes is warranted to confirm these results and optimize AI-based interventions for weight loss.
Keywords: obesity, AI-guided, dietary supplements, weight loss, personalized medicine, metabolic health, appetite suppression
Procedia PDF Downloads 4
88 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time, it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and determine if this travel is different from the general public. The research investigation used Florida's statewide continuous-count station traffic volumes, which were then compared between years, to identify locations where traffic was moving differently during the evacuation. The data was then used to identify days on which traffic was significantly different between years. While the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. Data collection and management obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification. The data was then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different and on which dates statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years. Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and post-storm reentry. Among the more significant findings, the research results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers are driving longer distances and possibly longer hours before hurricanes. Another significant finding of this research was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles. This finding suggests that commercial vehicles are perhaps evacuating in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and explore additional factors that may influence freight movements during evacuations.
Keywords: evacuation, freight, travel time
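The phase-three comparison can be sketched in a few lines of Python; the file name and column layout are assumptions for illustration, since the abstract does not describe the data format.

```python
# Sketch of the two-sample, two-tailed t-test by vehicle classification.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("station_volumes.csv")   # assumed columns: year, veh_class, volume

for veh_class, grp in df.groupby("veh_class"):
    v2017 = grp.loc[grp["year"] == 2017, "volume"]
    v2018 = grp.loc[grp["year"] == 2018, "volume"]
    t_stat, p_value = ttest_ind(v2017, v2018)   # two-sample, two-tailed t-test
    print(f"class {veh_class}: t = {t_stat:.2f}, p = {p_value:.4f}")
```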
Procedia PDF Downloads 65
87 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on the mitigation of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a reduction of 40 percent in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g. heating, plug loads, mobility), and therefore a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity. Thereby, a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, the implementation of the two-degree goal highly depends on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. As a methodology, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in the country, changes in electricity carriers, the COP of the power plants and distribution losses, as well as imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. Thereby, for each European country, the decarbonization strategies of the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance, hindering the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, and at the same time the compensation of the missing energy by non-renewable energy carriers like lignite and natural gas, is resulting in an increase in the global warming potential of the electricity grid mix. Just after two years, this increase is countervailed by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
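The core of such a time-dynamic grid-mix profile is a share-weighted sum of carrier-specific GWP factors, adjusted for distribution losses, evaluated for each country and year. The sketch below uses illustrative factors and shares, not the project's LCA datasets.

```python
# Share-weighted GWP of an electricity mix (illustrative numbers only).
GWP_FACTOR = {  # kg CO2-eq per kWh generated
    "lignite": 1.10, "natural_gas": 0.45, "nuclear": 0.012, "wind": 0.011, "pv": 0.045,
}

def grid_mix_gwp(shares: dict, distribution_losses: float) -> float:
    """GWP per kWh delivered, given carrier shares that sum to 1."""
    gwp_generated = sum(share * GWP_FACTOR[carrier] for carrier, share in shares.items())
    return gwp_generated / (1.0 - distribution_losses)

mix_2020 = {"lignite": 0.30, "natural_gas": 0.15, "nuclear": 0.12, "wind": 0.28, "pv": 0.15}
mix_2030 = {"lignite": 0.10, "natural_gas": 0.15, "nuclear": 0.00, "wind": 0.50, "pv": 0.25}

for year, mix in (("2020", mix_2020), ("2030", mix_2030)):
    print(year, round(grid_mix_gwp(mix, distribution_losses=0.05), 3), "kg CO2-eq/kWh")
```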
Procedia PDF Downloads 185
86 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence
Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis
Abstract:
Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs), oncogenes, immune surveillance processes, and triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation–adenoma–colorectal cancer (CRC) development. Methods: Four pillars were investigated: (i) endoscopic and conventional histological examinations of patients with CRC, colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp; expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs); expressions of oncogene Ki67 and anti-apoptotic Bcl-2 protein; (iii) expression of CD45, indicator of immune surveillance locally (assessing mainly T and B lymphocytes locally); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA, and 10 controls, a significantly higher presence of Hp-I in the CRA (68%) and CRC group (84%) were found compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of patients with CRA associated with moderate/severe dysplasia (88%) and CRC patients with moderate/severe degree of malignancy (91%). Comparable results were also obtained for Ki67, Bcl-2, and CD45 immunohistochemical expressions. Concluding Remarks: Hp-I seems to be involved in the sequence: CRA – dysplasia – CRC, similarly to the upper GI tract oncogenesis, by several pathways such as the following: Beyond Hp-I associated insulin resistance, the major underlying mechanism responsible for the metabolic syndrome (MetS) that increase the risk of colorectal neoplasms, as implied by other Hp-I related MetS pathologies, such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to a perpetual inflammatory upper GIT and colon mucosal damage, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale relative studies with a pathophysiological perspective are necessary to demonstrate in-depth this relationship.Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis
Procedia PDF Downloads 145
85 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing if differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, we repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree=1 and disagree=-1) with the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar between all topics. This suggested that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, in the data, we also observed the effect of social influence. This means that exposure to 'agree' or 'disagree' influenced participants towards respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree,' even though they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. This also allows us to build models which are directly grounded on experimental results.
Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
Procedia PDF Downloads 159
84 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
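The modeling step behind both the global maps and the user-created models can be sketched as a logistic regression of exceedance against environmental predictors; the column and file names below are assumptions for illustration only.

```python
# Sketch of a geogenic contamination hazard model (arsenic > 10 ug/L).
import pandas as pd
from sklearn.linear_model import LogisticRegression

wells = pd.read_csv("wells.csv")                    # measured wells with predictors
predictors = ["soil_ph", "aridity_index", "sediment_age", "slope"]
X = wells[predictors]
y = (wells["arsenic_ug_per_l"] > 10).astype(int)    # WHO limit for arsenic: 10 ug/L

model = LogisticRegression(max_iter=1000).fit(X, y)

grid = pd.read_csv("prediction_grid.csv")           # unmeasured cells, same predictors
grid["p_exceedance"] = model.predict_proba(grid[predictors])[:, 1]
print(grid[["cell_id", "p_exceedance"]].head())
```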
Procedia PDF Downloads 346
83 Design Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions
Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek
Abstract:
The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ Scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict the response to the NASA-defined random vibration spectrum and drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow loop testing measurements. The dynamic response of the system to valve changeovers is presented and discussed using high-bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration
Procedia PDF Downloads 133
82 Synthesis of Functionalized 2-Aryl-2,3-Dihydroquinoline-4(1H)-Ones via Fries Rearrangement of Azetidin-2-Ones
Authors: Parvesh Singh, Vipan Kumar, Vishu Mehra
Abstract:
Quinoline-4-ones represent an important class of heterocyclic scaffolds that have attracted significant interest due to their various biological and pharmacological activities. This heterocyclic unit also constitutes an integral component in drugs used for the treatment of neurodegenerative diseases and sleep disorders, and in antibiotics, viz. norfloxacin and ciprofloxacin. The synthetic accessibility and possibility of functionalization at varied positions in quinoline-4-ones exemplify an elegant platform for the design of combinatorial libraries of functionally enriched scaffolds with a range of pharmacological profiles. They are also considered to be attractive precursors for the synthesis of medicinally imperative molecules such as non-steroidal androgen receptor antagonists, the antimalarial drug Chloroquine and martinellines with antibacterial activity. 2-Aryl-2,3-dihydroquinolin-4(1H)-ones are present in many natural and non-natural compounds and are considered to be the aza-analogs of flavanones. The β-lactam class of antibiotics is generally recognized to be a cornerstone of human health care due to the unparalleled clinical efficacy and safety of this type of antibacterial compound. In addition to their biological relevance as potential antibiotics, β-lactams have also acquired a prominent place in organic chemistry as synthons and provide highly efficient routes to a variety of non-protein amino acids, such as oligopeptides, peptidomimetics, nitrogen-heterocycles, as well as biologically active natural and unnatural products of medicinal interest such as indolizidine alkaloids, paclitaxel, docetaxel, taxoids, cryptophycins, lankacidins, etc. A straightforward route toward the synthesis of quinoline-4-ones via the triflic acid-assisted Fries rearrangement of N-aryl-β-lactams has been reported by Tepe and co-workers. The ring expansion observed in this case was solely attributed to the inherent ring strain in the β-lactam ring, because the corresponding larger lactam failed to undergo rearrangement under the reaction conditions. The abovementioned protocol has recently been extended by our group for the synthesis of benzo[b]-azocinon-6-ones via a tandem Michael addition–Fries rearrangement of sorbyl anilides as well as for the single-pot synthesis of 2-aryl-quinolin-4(3H)-ones through the Fries rearrangement of 3-dienyl-β-lactams. In continuation of our synthetic endeavours with the β-lactam ring, and in view of the lack of convenient approaches for the synthesis of C-3 functionalized quinolin-4(1H)-ones, the present work describes the single-pot synthesis of C-3 functionalized quinolin-4(1H)-ones via the triflic acid-promoted Fries rearrangement of C-3 vinyl/isopropenyl-substituted β-lactams. In addition, DFT calculations and MD simulations were performed to investigate the stability profiles of the synthetic compounds.
Keywords: dihydroquinoline, Fries rearrangement, azetidin-2-ones, quinoline-4-ones
Procedia PDF Downloads 249
81 Psychological Predictors in Performance: An Exploratory Study of a Virtual Ultra-Marathon
Authors: Michael McTighe
Abstract:
Background: The COVID-19 pandemic caused the cancellation of many large-scale in-person sporting events, which led to an increase in the availability of virtual ultra-marathons. This study intended to assess how participation in virtual long-distance races relates to levels of physical activity for an extended period of time. Moreover, traditional ultra-marathons are known for being not only physically demanding, but also mentally and emotionally challenging. A second component of this study was to assess how psychological constructs related to emotion regulation and mental toughness predict overall performance in the sport. Method: 83 virtual runners participating in a four-month 1000-kilometer race with the option to exceed 1000 kilometers completed a questionnaire exploring demographics, their performance, and experience in the virtual race. Participants also completed the Difficulties in Emotion Regulation Scale (DERS) and the Sports Mental Toughness Questionnaire (SMTQ). Logistic regressions assessed these constructs' utility in predicting completion of the 1000-kilometer distance in the time allotted. Multiple regression was employed to predict the total distance traversed during the four-month race beyond 1000 kilometers. Result: Neither mental toughness nor emotion regulation was a significant predictor of completing the virtual race's basic 1000-kilometer finish. However, both variables included together were marginally significant predictors of total miles traversed over the entire event beyond 1000 K (p = .051). Additionally, participation in the event promoted an increase in healthy activity, with participants running and walking significantly more in the four months during the event than in the four months leading up to it. Discussion: This research intended to explore how psychological constructs relate to performance in a virtual type of endurance event, and how involvement in these types of events relates to levels of activity. Higher levels of mental toughness and lower levels of difficulties in emotion regulation were associated with greater performance, and participation in the event promoted an increase in athletic involvement. Future psychological skill training aimed at improving emotion regulation and mental toughness may be used to enhance athletic performance in these sports, and future investigations into these events could explore how general participation may influence these constructs over time. Finally, these results suggest that participation in this logistically accessible and affordable type of sport can promote greater involvement in healthy activities related to running and walking.
Keywords: virtual races, emotion regulation, mental toughness, ultra-marathon, predictors in performance
Procedia PDF Downloads 94
80 Rural Entrepreneurship as a Response to Climate Change and Resource Conservation
Authors: Omar Romero-Hernandez, Federico Castillo, Armando Sanchez, Sergio Romero, Andrea Romero, Michael Mitchell
Abstract:
Environmental policies for resource conservation in rural areas include subsidies on services and social programs to cover living expenses. The government's expectation is that rural communities who benefit from social programs, such as payment for ecosystem services, are provided with an incentive to conserve natural resources and preserve natural sinks for greenhouse gases. At the same time, global climate change has affected the lives of people worldwide. The capability to adapt to global warming depends on the available resources and the standard of living, putting rural communities at a disadvantage. This paper explores whether rural entrepreneurship can represent a solution to resource conservation and global warming adaptation in rural communities. The research focuses on a sample of two coffee communities in Oaxaca, Mexico. Researchers used geospatial information contained in aerial photographs of the geographical areas of interest. Households were identified in the photos via their roofs and georeferenced via coordinates. From the household population, a random selection of roofs was performed, and the selected households received a visit. A total of 112 surveys were completed, including questions on socio-demographics, perception of climate change and adaptation activities. The population includes two groups of study: entrepreneurs and non-entrepreneurs. Data was sorted, filtered, and validated. The analysis includes descriptive statistics for exploratory purposes and a multiple regression analysis. Outcomes from the surveys indicate that coffee farmers who demonstrate entrepreneurship skills and hire employees are more eager to adapt to climate change despite the extremely adverse socioeconomic conditions of the region. We show that farmers with entrepreneurial tendencies are more creative in using innovative farm practices such as the planting of shade trees, the use of live fencing instead of wires, and watershed protection techniques, among others. This result counters the notion that small farmers are at the mercy of climate change and have no possibility of being able to adapt to a changing climate. The study also points to roadblocks that farmers face when coping with climate change. Among those roadblocks are a lack of extension services, access to credit, and reliable internet, all of which reduces access to vital information needed in today's constantly changing world. Results indicate that, under some circumstances, funding and supporting entrepreneurship programs may provide more benefit than traditional social programs.
Keywords: entrepreneurship, global warming, rural communities, climate change adaptation
Procedia PDF Downloads 23979 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis
Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr
Abstract:
Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of the treatment. Considering the vast surgical options, we tried to globally compare the available methods in terms of quality of life following surgery. The aim of the study is to compare the quality of life a year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to PRISMA guidelines with Bayesian network meta-analysis. Inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one is surgical, and assessment of the quality of life one year after surgery by validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaires were standardized and pooled to a single scale. Lifestyle intervention was considered as the reference point. Results: An initial reference search yielded 5636 articles. 18 studies were evaluated. In the comparison of the total quality of life score, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, Credible Interval 97.5% (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric bypass (LRYGB) (M: 4.973, CrI: 2.627; 7.317) and open Roux-en-Y gastric bypass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than other bariatric interventions relative to lifestyle intervention. In the analysis of the physical aspects of quality of life, we noted better results for LSG (M: 3.348, CrI: 0.548; 6.147) and the LRYGB procedure (M: 5.070, CrI: 2.896; 7.208) than for the control intervention, and the worst results for open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing emotional aspects, we found better results than the control intervention in LSG, LRYGB, open RYGB, and laparoscopic gastric plication. In general health, better results were observed in LSG (M: 9.144, CrI: 4.704; 13.470), in LRYGB (M: 6.451, CrI: 10.240; 13.830) and in single-anastomosis gastric bypass (M: 8.671, CrI: 1.986; 15.310), and the worst results in open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vitality aspects of quality of life, better results were observed in LSG and LRYGB than in the control intervention. We did not find any differences between bariatric interventions in the role physical, mental, and bodily pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were found after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was found after the open RYGB procedure. Other interventions did not significantly affect the quality of life after a year compared to dietary intervention.Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up
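As a minimal illustration of the "standardized and pooled to a single scale" step mentioned above, and not the authors' actual Bayesian pipeline, the sketch below rescales questionnaire totals to a common 0-100 scale and expresses a surgical arm as a difference from the lifestyle reference arm; the instrument range and the arm means are hypothetical.

```python
# Minimal sketch, not the study's pipeline: pools arm-level questionnaire means onto
# a common 0-100 scale and reports a surgical arm relative to the lifestyle reference.
import numpy as np

def to_common_scale(score, scale_min, scale_max):
    """Linearly rescale a questionnaire total to 0-100 (higher = better quality of life)."""
    return 100.0 * (np.asarray(score, dtype=float) - scale_min) / (scale_max - scale_min)

# Hypothetical arm-level means from trials reporting on different instruments
lsg_arm = to_common_scale([61.0, 66.5, 70.2], 0, 100)        # surgical arm totals
lifestyle_arm = to_common_scale([55.0, 58.1, 57.4], 0, 100)  # reference arm totals

effect_vs_reference = lsg_arm.mean() - lifestyle_arm.mean()
print(f"LSG vs lifestyle intervention on the pooled scale: {effect_vs_reference:+.1f} points")
```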
Procedia PDF Downloads 15778 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects to keep control of resources. In order to conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year’s earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to this literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample. Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management. Higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company, industry and situation-specific factors.Keywords: corporate governance, boards of directors, agency theory, earnings management
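The quartile classification and logistic regression described above can be sketched as follows. This is a hypothetical illustration, not the author's code, and the column names (disc_accruals, indep_dir_tenure, tenure_gap_vs_ceo, free_cash_flow, firm_size, leverage) are assumed stand-ins for the study's variables.

```python
# Hypothetical sketch of the quartile classification and logistic regression described above.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_panel_2005_2015.csv")  # assumed firm-year panel, 51 firms x 11 years

# Within each year, flag firm-years in the top (upwards) or bottom (downwards)
# quartile of discretionary accruals as earnings-management observations.
panel["q1"] = panel.groupby("year")["disc_accruals"].transform(lambda s: s.quantile(0.25))
panel["q3"] = panel.groupby("year")["disc_accruals"].transform(lambda s: s.quantile(0.75))
panel["em_up"] = (panel["disc_accruals"] >= panel["q3"]).astype(int)
panel["em_down"] = (panel["disc_accruals"] <= panel["q1"]).astype(int)

# Logistic regression of upwards earnings management on tenure measures and controls
logit_fit = smf.logit(
    "em_up ~ indep_dir_tenure + tenure_gap_vs_ceo + free_cash_flow + firm_size + leverage",
    data=panel,
).fit()
print(logit_fit.summary())
```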
Procedia PDF Downloads 23377 Inconsistent Effects of Landscape Heterogeneity on Animal Diversity in an Agricultural Mosaic: A Multi-Scale and Multi-Taxon Investigation
Authors: Chevonne Reynolds, Robert J. Fletcher, Jr, Celine M. Carneiro, Nicole Jennings, Alison Ke, Michael C. LaScaleia, Mbhekeni B. Lukhele, Mnqobi L. Mamba, Muzi D. Sibiya, James D. Austin, Cebisile N. Magagula, Themba’alilahlwa Mahlaba, Ara Monadjem, Samantha M. Wisely, Robert A. McCleery
Abstract:
A key challenge for the developing world is reconciling biodiversity conservation with the growing demand for food. In these regions, agriculture is typically interspersed among other land-uses, creating heterogeneous landscapes. A primary hypothesis for promoting biodiversity in agricultural landscapes is the habitat heterogeneity hypothesis. While there is evidence that landscape heterogeneity positively influences biodiversity, the application of this hypothesis is hindered by the need to determine which components of landscape heterogeneity drive these effects and at what spatial scale(s). Additionally, whether diverse taxonomic groups are similarly affected is central to determining the applicability of this hypothesis as a general conservation strategy in agricultural mosaics. Two major components of landscape heterogeneity are compositional and configurational heterogeneity. Disentangling the roles of each component is important for biodiversity conservation because each represents different mechanisms underpinning variation in biodiversity. We identified a priori independent gradients of compositional and configurational landscape heterogeneity within an extensive agricultural mosaic in north-eastern Swaziland. We then tested how bird, dung beetle, ant and meso-carnivore diversity responded to compositional and configurational heterogeneity across six different spatial scales. To determine if a general trend could be observed across multiple taxa, we also tested which component and spatial scale was most influential across all taxonomic groups combined. Compositional, not configurational, heterogeneity explained diversity in each taxonomic group, with the exception of meso-carnivores. Bird and ant diversity was positively correlated with compositional heterogeneity at fine spatial scales (< 1000 m), whilst dung beetle diversity was negatively correlated with compositional heterogeneity at broader spatial scales (> 1500 m). Importantly, because of these contrasting effects across taxa, there was no effect of either component of heterogeneity on the combined taxonomic diversity at any spatial scale. The contrasting responses across taxonomic groups exemplify the difficulty of implementing effective conservation strategies that meet the requirements of diverse taxa. To promote diverse communities across a range of taxa, conservation strategies must be multi-scaled and may involve different strategies at varying scales to offset the contrasting influences of compositional heterogeneity. A diversity of strategies is likely key to conserving biodiversity in agricultural mosaics, and we have demonstrated that a landscape management strategy that only manages for heterogeneity at one particular scale will likely fall short of management objectives.Keywords: agriculture, biodiversity, composition, configuration, heterogeneity
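For readers unfamiliar with the two components being contrasted, the sketch below computes a simple compositional metric (Shannon diversity of land-cover classes) and a simple configurational metric (edge density) within windows of increasing size around a survey point. The land-cover grid, cell size and scales are hypothetical and only illustrate the multi-scale design, not the study's actual landscape metrics.

```python
# Illustrative sketch only: compositional (Shannon diversity of land-cover types) and
# configurational (edge density) heterogeneity at several spatial scales around a point.
import numpy as np

rng = np.random.default_rng(0)
landcover = rng.integers(0, 5, size=(400, 400))   # hypothetical grid: 5 land-use classes, 10 m cells

def shannon_diversity(window):
    _, counts = np.unique(window, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def edge_density(window):
    # Count borders between adjacent cells of different classes, per cell of window area.
    horizontal = (window[:, :-1] != window[:, 1:]).sum()
    vertical = (window[:-1, :] != window[1:, :]).sum()
    return (horizontal + vertical) / window.size

centre = (200, 200)
for radius_cells in (50, 100, 150):               # roughly 500 m, 1000 m, 1500 m at 10 m cells
    r0, r1 = centre[0] - radius_cells, centre[0] + radius_cells
    c0, c1 = centre[1] - radius_cells, centre[1] + radius_cells
    window = landcover[r0:r1, c0:c1]
    print(radius_cells * 10, "m:",
          "composition =", round(shannon_diversity(window), 3),
          "| configuration =", round(edge_density(window), 3))
```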
Procedia PDF Downloads 26076 Determining the Threshold for Protective Effects of Aerobic Exercise on Aortic Structure in a Mouse Model of Marfan Syndrome Associated Aortic Aneurysm
Authors: Christine P. Gibson, Ramona Alex, Michael Farney, Johana Vallejo-Elias, Mitra Esfandiarei
Abstract:
Aortic aneurysm is the leading cause of death in Marfan syndrome (MFS), a connective tissue disorder caused by mutations in the fibrillin-1 gene (FBN1). MFS aneurysm is characterized by weakening of the aortic wall due to elastin fiber fragmentation and disorganization. Their above-average height and distinct physical features make young adults with MFS desirable candidates for competitive sports, but little is known about the exercise limit at which they will be at risk for aortic rupture. On the other hand, aerobic cardiovascular exercise has been shown to have protective effects on the heart and aorta. We have previously reported that mild aerobic exercise can delay the formation of aortic aneurysm in a mouse model of MFS. In this study, we aimed to investigate the effects of various levels of exercise intensity on the progression of aortic aneurysm in the mouse model. Starting at 4 weeks of age, we subjected control and MFS mice to different levels of exercise intensity (8 m/min, 10 m/min, 15 m/min, and 20 m/min, corresponding to 55%, 65%, 75%, and 85% of VO2 max, respectively) on a treadmill for 30 minutes per day, five days a week, for the duration of the study. At 24 weeks of age, aortic tissues were isolated and subjected to structural and functional studies using histology and wire myography in order to evaluate the effects of different exercise routines on elastin fragmentation and organization and aortic wall elasticity/stiffness. Our data show that exercise training at intensity levels between 55% and 75% significantly reduces elastin fragmentation and disorganization, with less recovery observed in the 85% MFS group. Elasticity was also significantly restored in MFS mice subjected to 55%-75% intensity; however, the recovery was less pronounced in MFS mice subjected to 85% intensity. Furthermore, our data show that smooth muscle cell (SMC) contraction in response to the vasoconstrictor agent phenylephrine (100 nM) is significantly reduced in MFS aorta (54.84 ± 1.63 mN/mm2) as compared to control (95.85 ± 3.04 mN/mm2). At 55% intensity, exercise did not rescue SMC contraction (63.45 ± 1.70 mN/mm2), while at higher intensity levels, SMC contraction in response to phenylephrine was restored to levels similar to control aorta [65% (81.88 ± 4.57 mN/mm2), 75% (86.22 ± 3.84 mN/mm2), and 85% (83.91 ± 5.42 mN/mm2)]. This study provides the first evidence that high-intensity exercise (e.g. 85%) may not provide the most beneficial effects on aortic function (vasoconstriction) and structure (elastin fragmentation, aortic wall elasticity) during the progression of aortic aneurysm in MFS mice. On the other hand, based on our observations, medium-intensity exercise (e.g. 65%) seems to provide the utmost protective effects on aortic structure and function in MFS mice. These findings provide new insights into the potential capacity in which MFS patients could participate in various aerobic exercise routines, especially young adults affected by cardiovascular complications, particularly aortic aneurysm. This work was funded by the Midwestern University Research Fund.Keywords: aerobic exercise, aortic aneurysm, aortic wall elasticity, elastin fragmentation, Marfan syndrome
Procedia PDF Downloads 38075 Implementation of Ecological and Energy-Efficient Building Concepts
Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler
Abstract:
A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and also the energy consumption during the use phase. Therefore, the overall objective of this EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes for carbon-neutral and “LIFE cycle”-oriented residential buildings and to make energy-efficient settlements the standard of tomorrow in line with the EU 2020 objectives. To this end, a resource- and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, which includes 6 living units and a community area as well as 2 single-family houses with a total usable floor surface of approximately 740 m². Different innovative straw bale construction types (load-bearing and prefabricated non-load-bearing modules), together with a highly innovative energy-supply system based on the maximum use of thermal energy for thermal energy services, are going to be implemented. Therefore, only renewable resources and alternative energies are used to generate thermal as well as electrical energy. This includes the use of solar energy for space heating, hot water and household appliances like the dishwasher or washing machine, but also a cooking place for the community area operated with thermal oil as a heat transfer medium at a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels are used to provide thermal and electric energy for the living units according to the seasonal demand. The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials like straw bales. The building components will be produced collaboratively by local SMEs that are organised in an efficient way. The whole building process and its results are monitored and prepared for knowledge transfer and dissemination, including a trial living phase in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then serve as templates for a further modular extension of the settlement in a second phase.Keywords: energy-efficiency, green architecture, renewable resources, sustainable building
Procedia PDF Downloads 14874 Analyzing the Mission Drift of Social Business: Case Study of Restaurant Providing Professional Training to At-Risk Youth
Authors: G. Yanay-Ventura, H. Desivilya Syna, K. Michael
Abstract:
Social businesses are based on the idea that an enterprise can be established for the sake of profit and, at the same time, with the aim of fulfilling social goals. Yet, the question of how these goals can be integrated in practice to derive parallel benefit in both realms still needs to be examined. Particularly notable in this context is the ‘governance challenge’ of social businesses, meaning the danger that the mission drifts from the social goal in the pursuit of good business. This study is based on an evaluation study of a social business that operates as a restaurant providing professional training to at-risk youth. The evaluation was based on the collection of a variety of data through interviews with stakeholders in the enterprise (directors and managers, business partners, social partners, and position holders in the restaurant and the social enterprise), a focus group consisting of the youth receiving the professional training, observations of the restaurant’s operation, and analysis of the social enterprise’s primary documents. The evaluation highlighted significant strengths of the social enterprise, including reaching business sustainability relatively fast, effective management of the restaurant, stable employment of the restaurant staff, and effective management of the social project. The social enterprise and business management have both enjoyed positive evaluations from a variety of stakeholders. Clearly, the restaurant was deemed by all a promising young business. However, the social project suffered from a 90% dropout rate among the youth entering its ranks, extreme monthly fluctuation in the number of youths participating, and the fact that only a small minority of the youth succeeded in completing their training period. Possible explanations of the high dropout rate included the small number of cooks, which impeded the effectiveness of the training process and the provision of advanced cooking skills; lack of clarity regarding the essence and the elements of training; and lack of a meaningful peer group for the youth engaged in the program. Paradoxically, despite the stakeholders’ great appreciation for the social enterprise, the challenge of governability was also formidable, revealing a tangible risk of mission drift in the reduction of the social enterprise’s target population and a breach of the commitment made to the youth with regard to practical training. The risk of mission drift emerged as a hidden and evasive issue for the stakeholders, who revealed a deep appreciation for the management and the outcomes of the social enterprise. The challenge of integration, therefore, requires an in-depth examination of how to maintain a successful business without hindering the achievement of the social goal. The study concludes that clear conceptualization of the training process and its aims, increased cooks’ participation in the social project, and novel conceptions with regard to the evaluation of success could serve to benefit the youth and impede mission drift.Keywords: evaluation study, management, mission drift, social business
Procedia PDF Downloads 11273 ChatGPT Performs at the Level of a Third-Year Orthopaedic Surgery Resident on the Orthopaedic In-training Examination
Authors: Diane Ghanem, Oscar Covarrubias, Michael Raad, Dawn LaPorte, Babar Shafiq
Abstract:
Introduction: Standardized exams have long been considered a cornerstone in measuring cognitive competency and academic achievement. Their fixed nature and predetermined scoring methods offer a consistent yardstick for gauging intellectual acumen across diverse demographics. Consequently, the performance of artificial intelligence (AI) in this context presents a rich, yet unexplored terrain for quantifying AI's understanding of complex cognitive tasks and simulating human-like problem-solving skills. Publicly available AI language models such as ChatGPT have demonstrated utility in text generation and even problem-solving when provided with clear instructions. Amidst this transformative shift, the aim of this study is to assess ChatGPT’s performance on the Orthopaedic In-Training Examination (OITE). Methods: All 213 OITE 2021 web-based questions were retrieved from the AAOS-ResStudy website. Two independent reviewers copied and pasted the questions and response options into ChatGPT Plus (version 4.0) and recorded the generated answers. All media-containing questions were flagged and carefully examined. Twelve OITE media-containing questions that relied purely on images (clinical pictures, radiographs, MRIs, CT scans) and could not be rationalized from the clinical presentation were excluded. Cohen’s Kappa coefficient was used to examine the agreement between reviewers on the ChatGPT-generated responses. Descriptive statistics were used to summarize the performance (% correct) of ChatGPT Plus. The 2021 norm table was used to compare ChatGPT Plus’ performance on the OITE to that of national orthopaedic surgery residents in the same year. Results: A total of 201 questions were evaluated by ChatGPT Plus. Excellent agreement was observed between raters for the 201 ChatGPT-generated responses, with a Cohen’s Kappa coefficient of 0.947. 45.8% (92/201) were media-containing questions. ChatGPT had an average overall score of 61.2% (123/201). Its score was 64.2% (70/109) on non-media questions. When compared to the performance of all national orthopaedic surgery residents in 2021, ChatGPT Plus performed at the level of an average PGY3. Discussion: ChatGPT Plus is able to pass the OITE with a satisfactory overall score of 61.2%, ranking at the level of third-year orthopaedic surgery residents. More importantly, it provided logical reasoning and justifications that may help residents grasp evidence-based information and improve their understanding of OITE cases and general orthopaedic principles. With further improvements, AI language models, such as ChatGPT, may become valuable interactive learning tools in resident education, although further studies are still needed to examine their efficacy and impact on long-term learning and OITE/ABOS performance.Keywords: artificial intelligence, ChatGPT, orthopaedic in-training examination, OITE, orthopedic surgery, standardized testing
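A minimal sketch of the agreement and scoring calculations described above follows; the short answer lists are made-up placeholders, not the study's data, and the recorded answers would in practice cover all 201 evaluated questions.

```python
# Minimal sketch: inter-rater agreement (Cohen's kappa) between the two reviewers'
# recorded ChatGPT answers, plus the percent-correct score against the answer key.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["A", "C", "B", "D", "B", "A", "C", "C"]   # hypothetical recorded answers
reviewer_2 = ["A", "C", "B", "D", "B", "A", "C", "B"]
answer_key = ["A", "C", "B", "D", "A", "A", "C", "C"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Inter-rater agreement (Cohen's kappa): {kappa:.3f}")

correct = sum(r == k for r, k in zip(reviewer_1, answer_key))
print(f"Score: {correct}/{len(answer_key)} = {100 * correct / len(answer_key):.1f}%")
```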
Procedia PDF Downloads 8772 Pentosan Polysulfate Sodium: A Potential Treatment to Improve Bone and Joint Manifestations of Mucopolysaccharidosis I
Authors: Drago Bratkovic, Curtis Gravance, David Ketteridge, Ravi Krishnan, Michael Imperiale
Abstract:
The mucopolysaccharidoses (MPSs) are a group of lysosomal storage diseases that share a common defect in the catabolism of glycosaminoglycans (GAGs). MPS I is the most common of the MPS diseases. Manifestations of MPS I include coarsening of facial features, corneal clouding, developmental delay, short stature, skeletal manifestations, hearing loss, cardiac valve disease, hepatosplenomegaly, and umbilical and inguinal hernias. Treatments for MPS I, namely enzyme replacement therapy (ERT) and haematopoietic stem cell transplantation (HSCT), restore or activate the missing or deficient enzyme. Pentosan polysulfate sodium (PPS) is a potential treatment to improve bone and joint manifestations of MPS I. The mechanisms of action of PPS that are relevant to the treatment of MPS I are the ability to: (i) reduce systemic and accumulated GAG; (ii) reduce inflammatory effects via the inhibition of NF-kB, resulting in a reduction in pro-inflammatory mediators; (iii) reduce the expression of the pain mediator nerve growth factor in osteocytes from degenerating joints; and (iv) inhibit the cartilage-degrading enzymes related to joint dysfunction in MPS I. PPS is being evaluated as an adjunctive therapy to ERT and/or HSCT in an open-label, single-centre, phase 2 study. Patients are ≥ 5 years of age with a diagnosis of MPS I and previously received HSCT and/or ERT. Three white female patients with MPS I-Hurler, aged 14, 15, and 19 years, and one white male patient aged 15 years are enrolled. All were diagnosed at ≤ 2 years of age. All patients received HSCT ≤ 6 months after diagnosis. Two of the patients were treated with ERT prior to HSCT, and 1 patient received ERT commencing 3 months prior to HSCT. Two patients received 0.75 mg/kg and 2 patients received 1.5 mg/kg of PPS. PPS was well tolerated at doses of 0.75 and 1.5 mg/kg through 47 weeks of continuous dosing. Of the 19 adverse events (AEs), 2 were related to PPS: one moderate AE (pre-syncope) and one mild AE (injection site bruising), both experienced by the same patient. All AEs were reported as mild or moderate. There have been no SAEs. One subject experienced a COVID-19 infection and PPS was interrupted. The MPS I signature GAG fragments, sulfated disaccharide and UA-HNAc S, tended to decrease in 3 patients from baseline through Week 25. Week 25 GAG data are pending for the 4th patient. Overall, most biomarkers (inflammatory, cartilage degeneration, and bone turnover) evaluated in the 3 patients with 25-week assessments have indicated either no change or a reduction in levels compared to baseline. In 3 patients, there was a trend toward improvement in the 2MWT from baseline to Week 48, with a > 100% increase in 1 patient (01-201). In the 3 patients that had Week 48 assessments, patients and proxies reported improvement in PGIC, including a “worthwhile difference” (n=1) or “made all the difference” (n=2).Keywords: MPS I, pentosan polysulfate sodium, clinical study, 2MWT, QoL
Procedia PDF Downloads 11071 Cognitive Deficits and Association with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder in 22q11.2 Deletion Syndrome
Authors: Sinead Morrison, Ann Swillen, Therese Van Amelsvoort, Samuel Chawner, Elfi Vergaelen, Michael Owen, Marianne Van Den Bree
Abstract:
22q11.2 Deletion Syndrome (22q11.2DS) is caused by the deletion of approximately 60 genes on chromosome 22 and is associated with high rates of neurodevelopmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD). The presentation of these disorders in 22q11.2DS is reported to be comparable to idiopathic forms and therefore presents a valuable model for understanding mechanisms of neurodevelopmental disorders. Cognitive deficits are thought to be a core feature of neurodevelopmental disorders and possibly manifest in behavioural and emotional problems. There have been mixed findings in 22q11.2DS on whether the presence of ADHD or ASD is associated with greater cognitive deficits. Furthermore, the influence of developmental stage has never been taken into account. The aim was therefore to examine whether the presence of ADHD or ASD was associated with cognitive deficits in childhood and/or adolescence in 22q11.2DS. We conducted the largest study to date of this kind in 22q11.2DS. The same battery of tasks measuring processing speed, attention and spatial working memory was completed by 135 participants with 22q11.2DS. Wechsler IQ tests were completed, yielding Full Scale (FSIQ), Verbal (VIQ) and Performance IQ (PIQ). Age-standardised difference scores were produced for each participant. Developmental stages were defined as children (6-10 years) and adolescents (10-18 years). ADHD diagnosis was ascertained from a semi-structured interview with a parent. ASD status was ascertained from a questionnaire completed by a parent. Main effects and interactions of ADHD or ASD diagnosis and developmental stage (childhood or adolescence) on cognitive performance were tested with 2x2 ANOVAs. Significant interactions were followed up with t-tests of simple effects. Adolescents with ASD displayed greater deficits in all measures (processing speed, p = 0.022; sustained attention, p = 0.016; working memory, p = 0.006) than adolescents without ASD; there was no difference between children with and without ASD. There were no significant differences on IQ measures. Both children and adolescents with ADHD displayed greater deficits on sustained attention (p = 0.002) than those without ADHD. There were no significant differences on any other measures for ADHD. The magnitude of cognitive deficit in individuals with 22q11.2DS varied by cognitive domain, developmental stage and presence of neurodevelopmental disorder. Adolescents with 22q11.2DS and ASD showed greater deficits on all measures, which suggests there may be a sensitive period in childhood for acquiring these domains, or may reflect increasing social and academic demands in adolescence. The finding of poorer sustained attention in children and adolescents with ADHD supports previous research and suggests a specific deficit that can be separated from processing speed and working memory. This research provides unique insights into the association of ASD and ADHD with cognitive deficits in a group at high genomic risk of neurodevelopmental disorders.Keywords: 22q11.2 deletion syndrome, attention deficit hyperactivity disorder, autism spectrum disorder, cognitive development
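The 2x2 ANOVA and simple-effects follow-up described above can be sketched as follows; the data file and column names (sustained_attention_z, asd, stage) are hypothetical and illustrate the design rather than reproducing the study's analysis.

```python
# Sketch of the analysis design only: 2x2 ANOVA of an age-standardised cognitive score
# by ASD status and developmental stage, with simple-effects t-tests as follow-up.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from scipy.stats import ttest_ind

data = pd.read_csv("22q11_cognition.csv")  # hypothetical: sustained_attention_z, asd, stage

model = ols("sustained_attention_z ~ C(asd) * C(stage)", data=data).fit()
print(anova_lm(model, typ=2))              # main effects and the ASD x stage interaction

# If the interaction is significant, test the simple effect of ASD within adolescents
adolescents = data[data["stage"] == "adolescent"]
with_asd = adolescents.loc[adolescents["asd"] == 1, "sustained_attention_z"]
without_asd = adolescents.loc[adolescents["asd"] == 0, "sustained_attention_z"]
print(ttest_ind(with_asd, without_asd))
```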
Procedia PDF Downloads 14870 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism
Authors: Lubos Rojka
Abstract:
The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are not determined by natural events, but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person’s morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories on moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. The compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be situated within the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The ability of theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among them an effective free will, together with first- and second-order desires. Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory and emergentism. It is incompatible with physical causal determinism, because such determinism only allows non-systematic processes that may be hard to predict, but not complex (strongly) emergent systems. An agent’s effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the best direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.Keywords: consciousness, free will, determinism, emergence, moral responsibility
Procedia PDF Downloads 16469 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets or autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, provide a lot of optimization potential. Organization and pre-planning ensure that these efficiency reserves become accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery in particular requires precise specifications of infield logistics in order to be navigated effectively and in a process-optimized way in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture. Rural mixed farms with simple technical equipment are considered, as well as large agricultural cooperatives which farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines. As a consequence, they can avoid inefficient boom flipping while applying plant protection with the sprayer. Livestock farmers focus rather on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain or field access points. Cultivation of sugar beets makes great demands on infield patterns because of its particularities, such as the row crop system or high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether or not they are of the same type. Specific infield strategies are always based on interactions of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection or harvest usually cannot be considered individually. The entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the obtained influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
Procedia PDF Downloads 232