Search results for: R statistical software
146 Multiple Plant-Based Cell Suspension as a Bio-Ink for 3D Bioprinting Applications in Food Technology
Authors: Yusuf Hesham Mohamed
Abstract:
Introduction: Three-dimensional printing technology includes multiple procedures that fabricate three-dimensional objects by consecutively layering two-dimensional cross-sections on top of each other. 3D bioprinting is a promising field of 3D printing, which fabricates tissues and organs by accurately controlling the proper arrangement of diverse biological components. 3D bioprinting uses software and prints biological materials and their supporting components layer by layer on a substrate or in a tissue culture plate to produce complex live tissues and organs. 3D food printing is an emerging field of 3D bioprinting in which the 3D printed products are food products that are cheap, require less effort to produce, and have more desirable traits. The aim of the study was to develop an affordable 3D bioprinter by altering a locally made CNC instrument with an open-source platform to suit 3D bioprinting purposes. The prototype was then applied in several food-technology and drug-testing applications, including organ-on-chip. Materials and Methods: An off-the-shelf 3D printer was modified by designing and fabricating the syringe unit, which was designed on the basis of a millifluidics system. Sodium alginate and gelatin hydrogels were prepared, followed by preparation of a leaf cell suspension from narrow sections of viable Fragaria leaves. The desired 3D structure was modeled, and 3D printing preparations took place. Cell-free and cell-laden hydrogels were printed at room temperature under sterile conditions. A post-printing curing process was performed, and the printed structure was further studied. Results: Positive results were achieved using the altered 3D bioprinter: a two-layer 3D hydrogel construct made of a combination of sodium alginate and gelatin (15%:0.5%) was printed.
A DLP 3D printer with a transparent PLA-Pro resin was used to fabricate the syringe component, creating a microfluidics system with two channels adapted to the double extruder. The hydrogel extruder's design was based on peristaltic pumps driven by a stepper motor. The design and fabrication were carried out using DIY 3D-printed parts, with hard PLA plastic as the printing material. SEM was used to image the porous 3D construct. Multiple physical and chemical tests were performed in order to ensure that the cell line was suitable for hosting. A Fragaria construct was developed by printing a suspension of cells from its leaves using the 3D bioprinter. Conclusion: 3D bioprinting is an emerging scientific field that can facilitate and improve many scientific tests and studies; thus, having a 3D bioprinter in labs is an essential requirement. 3D bioprinters are very expensive; however, converting a 3D printer into a 3D bioprinter can lower the cost. The implemented 3D bioprinter used peristaltic pumps instead of syringe-based pumps in order to extend its ability to print multiple types of materials and cells.
Keywords: scaffold, eco on chip, 3D bioprinter, DLP printer
Procedia PDF Downloads 119
145 Endometrial Ablation and Resection Versus Hysterectomy for Heavy Menstrual Bleeding: A Systematic Review and Meta-Analysis of Effectiveness and Complications
Authors: Iliana Georganta, Clare Deehan, Marysia Thomson, Miriam McDonald, Kerrie McNulty, Anna Strachan, Elizabeth Anderson, Alyaa Mostafa
Abstract:
Context: A meta-analysis of randomized controlled trials (RCTs) comparing hysterectomy versus endometrial ablation and resection in the management of heavy menstrual bleeding (HMB). Objective: To evaluate the clinical efficacy, satisfaction rates, and adverse events of hysterectomy compared to more minimally invasive techniques in the treatment of HMB. Evidence Acquisition: A literature search was performed for all RCTs and quasi-RCTs comparing hysterectomy with endometrial ablation, endometrial resection, or both. The search had no language restrictions and was last updated in June 2020 using MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, PubMed, Google Scholar, PsycINFO, ClinicalTrials.gov and the EU Clinical Trials Register. In addition, a manual search of the abstract databases of the European Haemophilia Conference on Women's Health was performed, and further studies were identified from the references of acquired papers. The primary outcomes were patient-reported and objective reduction in heavy menstrual bleeding up to 2 years and after 2 years. Secondary outcomes included satisfaction rates, pain, short- and long-term adverse events, quality of life and sexual function, further surgery, duration of surgery and hospital stay, and time to return to work and normal activities. Data were analysed using RevMan software. Evidence Synthesis: 12 studies with a total of 2028 women were included (hysterectomy: n = 977 women vs endometrial ablation or resection: n = 1051 women). Hysterectomy was compared with endometrial ablation only in five studies (Lin, Dickersin, Sesti, Jain, Cooper), with endometrial resection only in five studies (Gannon, Schulpher, O'Connor, Crosignani, Zupi), and with a mixture of ablation and resection in two studies (Elmantwe, Pinion). Of the 12 studies, 10 reported women's perception of bleeding symptoms as improved.
Meta-analysis showed that women in the hysterectomy group were more likely to show improvement in bleeding symptoms than those treated with endometrial ablation or resection up to 2-year follow-up (RR 0.75, 95% CI 0.71 to 0.79, I² = 95%). Objective outcomes of improvement in bleeding also favoured hysterectomy. Patient satisfaction was higher after hysterectomy within the 2-year follow-up (RR 0.90, 95% CI 0.86 to 0.94, I² = 58%); however, there was no significant difference between the two groups at more than 2 years of follow-up. Sepsis (RR 0.03, 95% CI 0.002 to 0.56; 1 study), wound infection (RR 0.05, 95% CI 0.01 to 0.28, I² = 0%, 3 studies) and urinary tract infection (UTI) (RR 0.20, 95% CI 0.10 to 0.42, I² = 0%, 4 studies) all favoured hysteroscopic techniques. Fluid overload (RR 7.80, 95% CI 2.16 to 28.16, I² = 0%, 4 studies) and perforation (RR 5.42, 95% CI 1.25 to 23.45, I² = 0%, 4 studies), however, favoured hysterectomy in the short term. Conclusions: This meta-analysis has demonstrated that endometrial ablation and endometrial resection are both viable options compared with hysterectomy for the treatment of heavy menstrual bleeding. Hysteroscopic procedures had better outcomes in the short term, with fewer adverse events including wound infection, UTI and sepsis. Hysterectomy performed better on longer-term measures such as recurrence of symptoms, overall satisfaction at two years, and the need for further treatment or surgery.
Keywords: menorrhagia, hysterectomy, ablation, resection
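The pooled risk ratios above come from RevMan; as a rough illustration of the underlying fixed-effect inverse-variance pooling of log risk ratios, here is a minimal Python sketch (the 2x2 counts are hypothetical, not the review's data):

```python
import math

def pooled_rr(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of log risk ratios.

    Each study is (events_a, n_a, events_b, n_b); returns the pooled RR
    with its 95% confidence interval on the exponentiated scale.
    """
    num = den = 0.0
    for ea, na, eb, nb in studies:
        log_rr = math.log((ea / na) / (eb / nb))
        var = 1/ea - 1/na + 1/eb - 1/nb   # variance of log RR
        num += log_rr / var               # weight = 1 / variance
        den += 1 / var
    pooled = num / den
    se = math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))

# Hypothetical counts: (improved, total) in hysterectomy vs ablation arms.
studies = [(80, 100, 90, 100), (70, 95, 85, 98), (60, 90, 80, 95)]
rr, lo, hi = pooled_rr(studies)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

RevMan also offers Mantel-Haenszel and random-effects pooling; the inverse-variance fixed-effect form above is the simplest to sketch.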
Procedia PDF Downloads 155
144 Comparative Study on Effectiveness and Safety of Oral Antidiabetic Medications in Patients with Type 2 Diabetes Mellitus in a Tertiary Care Hospital of Bangalore, South India - A Prospective Cohort Study
Authors: Shobha Rani R. Hiremath, Keerthana R., Madhuvan H. S.
Abstract:
BACKGROUND: Type 2 diabetes is a chronic health condition in which the body cannot effectively use the insulin it produces, leading to elevated blood sugar levels. It is often associated with lifestyle factors and insulin resistance. Globally, diabetes is on the rise, with urban areas like Bangalore seeing a surge due to lifestyle changes, stress, and dietary habits. To manage diabetes effectively, over 50 medications are available, each regulating blood sugar through a different mechanism, reflecting the complex and individualized nature of diabetes treatment. Given the increase in medications for type 2 diabetes mellitus, it is important to evaluate their effectiveness and safety so that clinicians can make informed choices while treating their patients. OBJECTIVES: To evaluate the effectiveness of various antidiabetic medications used in the hospital in type 2 diabetes patients by monitoring their HbA1c levels, and to assess the safety of these medications by monitoring adverse drug reactions (ADRs), if any. METHODOLOGY: Design: prospective cohort study. Study period: 8 months. Sample size: 100 patients. Inclusion criteria: patients of both genders above 18 years of age who were diagnosed with T2DM and prescribed oral hypoglycaemic agents (OHAs). Exclusion criteria: diabetic patients with hepatic/renal failure; patients prescribed insulin or not prescribed OHAs; and patients who were terminally ill, pregnant, or lactating. Source of data: prescriptions, lab reports, and ECG reports. Data collection forms captured patient demographic details, drugs prescribed, laboratory investigations such as HbA1c, FBS, PPBS and ECG, and any ADRs experienced. Data were collected at baseline, 3 months, and 6 months, and statistically analyzed using SPSS (version 26) software. RESULTS: The largest proportion of patients (46%) was in the age group above 61 years.
43 patients had no co-morbidities, whereas 51 patients had hypertension as a co-morbidity. Five drug combinations were prescribed, as follows. Combination 1: tablet metformin HCl + glimepiride (500, 2 mg), 1-0-1. Combination 2: tablet sitagliptin + metformin HCl (50, 500 mg), 1-0-1. Combination 3: tablet vildagliptin + metformin HCl (50, 500 mg), 1-0-1. Combination 4: tablet voglibose + glimepiride + metformin HCl (0.2, 2, 500 mg), 1-0-1. Combination 5: tablet voglibose + glimepiride + metformin HCl (0.2, 2, 500 mg), 1-0-1, plus tablet sitagliptin + metformin HCl (50, 500 mg), 0-1-0. Combination 5 (voglibose, glimepiride, metformin, sitagliptin) was most effective in reducing HbA1c levels, showing a significant decrease (-0.00682, p = 0.001), followed by Combinations 4 and 3. However, Combination 5 also had the highest incidence of gastrointestinal side effects (72.7%) and ECG abnormalities (27.3%). Combination 1 (metformin + glimepiride) had the highest occurrence of hypoglycemia, owing to glimepiride's insulin-stimulating effect. Weight loss was most notable in Combination 5, affecting 36.36% of patients. CONCLUSION: The enhanced effectiveness of Combinations 3, 4, and 5 suggests that a multi-drug approach incorporating different mechanisms of action is more effective in managing HbA1c levels in patients with diabetes. Adverse effect profiles should be considered for personalized treatment strategies.
Keywords: type 2 diabetes, safety, oral antidiabetic medications, effectiveness
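The within-group HbA1c change reported above can be tested with a paired t-test of baseline versus follow-up values. A minimal sketch in plain Python (the study itself used SPSS, and the HbA1c values below are hypothetical):

```python
from statistics import mean, stdev
import math

def paired_t(baseline, followup):
    """Paired t statistic for before/after measurements on the same patients."""
    diffs = [b - f for b, f in zip(baseline, followup)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical HbA1c (%) for one combination at baseline and 6 months.
baseline = [8.4, 9.1, 7.9, 8.8, 9.5, 8.2, 8.9, 9.0]
followup = [7.6, 8.3, 7.5, 8.1, 8.6, 7.8, 8.2, 8.4]
t, df = paired_t(baseline, followup)
print(f"t = {t:.2f} on {df} df")
```

The t value is then compared against the t distribution with n-1 degrees of freedom to obtain a p-value (SPSS reports this directly).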
Procedia PDF Downloads 91
143 Numerical Modeling of Timber Structures under Varying Humidity Conditions
Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan
Abstract:
Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response for this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and consequently shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for the behavior of wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. The variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis.
In the mechanical analysis, the total strain is assumed to be the sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to experimental results. The calibration is made to experiments carried out on wooden blocks subjected to uniaxial compressive load in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections, subjected to constant load and varying humidity is possible.
Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber
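The strain decomposition described above can be sketched as a single explicit increment. All parameter values below are illustrative placeholders, not the calibrated parameters from the study, and the viscoelastic term is omitted for brevity:

```python
def strain_increment(sigma, E, m_ms, alpha, du):
    """One explicit step of the strain decomposition:
    total = elastic + mechano-sorptive + shrinkage/swelling.

    sigma: stress (MPa), E: elastic modulus (MPa),
    m_ms: mechano-sorptive modulus (MPa), alpha: swelling coefficient,
    du: moisture content change in this step.
    """
    eps_el = sigma / E                 # elastic strain at this stress level
    eps_ms = (sigma / m_ms) * abs(du)  # mechano-sorptive increment grows with |du|
    eps_sw = alpha * du                # shrinkage (du < 0) or swelling (du > 0)
    return eps_el, eps_ms, eps_sw

# Illustrative values: 1 MPa stress, tangential E = 500 MPa,
# mechano-sorptive modulus 50 MPa, swelling coefficient 0.2, drying step du = -0.05.
eps_el, eps_ms, eps_sw = strain_increment(1.0, 500.0, 50.0, 0.2, du=-0.05)
print(eps_el, eps_ms, eps_sw)
```

Note the key qualitative feature: the mechano-sorptive term accumulates with every moisture *change* regardless of sign, while shrinkage/swelling follows the sign of du.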
Procedia PDF Downloads 245
142 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. 
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and widening the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This iterative process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D-concrete-printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization
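The resonance the optimization targets follows the classical lumped-element Helmholtz formula, f0 = c/(2*pi) * sqrt(A / (V * L_eff)). A sketch with illustrative bottleneck/cavity dimensions (not the paper's geometry; the end-correction factor assumes flanged neck openings):

```python
import math

def helmholtz_f0(c, neck_radius, neck_length, cavity_volume):
    """Lumped-element resonance frequency of a Helmholtz resonator.

    c: speed of sound (m/s); dimensions in m, volume in m^3.
    """
    area = math.pi * neck_radius**2            # neck cross-section A
    l_eff = neck_length + 1.7 * neck_radius    # end correction, flanged openings
    return c / (2 * math.pi) * math.sqrt(area / (cavity_volume * l_eff))

# Illustrative dimensions: 1 cm neck radius, 5 cm neck, 1 litre cavity.
f0 = helmholtz_f0(c=343.0, neck_radius=0.01, neck_length=0.05, cavity_volume=1e-3)
print(f"f0 = {f0:.1f} Hz")
```

Enlarging the cavity or lengthening the neck lowers f0, which is exactly the kind of relationship the numerical optimizer exploits when tuning bottleneck and cavity dimensions.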
Procedia PDF Downloads 157
141 Sustainable Strategies for Managing Rural Tourism in Abyaneh Village, Isfahan
Authors: Hoda Manafian, Stephen Holland
Abstract:
Problem statement: Rural areas in Iran are among the most popular tourism destinations. Abyaneh Village is one of them, with a history of more than 1,500 years; it is a national heritage site and has been on the UNESCO World Heritage tentative list since 2007. There is a considerable foundation of religious-cultural heritage as well as agricultural history and activities. However, this heritage site suffers from mass tourism beyond its social and physical carrying capacity, since the annual number of tourists exceeds 500,000. Meanwhile, there are four adjacent villages around Abyaneh that could benefit from tourism, and local managers could redistribute Abyaneh's tourist flow to them, especially in the high season. The other villages have cultural and natural tourism attractions as well. Goal: The main goal of this study is to identify a feasible development strategy according to the current strengths, weaknesses, opportunities and threats of rural tourism in this area (Abyaneh Village and the four adjacent villages). This development strategy can lead to sustainable management of these destinations. Method: To this end, we used SWOT analysis as a well-established tool for conducting a situational analysis to define a sustainable development strategy. The procedure included the following steps: 1) Extracting the variables of the SWOT chart based on interviews with tourism experts (n=13) and local elites (n=17) and the personal observations of the researcher. 2) Ranking the extracted variables from 1-5 by 13 tourism experts in the Isfahan Cultural Heritage, Handcrafts and Tourism Organization (ICHTO). 3) Assigning weights to the ranked variables using Expert Choice software and the Analytical Hierarchy Process (AHP). 4) Defining the Total Weighted Score (TWS) for each part of the SWOT chart.
5) Identifying the strategic position according to the TWS. 6) Selecting the best development strategy for the defined position using the Strategic Position and Action Evaluation (SPACE) matrix. 7) Assessing the Probability of Strategic Success (PSS) for the preferred strategy using the relevant formulas. 8) Defining two feasible alternatives for sustainable development. Results and recommendations: Cultural heritage attractions were the first-ranked variable in the strengths chart, and the lack of sufficient amenities for one-day tourists (catering, restrooms, parking, and accommodation) was the first-ranked weakness. The strategic position was in the ST (Strength-Threat) quadrant, which is a maxi-mini position. Given this position, we would suggest a 'competitive strategy' as the development strategy, which means relying on strengths in order to neutralize threats. The Probability of Strategic Success was assessed at 0.6, which shows that this strategy could be successful. The preferred approach for the competitive strategy could be rebranding the tourism market in this area. Rebranding the market can be achieved through two main alternatives, based on the current strengths and threats: 1) defining a 'heritage corridor' from the first adjacent village to Abyaneh as the final destination; 2) focusing on 'educational tourism' rather than mass tourism, and also on green tourism, by developing agritourism in that corridor.
Keywords: Abyaneh village, rural tourism, SWOT analysis, sustainable strategies
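The Total Weighted Score in step 4 is simply the sum, over each SWOT quadrant's variables, of the AHP weight times the expert rating. A minimal sketch with hypothetical weights and 1-5 ratings (the study's actual weights came from Expert Choice):

```python
def total_weighted_score(items):
    """Total Weighted Score for one SWOT quadrant:
    sum of (AHP weight x expert rating) over its variables."""
    return sum(weight * rating for weight, rating in items)

# Hypothetical (weight, rating) pairs; weights sum to 1 within each quadrant.
strengths = [(0.40, 5), (0.35, 4), (0.25, 3)]  # e.g. heritage attractions ranked top
threats   = [(0.50, 4), (0.30, 3), (0.20, 3)]
s_tws = total_weighted_score(strengths)
t_tws = total_weighted_score(threats)
print(s_tws, t_tws)
```

Comparing the quadrant scores (here strengths outweigh threats) is what places the analysis in a SPACE-matrix quadrant such as ST (maxi-mini).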
Procedia PDF Downloads 384
140 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures
Authors: Irfan Anjum Manarvi, Fawzi Aljassir
Abstract:
Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately develop processes and tools either to prevent fracture or to recover from its damage. The literature shows that mechanical professionals conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and the limited availability of human bones. They proposed the need for further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of designs and materials. Over time, however, it has found applications in many other industries due to its accuracy and the flexibility in selecting materials and types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to apply it. However, the work done on tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research focuses on using this technique for the analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used for accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to create these computer geometric models.
These were then imported into finite element analysis software, and a refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by the analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads or their combination. Results were collected for deformations along various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail earlier; this comparison is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as industrial engineers, for the development of foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out analysis of a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis
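One common way an FEA post-process flags candidate fracture locations is to compare an equivalent stress per element against a failure limit. A sketch using the von Mises criterion (a simplification here, since bone is anisotropic, and all numbers are hypothetical):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six stress-tensor components."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

def critical_elements(stresses, limit):
    """Indices of elements whose equivalent stress exceeds the failure limit."""
    return [i for i, s in enumerate(stresses) if von_mises(*s) > limit]

# Hypothetical element stress states (MPa) and a nominal bone strength limit.
elements = [(60, 10, 5, 8, 2, 1), (150, 20, 10, 30, 5, 2), (40, 35, 30, 3, 2, 1)]
print(critical_elements(elements, limit=120.0))
```

In a real bone study an anisotropic criterion (e.g. Tsai-Wu) and direction-dependent strengths would be more appropriate, but the flag-and-locate workflow is the same.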
Procedia PDF Downloads 329
139 A Multivariate Exploratory Data Analysis of a Crisis Text Messaging Service in Order to Analyse the Impact of the COVID-19 Pandemic on Mental Health in Ireland
Authors: Hamda Ajmal, Karen Young, Ruth Melia, John Bogue, Mary O'Sullivan, Jim Duggan, Hannah Wood
Abstract:
The Covid-19 pandemic led to a range of public health mitigation strategies in order to suppress the SARS-CoV-2 virus. The drastic changes in everyday life due to lockdowns had the potential for a significant negative impact on public mental health, and a key public health goal is now to assess the evidence from available Irish datasets to provide useful insights on this issue. Text-50808 is an online text-based mental health support service established in Ireland in 2020, and it can provide a measure of revealed distress and mental health concerns across the population. The aim of this study is to explore statistical associations between public mental health in Ireland and the Covid-19 pandemic. Uniquely, this study combines two measures of emotional wellbeing in Ireland: (1) weekly text volume at Text-50808, and (2) emotional wellbeing indicators reported by respondents of the Amárach public opinion survey, carried out on behalf of the Department of Health, Ireland. For this analysis, a multivariate graphical exploratory data analysis (EDA) was performed on the Text-50808 dataset dated from 15th June 2020 to 30th June 2021. This was followed by time-series analysis of key mental health indicators, including: (1) the percentage of daily/weekly texts at Text-50808 that mention Covid-19 related issues; (2) the weekly percentage of people experiencing anxiety, boredom, enjoyment, happiness, worry, fear and stress in the Amárach survey; and Covid-19 related factors: (3) daily new Covid-19 case numbers; and (4) the daily stringency index capturing the effect of government non-pharmaceutical interventions (NPIs) in Ireland. The cross-correlation function was applied to measure the relationship between the different time series. EDA of the Text-50808 dataset reveals significant peaks in the volume of texts on the days prior to the level 3 lockdown and level 5 lockdown in October 2020, and the full level 5 lockdown in December 2020.
A significantly high positive correlation was observed between the percentage of texts at Text-50808 that reported Covid-19 related issues and the percentage of respondents experiencing anxiety, worry and boredom (at a lag of 1 week) in the Amárach survey data. There is a significant negative correlation between the percentage of texts with Covid-19 related issues and the percentage of respondents experiencing happiness in the Amárach survey. The daily percentage of texts at Text-50808 that reported Covid-19 related issues had a weak positive correlation with daily new Covid-19 cases in Ireland at a lag of 10 days and with the daily stringency index of NPIs in Ireland at a lag of 2 days. The sudden peaks in text volume at Text-50808 immediately prior to new restrictions in Ireland indicate a rise in mental health concerns following the announcement of new restrictions. There is also a high correlation between the emotional wellbeing variables in the Amárach dataset and the number of weekly texts at Text-50808, which confirms that Text-50808 reflects overall public sentiment. This analysis confirms the benefits of the texting service as a community surveillance tool for mental health in the population. This initial EDA will be extended to use multivariate modeling to predict the effect of additional Covid-19 related factors on public mental health in Ireland.
Keywords: COVID-19 pandemic, data analysis, digital health, mental health, public health
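Cross-correlation at a lag, as used above, is the Pearson correlation between one series and a shifted copy of the other; the lag with the highest correlation suggests how far one signal leads the other. A minimal sketch on toy weekly data (not the study's series):

```python
from statistics import mean, stdev

def cross_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

# Toy weekly series: y echoes x one step later, so the peak is at lag 1.
x = [1, 3, 2, 5, 4, 6, 5, 7]
y = [0, 1, 3, 2, 5, 4, 6, 5]
best_lag = max(range(-2, 3), key=lambda l: cross_correlation(x, y, l))
print(best_lag)
```

In practice the CCF is computed over many lags at once (e.g. `statsmodels.tsa.stattools.ccf`), with significance bounds around zero; this sketch shows only the core calculation.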
Procedia PDF Downloads 142
138 Dysphagia Tele-Assessment Challenges Faced by Speech and Swallow Pathologists in India: A Questionnaire Study
Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu
Abstract:
Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to providing clinical services during the COVID-19 epidemic; as a result, tele-assessment of dysphagia has increased in India. Aim: This study aimed to identify the challenges faced by Indian SLPs while providing tele-assessment to individuals with dysphagia during the outbreak of COVID-19, from 2020 to 2021. Method: The current study was carried out after receiving approval from the institutional review board and ethics committee. The study was cross-sectional in nature, lasted from 2020 to 2021, and enrolled participants who met its inclusion and exclusion criteria. Based on sample size calculations, roughly 246 people were to be recruited. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals content-validated the questionnaire for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via social media platforms such as e-mail and WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-assessment of dysphagia were highlighted, beginning with the oromotor examination.
Around 37.5% of SLPs said they do not undertake the oromotor examination (OPME) online because of difficulties performing the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessments inefficient and time-consuming, and a larger percentage stated that they would not recommend tele-assessment of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the epidemic. When it came to the amount of food for the swallow trials, the majority proposed a small amount. Apart from positioning the patient for assessment and obtaining less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy proved to be the most difficult conditions to manage online in patients with dysphagia. For patients on NPO status, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration by coughing, many found it difficult to discern a gurgling vocal quality after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain acceptance in India.
Keywords: dysphagia, teleassessment, challenges, Indian SLP
Procedia PDF Downloads 136
137 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study
Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard
Abstract:
The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study on the implementation of a model for measuring sustainability to address policy actions for the improvement of sustainability at the territory level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure implemented as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria).
The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually, and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The separate indices and the graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate sustainability index as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to develop a "back analysis" able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent. Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development
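The TOPSIS ranking step described above can be sketched as follows. This is a minimal, generic implementation of the textbook algorithm, not GeoUmbriaSUIT's actual code; the decision matrix, weights, and benefit/cost flags are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal point and distance
    from the worst point, as in TOPSIS.
    matrix: alternatives x criteria; benefit[j] True if higher is better."""
    m = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector-normalize each criterion
    v = m * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to ideal point
    d_neg = np.sqrt(((v - worst) ** 2).sum(axis=1))   # distance to worst point
    return d_neg / (d_pos + d_neg)                    # closeness coefficient in [0, 1]

# hypothetical evaluation units scored on three indicators
# (third indicator treated as a cost, e.g. emissions)
scores = np.array([[0.6, 0.8, 0.3],
                   [0.9, 0.4, 0.5],
                   [0.2, 0.7, 0.9]])
closeness = topsis(scores, np.array([0.5, 0.3, 0.2]),
                   np.array([True, True, False]))
ranking = np.argsort(-closeness)  # best alternative first
```

Run separately per dimension (environmental, economic, social indicators), this yields the three non-aggregated indices the plugin reports.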
Procedia PDF Downloads 289
136 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil
Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes
Abstract:
Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze soil properties with minimal impact on the soil's initial structure. Statistical analysis and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and couples a hydrological model, based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordão with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of São Paulo State in the Mantiqueira Mountains and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height), as well as rainfall duration and intensity, were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivity. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculated the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides in urban and natural slopes. The geotechnical and geophysical methods proved very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas using TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes. Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey
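The limit-equilibrium step that TRIGRS applies at each grid cell can be sketched with the infinite-slope factor of safety under a transient pressure head. This is a simplified stand-in for the model's stability module, following the commonly cited infinite-slope form; all parameter values below are hypothetical, not the Campos do Jordão soil data.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, gamma_s, z, psi, gamma_w=9.81):
    """Infinite-slope factor of safety with transient pressure head psi:
        FS = tan(phi)/tan(delta)
             + (c - psi*gamma_w*tan(phi)) / (gamma_s * z * sin(delta) * cos(delta))
    c: effective cohesion [kPa], phi_deg: internal friction angle [deg],
    slope_deg: slope angle [deg], gamma_s: soil unit weight [kN/m^3],
    z: depth below the surface [m], psi: pressure head [m]."""
    phi = math.radians(phi_deg)
    delta = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(delta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(delta) * math.cos(delta))
    return frictional + cohesive

# hypothetical steep slope: rainfall infiltration raises the pressure head,
# driving FS from stable (> 1) toward failure (< 1), as observed in the study
fs_dry = factor_of_safety(c=8.0, phi_deg=30.0, slope_deg=35.0,
                          gamma_s=18.0, z=2.0, psi=0.0)
fs_wet = factor_of_safety(c=8.0, phi_deg=30.0, slope_deg=35.0,
                          gamma_s=18.0, z=2.0, psi=1.5)
```

The Richards-equation infiltration module supplies psi(z, t) for each time step; the sketch above only shows why a rising pressure head lowers FS.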
Procedia PDF Downloads 173
135 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-1980s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options.
The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this need, this paper reviews recent articles on DSS. The literature has been reviewed in depth, and by classifying previous studies according to their preferences, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments in the subject, this study aims to analyze future trends in decision support systems. Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 279
134 Entrepreneurial Venture Creation through Anchor Event Activities: Pop-Up Stores as On-Site Arenas
Authors: Birgit A. A. Solem, Kristin Bentsen
Abstract:
Scholarly attention in entrepreneurship is currently directed towards understanding entrepreneurial venture creation as a process: the journey of new economic activities from nonexistence to existence, often studied through flow or network models. To complement existing research on entrepreneurial venture creation with more interactivity-based research on organized activities, this study examines two pop-up stores as anchor events involving the on-site activities of fifteen participating entrepreneurs launching their new ventures. The pop-up stores were arranged in two mid-sized Norwegian cities and contained different brand stores that brought together actors of sub-networks and communities executing venture creation activities. The pop-up stores became on-site arenas for the entrepreneurs to create, maintain, and rejuvenate their networks, while at the same time becoming venues for the temporal coordination of activities involving existing and potential customers in their venture creation. In this work, we apply a conceptual framework based on frequently addressed dilemmas within entrepreneurship theory (discovery/creation, causation/effectuation) to shed further light on the broad range of on-site anchor event activities and their venture creation outcomes. The dilemma-based concepts are applied as an analytic toolkit to pursue answers regarding the nature of anchor event activities typically found within entrepreneurial venture creation and how these anchor event activities affect entrepreneurial venture creation outcomes. Our study combines researcher participation with 200 hours of observation and twenty in-depth interviews. Data analysis followed established guidelines for hermeneutic analysis and was intimately intertwined with ongoing data collection. Data were coded and categorized in NVivo 12 software and iterated several times as patterns steadily developed.
Our findings suggest that the core anchor event activities typically found within entrepreneurial venture creation are: concept and product experimentation with visitors, arrangements to socialize (evening specials, auctions, and exhibitions), store-in-store concepts, arranged meeting places for peers, and close connection with the municipality and property owners. Further, this work points to four main entrepreneurial venture creation outcomes derived from the core anchor event activities: (1) venture attention, (2) venture idea-realization, (3) venture collaboration, and (4) venture extension. Our findings show that the outcomes vary depending on which anchor event activities are applied. Theoretically, this study offers two main implications. First, anchor event activities are both discovered and created, following the logic of causation, while at the same time being experimental, based on the "learning by doing" principles of effectuation during execution. Second, our research enriches prior studies on venture creation as a process. In this work, entrepreneurial venture creation activities and outcomes are understood through pop-up stores as on-site anchor event arenas, particularly suitable for the interactivity-based research requested by the entrepreneurship field. This study also reveals important managerial implications, such as that entrepreneurs should allow themselves to find creative physical venture creation arenas (e.g., pop-up stores, showrooms), as well as collaborate with partners when discovering and creating concepts and activities based on new ideas. In this way, they allow themselves both to strategically plan for and to continually experiment with their venture. Keywords: anchor event, interactivity-based research, pop-up store, entrepreneurial venture creation
Procedia PDF Downloads 91
133 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor
Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez
Abstract:
Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent software. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds depending on the microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation incorporates the rheological results and models the full-scale CSTR biodigester, coupling the second-order continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models, k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° tilted-blade impeller. The simulation runs for three minutes, since an intermittent mixing regime with energy savings is of interest. The results show that the absolute errors of the power number associated with the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The results for the generalized Reynolds number show that the fluid dynamics lie in a transition-turbulent flow regime. Concerning the Froude number, the result indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5.
The design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° inclined PBT impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If 24/7 complete mixing under stirred agitation is considered, with a plant factor of 80%, an energy consumption of 51,840 kWh/year is estimated. On the contrary, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. This approach is a feasible way to predict the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase. High-intensity mixing at the beginning activates the bacteria, and a further increase in agitation at the end of the Hydraulic Retention Time favors the final dispersion of biogas that may be trapped at the biodigester bottom. Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste
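The generalized Reynolds number for a pseudoplastic (power-law) fluid, and the energy comparison between continuous and intermittent mixing, can be sketched as follows. The impeller values and the Metzner-Otto constant ks ≈ 11 are assumptions for illustration, not the study's actual reactor data; only the 51,840 kWh/year figure, the 80% plant factor, and the 3-min-per-15-min schedule come from the abstract.

```python
def metzner_otto_reynolds(rho, N, D, K, n, ks=11.0):
    """Generalized impeller Reynolds number for a power-law fluid via the
    Metzner-Otto approach: effective shear rate = ks*N, apparent viscosity
    mu = K*(ks*N)**(n-1), Re = rho*N*D**2/mu.
    rho [kg/m^3], N [rev/s], D impeller diameter [m], K consistency index
    [Pa.s^n], n < 1 flow-behavior index for a pseudoplastic substrate."""
    mu_app = K * (ks * N) ** (n - 1.0)
    return rho * N * D ** 2 / mu_app

# hypothetical impeller operating point
re_gen = metzner_otto_reynolds(rho=1000.0, N=1.0, D=0.5, K=0.5, n=0.4)

# energy comparison reported in the abstract: continuous 24/7 mixing at an
# 80% plant factor vs. intermittent mixing 3 min out of every 15 min
hours_per_year = 365 * 24 * 0.80            # plant factor of 80%
power_kw = 51840 / hours_per_year           # implied mean power draw (~7.4 kW)
continuous_kwh = power_kw * hours_per_year  # 51,840 kWh/year, as reported
intermittent_kwh = continuous_kwh * (3 / 15)    # mixer on 1/5 of the time
saving = 1 - intermittent_kwh / continuous_kwh  # ~0.80, i.e. almost 80% less
```

The duty-cycle arithmetic shows why the reported saving is "almost 80%": running the agitator 3 minutes in every 15 is a 20% duty cycle.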
Procedia PDF Downloads 114
132 MEIOSIS: Museum Specimens Shed Light on Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial in ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variation in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third 'universal' response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationships between species' body size, species traits, environmental factors and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyze trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to correlate strongly with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets.
A second dataset was generated with a custom computer vision tool implemented for the automated morphological measurement of samples from the digitized collection in Zürich. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used in the analysis. Setting time as a smooth term, species identity as a random factor and the length of the right wing (as a proxy for body size) as the response variable, we ran a global model for a maximum period of 170 years (1840-2010). We also constructed individual models for each family (Pieridae, Lycaenidae, Hesperiidae, Nymphalidae, Papilionidae). All models confirmed our initial hypothesis, showing a decreasing trend in wing length over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately. Keywords: butterflies, shrinking body size, museum specimens, climate change
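The modeling logic above (a time trend in wing length with species-level baselines absorbed by a random factor) can be illustrated with a deliberately simplified stand-in: the study fits smooth-term models with species as a random effect, whereas this sketch only centers simulated measurements within species groups and fits a linear trend. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic museum records: collection year and right-forewing length [mm],
# three hypothetical species baselines and a weak negative trend over time
years = rng.integers(1840, 2011, size=500)
species_baseline = rng.choice([14.0, 18.0, 22.0], size=500)
lengths = species_baseline - 0.005 * (years - 1840) + rng.normal(0.0, 0.5, 500)

# center lengths within each species group to mimic a random intercept,
# then fit a pooled linear trend in year
centered = lengths.copy()
for base in np.unique(species_baseline):
    mask = species_baseline == base
    centered[mask] -= centered[mask].mean()
slope, intercept = np.polyfit(years - 1840, centered, 1)
# slope < 0: wing length (the body-size proxy) declines over the years
```

Centering within species prevents the pooled trend from being driven by which species happen to dominate each era, which is the same confound the random-effect structure addresses in the real analysis.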
Procedia PDF Downloads 10
131 Development of PCL/Chitosan Core-Shell Electrospun Structures
Authors: Hilal T. Sasmazel, Seda Surucu
Abstract:
Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) three-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability make it necessary to use this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it is crucial to use both of these polymers together as a hybrid material in order to overcome the disadvantages of each polymer and combine their advantages.
The scaffolds were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ software (USA) on SEM photographs, the average fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous and continuous bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was 2315±3.4 g.m-2.day-1. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be studied with the L929 ATCC CCL-1 mouse fibroblast cell line. A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials. Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold
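The mean ± SD fiber diameters and the percent-decrease comparison reported above follow from per-fiber measurements of the kind exported from ImageJ. A minimal sketch with hypothetical measurement values (not the study's raw data):

```python
import numpy as np

# hypothetical per-fiber diameter measurements [um], as exported from ImageJ
pcl = np.array([0.72, 0.55, 0.91, 0.68, 0.73])
core_shell = np.array([0.41, 0.38, 0.47, 0.40, 0.43])

# report each structure as mean +/- sample standard deviation
mean_pcl, sd_pcl = pcl.mean(), pcl.std(ddof=1)
mean_cs, sd_cs = core_shell.mean(), core_shell.std(ddof=1)

# percent decrease relative to the reference structure, the same arithmetic
# behind the abstract's 66.91% / 61.90% pore-size comparison
decrease_pct = 100.0 * (mean_pcl - mean_cs) / mean_pcl
```

Using the sample standard deviation (`ddof=1`) matches the ± values typically reported for a small set of measured fibers.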
Procedia PDF Downloads 481
130 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms
Authors: Joy Chadwick
Abstract:
There is no doubt that classrooms today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of the individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in translating theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding, and deep learning, can occur. The threshold concept framework provided a lens for examining teacher candidates' struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine reflective course journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software.
Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidates' understanding of teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they felt they lacked the knowledge, skill, or time to meet the needs of their students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their coursework and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. The research findings have implications for teacher education programs and for understanding threshold concept theory as connected to practice-based learning experiences. Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge
Procedia PDF Downloads 79
129 EEG and DC-Potential Level Changes in the Elderly
Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova
Abstract:
In the modern world, the number of elderly people is increasing, so preserving the functional capacity of the organism in old age is becoming very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This manifests as a reduced rate of information processing, a loss of working memory capacity, and a decreased ability to learn and store new information. Promising directions in the study of the neurophysiological parameters of aging are brain imaging methods: computerized electroencephalography, neuroenergy mapping of the brain, and methods for studying the brain's neurodynamic processes. Research aim: to study features of brain aging in elderly people using the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who had no psychiatric disorders or chronic conditions in a decompensation stage. EEG was recorded with a 128-channel GES-300 system (USA). EEG recordings were collected while the participants sat at rest with their eyes closed for 3 minutes. For a quantitative assessment of the EEG, we used spectral analysis. The spectrum was analyzed in the delta (0.5-3.5 Hz), theta (3.5-7.0 Hz), alpha-1 (7.0-11.0 Hz), alpha-2 (11.0-13.0 Hz), beta-1 (13.0-16.5 Hz) and beta-2 (16.5-20.0 Hz) ranges. Spectral power was estimated in each frequency range. The 12-channel hardware-software diagnostic complex 'Neuroenergometr-KM' was applied for the registration, processing and analysis of the brain's DC-potential level. The DC-potential level was registered in monopolar leads. With aging, the EEG of elderly people showed higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges, especially in the frontal areas.
The comparative analysis showed that elderly people aged 60-64 had higher spectral power values in the alpha-2 range in the left frontal and central areas (p < 0.05), as well as higher beta-1 values in the frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain's DC-potential level revealed an increase in total energy consumption across the main areas of the brain. In the frontal leads, we registered the lowest DC-potential level values; this may indicate decreased energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential difference across the main leads indicates an uneven lateralization of brain functions in elderly people. The potential difference between the right and left hemispheres indicates a predominance of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people in early old age (60-64 years), which points to a higher reserve capacity of the central nervous system. By age 70, there are age-related changes in cerebral energy exchange and in the level of brain electrogenesis, which reflect the deterioration of homeostatic self-regulation mechanisms and of the ongoing processing of perceptual data. Keywords: brain, DC-potential level, EEG, elderly people
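The per-band spectral power estimation described above can be sketched as follows. This is a generic periodogram computation on a synthetic signal, not the study's GES-300 pipeline; the sampling rate, amplitudes, and 10 Hz alpha rhythm are illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Spectral power of one EEG channel in the band [f_lo, f_hi) Hz,
    estimated from the FFT periodogram (e.g. delta: 0.5-3.5 Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

# synthetic 3-minute eyes-closed recording: a 10 Hz alpha rhythm plus noise
fs = 250.0
t = np.arange(0.0, 180.0, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = 20e-6 * np.sin(2 * np.pi * 10.0 * t) + 2e-6 * rng.standard_normal(t.size)

alpha1 = band_power(eeg, fs, 7.0, 11.0)  # contains the 10 Hz peak
delta = band_power(eeg, fs, 0.5, 3.5)    # noise only, in this synthetic case
```

Comparing such band powers between age groups (e.g. delta or theta in frontal leads) is the quantitative step behind the reported p-values.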
Procedia PDF Downloads 484
128 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric modeling algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs, which have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy packages with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line.
The rotation angles of each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time elapsed from a reference epoch along the orbit. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process requires much less cost and effort. Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
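The polynomial interpolation of per-line exterior orientation parameters in time, described above, can be sketched as follows. The epochs and exposure-station coordinates are invented for illustration, not values from the study, and only one coordinate (X) of the six exterior orientation parameters is shown.

```python
import numpy as np

# Hypothetical exposure-station X coordinates (m) sampled at a few epochs t (s);
# a low-order polynomial in t models the smooth orbit arc between samples.
t_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x_known = np.array([1000.0, 1250.0, 1498.0, 1744.0, 1988.0])

# Fit a quadratic interpolation function in time, as the model does per parameter.
coeffs = np.polyfit(t_known, x_known, deg=2)

def exposure_x(t):
    """Exposure-station X for the image line acquired at epoch t."""
    return np.polyval(coeffs, t)
```

In the full model the same interpolation would be fitted for each of the six exterior orientation parameters, and the polynomial coefficients would be refined in the bundle adjustment rather than fixed as here.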
Procedia PDF Downloads 63
127 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations
Authors: Oleg Kabantsev, Karomatullo Umarov
Abstract:
The engineering analysis of earthquake consequences demonstrates significantly different levels of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not explain the increased vulnerability of reinforced concrete frame-with-shear-wall bearing systems. Thus, studying the formation and accumulation of damage in reinforced concrete frame-with-shear-wall structures requires new methods for assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the different mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of nonlinear deformation of structures up to destruction (collapse) make it possible to substantiate the limit-state characteristics of the various structures forming an earthquake-resistant load-bearing system. The research into elastic-plastic deformation processes of reinforced concrete frames with shear walls is carried out on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitations. Limit values of deformations are defined for conditions under which local damage of the maximum permissible level forms in the structures. The research is performed by numerical methods using ETABS software. The results indicate that under earthquake excitations, plastic deformations of various levels form in different groups of elements of the frame-with-shear-wall load-bearing system.
During the main period of seismic excitation, the shear-wall elements of the load-bearing system develop insignificant plastic deformations, significantly below the permissible level. At the same time, plastic deformations form in the columns but do not exceed the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. This volume of plastic deformation leads to an increase in the general deformations of the bearing system. At the specified level of shear-wall deformation, plastic deformations exceeding the limit values develop in the concrete columns, which leads to their collapse. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor common to the entire load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance in buildings with reinforced concrete columns and separate shear walls, it is necessary to use seismic-force-reduction factors differentiated by structural group type. Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis
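A minimal numeric sketch of the core argument above, that a single system-wide reduction factor hides very different ductility demands in walls and columns, might look like this. All capacity and demand numbers are illustrative placeholders, not results from the study.

```python
# Illustrative-only numbers (not from the paper): plastic-deformation (ductility)
# capacity and computed demand for two structural groups of one bearing system.
elements = {
    "shear_wall": {"ductility_capacity": 3.0, "ductility_demand": 1.2},
    "column":     {"ductility_capacity": 2.0, "ductility_demand": 2.6},
}

def utilization(e):
    """Ratio of plastic-deformation demand to capacity; > 1 signals collapse risk."""
    return e["ductility_demand"] / e["ductility_capacity"]

ratios = {name: utilization(e) for name, e in elements.items()}

# A single system-wide seismic-force-reduction factor implicitly assumes one
# ductility level for all groups; a differentiated factor would cap each group
# separately, catching the over-stressed columns that the walls mask.
over_capacity = [name for name, r in ratios.items() if r > 1.0]
```

With these placeholder values the walls sit well below capacity while the columns exceed theirs, which is exactly the situation a single averaged reduction factor fails to predict.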
Procedia PDF Downloads 205
126 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP
Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang
Abstract:
Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using Adjusted R-Squared and Root Mean Squared Error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAPv3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases with 3,745 unique points. 
ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of monthly maximum, minimum, average temperature, and precipitation variances, respectively. It outperforms WorldClim in 37 biologically relevant variables with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher Adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yields lower Adjusted R-squared values than gridded data in high-elevation, rugged, and mountainous areas but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is validated based on evaluations using observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data. Keywords: climate data validation, data quality, Asia Pacific climate, climatic niche modeling, random forest models, tree species
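The evaluation metrics used above, Adjusted R-squared and RMSE of predictions against station observations, can be sketched as follows. The toy monthly-temperature series and the single-predictor assumption are illustrative only; the study applied this to 48 variables over 2,366 stations.

```python
import numpy as np

def evaluate(obs, pred, n_predictors=1):
    """Adjusted R-squared and RMSE of predictions against station observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(obs)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    rmse = np.sqrt(ss_res / n)
    return adj_r2, rmse

# toy monthly-mean temperatures (deg C): station observations vs. two data sources
obs        = [5.1, 7.4, 11.8, 16.2, 20.5, 23.9]
pred_good  = [5.0, 7.6, 11.5, 16.0, 20.9, 23.6]   # stands in for ClimateAP
pred_rough = [6.5, 6.0, 13.5, 14.0, 22.5, 22.0]   # stands in for coarser gridded data
adj_a, rmse_a = evaluate(obs, pred_good)
adj_b, rmse_b = evaluate(obs, pred_rough)
```

A source with a higher Adjusted R-squared and lower RMSE against the stations, as the first series has here, is the one judged more accurate per variable and landform category.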
Procedia PDF Downloads 68
125 Ethical Decision-Making in AI and Robotics Research: A Proposed Model
Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet
Abstract:
Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. 
Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The proposed model adopts a process-oriented approach, delineating the various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical questions they entail, a comprehensive four-point understanding of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) appear to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process. Keywords: ethical decision making, artificial intelligence, robotics, research
Procedia PDF Downloads 79
124 Identification of a Panel of Epigenetic Biomarkers for Early Detection of Hepatocellular Carcinoma in Blood of Individuals with Liver Cirrhosis
Authors: Katarzyna Lubecka, Kirsty Flower, Megan Beetch, Lucinda Kurzava, Hannah Buvala, Samer Gawrieh, Suthat Liangpunsakul, Tracy Gonzalez, George McCabe, Naga Chalasani, James M. Flanagan, Barbara Stefanska
Abstract:
Hepatocellular carcinoma (HCC), the most prevalent type of primary liver cancer, is the second leading cause of cancer death worldwide. Late onset of clinical symptoms in HCC results in late diagnosis and poor disease outcome. Approximately 85% of individuals with HCC have underlying liver cirrhosis. However, not all cirrhotic patients develop cancer. Reliable early detection biomarkers that can distinguish cirrhotic patients who will develop cancer from those who will not are urgently needed and could increase the cure rate from 5% to 80%. We used the Illumina-450K microarray to test whether blood DNA, an easily accessible source of DNA, bears site-specific changes in DNA methylation in response to HCC before diagnosis with conventional tools (pre-diagnostic). The top 11 differentially methylated sites were selected for validation by pyrosequencing. The diagnostic potential of the 11 pyrosequenced probes was tested in blood samples from a prospective cohort of cirrhotic patients. We identified 971 differentially methylated CpG sites in pre-diagnostic HCC cases as compared with healthy controls (P < 0.05, paired Wilcoxon test, ICC ≥ 0.5). Nearly 76% of the differentially methylated CpG sites showed lower levels of methylation in cases vs. controls (P = 2.973E-11, Wilcoxon test). Classification of the CpG sites according to their location relative to CpG islands and transcription start sites revealed that the hypomethylated loci are located in regulatory regions important for gene transcription, such as CpG island shores, promoters, and 5'UTRs, at higher frequency than the hypermethylated sites. Among the 735 CpG sites hypomethylated in cases vs. controls, 482 sites were assigned to gene coding regions, whereas 236 hypermethylated sites corresponded to 160 genes.
Bioinformatics analysis using the GO, KEGG, and DAVID knowledge bases indicates that the differentially methylated CpG sites are located in genes associated with functions essential for gene transcription, cell adhesion, cell migration, and the regulation of signal transduction pathways. Taking into account the magnitude of the difference, statistical significance, location, and consistency across the majority of matched case-control pairs, we selected 11 CpG loci corresponding to 10 genes for further validation by pyrosequencing. We established that methylation of CpG sites within 5 out of those 10 genes distinguishes cirrhotic patients who subsequently developed HCC from those who stayed cancer free (cirrhotic controls), demonstrating potential as biomarkers of early detection in populations at risk. The best predictive value was detected for CpGs located within BARD1 (AUC = 0.70, asymptotic significance < 0.01). Using an additive logistic regression model, we further showed that 9 CpG loci within those 5 genes, which were covered in the pyrosequenced probes, constitute a panel with high diagnostic accuracy (AUC = 0.887; 95% CI: 0.80-0.98). The panel was able to distinguish pre-diagnostic cases from cancer-free cirrhotic controls with 88% sensitivity at 70% specificity. Using blood as a minimally invasive material and pyrosequencing as a straightforward quantitative method, the established biomarker panel has high potential to be developed into a routine clinical test after validation in larger cohorts. This study was supported by the Showalter Trust, the American Cancer Society (IRG#14-190-56), and the Purdue Center for Cancer Research (P30 CA023168), granted to BS. Keywords: biomarker, DNA methylation, early detection, hepatocellular carcinoma
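The additive-panel idea above can be sketched with a rank-based AUC computation. The beta-values and the two-locus panel below are invented for illustration; since cases are hypomethylated, low panel scores flag cancer.

```python
# Toy illustration (numbers invented): methylation beta-values at two CpG loci
# for pre-diagnostic cases and cirrhotic controls; the panel score is additive.
cases    = [(0.20, 0.25), (0.15, 0.30), (0.22, 0.18), (0.10, 0.28)]
controls = [(0.45, 0.50), (0.40, 0.38), (0.55, 0.42), (0.25, 0.18)]

def score(sample):
    """Additive panel score; hypomethylation in cases makes LOW scores 'positive'."""
    return sum(sample)

def auc(pos, neg):
    """Probability that a random case scores below a random control (lower = cancer)."""
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p < n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

panel_auc = auc([score(s) for s in cases], [score(s) for s in controls])
```

This pairwise-comparison form of the AUC is equivalent to the area under the ROC curve that the study reports for its 9-locus panel; a fitted logistic regression would simply replace the plain sum with weighted coefficients.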
Procedia PDF Downloads 304
123 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for detailed quantitative analyses of the microbubbles.
The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W and US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over 2 seconds of imaging, the effect of US power and frequency on the average number, size, and area fraction occupied by bubbles was analyzed. Microbubble dynamics in terms of velocity in water was also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 at 40 kHz), while the average bubble size was almost seven times larger than at 40 kHz. The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency due to the higher energy release to the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be 20 kHz and 100 W). Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
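The ImageJ-style bubble counting and sizing described above amounts to connected-component labeling of a thresholded frame. The following pure-Python sketch uses an invented binary frame for illustration; real frames would be thresholded grayscale projections.

```python
# Minimal connected-component sketch of the ImageJ-style analysis: count
# bubbles and measure their pixel areas in a thresholded (binary) frame.
frame = [
    "..XX......",
    "..XX..X...",
    "......XX..",
    ".X........",
    ".X........",
]
grid = [[c == "X" for c in row] for row in frame]

def bubble_areas(grid):
    """Label 4-connected foreground regions; return each region's area in pixels."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                stack, area = [(i, j)], 0
                seen[i][j] = True
                while stack:                      # flood fill one bubble
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

areas = bubble_areas(grid)
```

From the per-bubble areas one can derive the quantities reported above: bubble count, an equivalent diameter per bubble, and the fraction of the frame area occupied by bubbles.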
Procedia PDF Downloads 149
122 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength
Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma
Abstract:
The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve outcomes in muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly impacted by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The use of magnetometers to capture muscle magnetic field signals remains unaffected by issues of perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs such as the heart and brain. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology.
Passive magnetic shielding technology uses a shielding device built from high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed via the Biot-Savart law are used to generate a counteractive magnetic field that eliminates the residual fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Mathematical optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, yielding a coefficient of determination R2 of 0.9349 with a p-value of effectively zero. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to be a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients. Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions
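The one-dimensional linear regression relating the peak-to-peak value to grip strength can be sketched as ordinary least squares. The PPV and grip numbers below are invented for illustration; the study's R2 of 0.9349 came from its own measurements.

```python
# Toy numbers (not the study's data): peak-to-peak muscle-field values (pT)
# vs. grip strength (kg); ordinary least squares with R-squared by hand.
ppv  = [1.2, 1.8, 2.4, 3.1, 3.9, 4.6]
grip = [12.0, 17.5, 23.0, 30.0, 38.5, 45.0]

n = len(ppv)
mx, my = sum(ppv) / n, sum(grip) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ppv, grip))
         / sum((x - mx) ** 2 for x in ppv))
intercept = my - slope * mx

pred = [slope * x + intercept for x in ppv]
ss_res = sum((y - p) ** 2 for y, p in zip(grip, pred))
ss_tot = sum((y - my) ** 2 for y in grip)
r_squared = 1.0 - ss_res / ss_tot   # coefficient of determination
```

A positive slope with a high coefficient of determination is what justifies reading grip strength off the magnetic PPV, as the abstract argues.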
Procedia PDF Downloads 56
121 The Governance of Net-Zero Emission Urban Bus Transitions in the United Kingdom: Insight from a Transition Visioning Stakeholder Workshop
Authors: Iraklis Argyriou
Abstract:
The transition to net-zero emission urban bus (ZEB) systems is receiving increased attention in research and policymaking throughout the globe. Most studies in this area tend to address techno-economic aspects and the perspectives of a narrow group of stakeholders, while they largely overlook analysis of current bus system dynamics. This offers limited insight into the types of ZEB governance challenges and opportunities that are encountered in real-world contexts, as well as into some of the immediate actions that need to be taken to set off the transition over the longer term. This research offers a multi-stakeholder perspective into both the technical and non-technical factors that influence ZEB transitions within a particular context, the UK. It does so by drawing from a recent transition visioning stakeholder workshop (June 2023) with key public, private and civic actors of the urban bus transportation system. Using NVivo software to qualitatively analyze the workshop discussions, the research examines the key technological and funding aspects, as well as the short-term actions (over the next five years), that need to be addressed for supporting the ZEB transition in UK cities. It finds that ZEB technology has reached a mature stage (i.e., high efficiency of batteries, motors and inverters), but important improvements can be pursued through greater control and integration of ZEB technological components and systems. In this regard, telemetry, predictive maintenance and adaptive control strategies pertinent to the performance and operation of ZEB vehicles have a key role to play in the techno-economic advancement of the transition. Yet, more pressing gaps were identified in the current ZEB funding regime. Whereas the UK central government supports greater ZEB adoption through a series of grants and subsidies, the scale of the funding and its fragmented nature do not match the needs for a UK-wide transition. 
Funding devolution arrangements (i.e., stable funding settlement deals between the central government and the devolved administrations/local authorities), as well as locally-driven schemes (e.g., congestion charging/workplace parking levy), could then enhance the financial prospects of the transition. As for short-term action, three areas were identified as critical: (1) the creation of whole value chains around the supply, use and recycling of ZEB components; (2) the ZEB retrofitting of existing fleets; and (3) integrated transportation that prioritizes buses as a first-choice, convenient and reliable mode while it simultaneously reduces car dependency in urban areas. Taken together, the findings point to the need for place-based transition approaches that create a viable techno-economic ecosystem for ZEB development but at the same time adopt a broader governance perspective beyond a ‘net-zero’ and ‘bus sectoral’ focus. As such, multi-actor collaborations and the coordination of wider resources and agency, both vertically across institutional scales and horizontally across transport, energy and urban planning, become fundamental features of comprehensive ZEB responses. The lessons from the UK case can inform a broader body of empirical contextual knowledge of ZEB transition governance within domestic political economies of public transportation. Keywords: net-zero emission transition, stakeholders, transition governance, UK, urban bus transportation
Procedia PDF Downloads 75
120 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft Robotics is a rapidly growing multidisciplinary field where robots are fabricated using highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to external environments during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart material actuation. Among them, the Soft Pneumatic Actuator, also known as the SPA, remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, in which the mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in the direct 3D printing of soft materials can significantly reduce the repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the Soft Pneumatic Actuator (SPA) utilizing both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated using a lab oven. The bending behaviour is observed experimentally at different air-compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out in Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM accounts for large deformations, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure elements are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary condition is applied to the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from the FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A performance comparison is made between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects. Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
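The 2nd-order Yeoh strain-energy density used in the FEM model has the standard form W = C10(I1 − 3) + C20(I1 − 3)². The sketch below evaluates it for an incompressible uniaxial stretch; the coefficients are placeholders, since the fitted Ecoflex values are not given in the abstract.

```python
# Sketch of the 2nd-order Yeoh hyperelastic strain-energy density used in the
# FEM model; C10 and C20 are illustrative placeholders (MPa), not fitted values.
C10, C20 = 0.011, 0.02

def yeoh_energy(stretch):
    """Strain energy density W for an incompressible uniaxial stretch ratio."""
    i1 = stretch ** 2 + 2.0 / stretch          # first strain invariant, uniaxial
    return C10 * (i1 - 3.0) + C20 * (i1 - 3.0) ** 2

w_rest = yeoh_energy(1.0)       # undeformed state: I1 = 3, so W = 0
w_stretched = yeoh_energy(1.5)  # 50% stretch stores positive strain energy
```

The FEM solver minimizes the total strain energy of this form over the whole mesh under the applied chamber pressure, which is what yields the nonlinear bending response compared against the experiments.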
Procedia PDF Downloads 90
119 Gamification of eHealth Business Cases to Enhance Rich Learning Experience
Authors: Kari Björn
Abstract:
The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and can enrich the learning experience. The institutional background of a Bachelor's-level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected on in a gamified framework. Learning a consistent view of the vast literature on business management, strategy, marketing, and finance in very limited time necessitates selecting topics relevant to the industry. Health Technology is a novel and growing industry, with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturizing electronics, sensors, and wireless applications. However, the market is highly consumer-driven, and usability, safety, and data-integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn what is most essential in a new market: customer value proposition design, followed by a market-entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its own rules of demand and supply balance. New products can be developed with an R&D investment and targeted to market with unique quality and price combinations. 
The product cost structure can be improved by investing in enhanced production capacity, and investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is understanding that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted. Improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way. Keywords: engineering education, integrated curriculum, learning experience, learning outcomes
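The mechanics of such a market-entry game can be illustrated with a minimal sketch of one decision cycle: each firm picks a price, quality, and R&D spend; demand is split by a simple value-for-money rule; and each firm receives a rudimentary income statement. All names, numbers, and the allocation formula below are hypothetical illustrations, not the course's actual game engine.

```python
# Hypothetical sketch of one decision cycle in a market-entry game.
# Demand is allocated in proportion to each firm's quality/price ratio;
# profit = revenue - cost of goods sold - R&D spend.

def decision_cycle(firms, market_demand=1000, unit_cost=20):
    # Attractiveness: higher quality and lower price win more customers.
    weights = {name: f["quality"] / f["price"] for name, f in firms.items()}
    total = sum(weights.values())
    statements = {}
    for name, f in firms.items():
        units = market_demand * weights[name] / total
        revenue = units * f["price"]
        cogs = units * unit_cost
        # R&D spend reduces profit now; in a full game it would raise
        # quality in later cycles.
        profit = revenue - cogs - f["rnd_spend"]
        statements[name] = {"units": round(units), "revenue": revenue,
                            "profit": profit}
    return statements

firms = {
    "Legacy Co": {"price": 40.0, "quality": 1.0, "rnd_spend": 0},
    "Startup A": {"price": 30.0, "quality": 1.2, "rnd_spend": 2000},
}
result = decision_cycle(firms)
```

In this toy round, the cheaper, higher-quality entrant captures more units but earns less profit because of its R&D outlay, which is exactly the kind of trade-off the income statement is meant to surface for students.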
118 Solid State Fermentation: A Technological Alternative for Enriching Bioavailability of Underutilized Crops
Authors: Vipin Bhandari, Anupama Singh, Kopal Gupta
Abstract:
Solid state fermentation (SSF), an eminent bioconversion technique for converting many biological substrates into value-added products, has a proven role in the biotransformation of crops by enriching them nutritionally. Hence, an effort was made to enhance the nutritional characteristics of a composite flour based on the underutilized crops barnyard millet, amaranthus, and horsegram, using SSF as the principal bioconversion technique. The grains were given pre-treatments before fermentation, and these pre-treatments proved quite effective in diminishing the level of antinutrients in the grains and in improving their nutritional characteristics. Response surface methodology was used to design the experiments. The variables selected for the fermentation experiments were substrate particle size, substrate blend ratio, fermentation time, fermentation temperature, and moisture content, each at three levels. Seventeen designed experiments were conducted in random order to find the effect of these variables on microbial count, reducing sugar, pH, total sugar, phytic acid, and water absorption index. The data from all experiments were analyzed using Design Expert 8.0.6; the response functions were developed using multiple regression analysis, and second-order models were fitted for each response. Results revealed that the pre-treatments were quite effective in diminishing the level of antinutrients and thus appreciably enhancing the nutritional value of the grains; for instance, there was about a 23% reduction in phytic acid levels after decortication of barnyard millet. The carbohydrate content of the decorticated barnyard millet increased from an initial value of 65.2% to 81.5%. 
Similarly, popping of horsegram and puffing of amaranthus greatly reduced the trypsin inhibitor activity; puffing of amaranthus also appreciably reduced the tannin content. Bacillus subtilis was used as the inoculating species, since it is known to produce phytases in solid state fermentation systems. These phytases remarkably reduce the phytic acid content, which acts as a major antinutritional factor in food grains. Results of the solid state fermentation experiments revealed that phytic acid levels reduced appreciably when fermentation was allowed to continue for 72 hours at a temperature of 35°C. Particle size and substrate blend ratio also affected the responses positively. All the parameters (substrate particle size, substrate blend ratio, fermentation time, fermentation temperature, and moisture content) affected the responses (microbial count, reducing sugar, pH, total sugar, phytic acid, and water absorption index), but the effect of fermentation time was found to be the most significant for all the responses. Statistical analysis yielded the optimum conditions for maximum reduction in phytic acid: particle size 355 µm, substrate blend ratio 50:20:30 of barnyard millet, amaranthus, and horsegram respectively, fermentation time 68 h, fermentation temperature 35°C, and moisture content 47%. The model F-value was found to be highly significant at the 1% level of significance for all the responses; hence, second-order models could be fitted to predict all the dependent parameters. Keywords: composite flour, solid state fermentation, underutilized crops, cereals, fermentation technology, food processing
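The second-order response-surface models described above can be sketched as an ordinary least-squares fit of a quadratic polynomial in the coded factors. The example below uses only two of the five factors and synthetic data (the coefficients and noise level are assumptions for illustration, not the study's phytic-acid measurements or Design Expert output).

```python
import numpy as np

# Second-order (quadratic) response-surface model for two coded factors,
# e.g. fermentation time (x1) and temperature (x2):
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2

def quadratic_design_matrix(x1, x2):
    # Columns: intercept, linear terms, interaction, squared terms.
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2])

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 17)               # 17 runs, as in the design
x2 = rng.uniform(-1, 1, 17)
true_coef = [5.0, -1.2, 0.4, 0.3, 0.8, -0.5]   # assumed "true" model
y = quadratic_design_matrix(x1, x2) @ true_coef + rng.normal(0, 0.05, 17)

X = quadratic_design_matrix(x1, x2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef now approximates the six regression coefficients of the
# second-order model, which can then be optimized over the factor space.
```

In practice, the significance of each fitted term would be judged by its F-value, mirroring the 1%-level significance test reported in the abstract.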
117 Cancer Stem Cell-Associated Serum Proteins Obtained by MALDI TOF/TOF Mass Spectrometry in Women with Triple-Negative Breast Cancer
Authors: Javier Enciso-Benavides, Fredy Fabian, Carlos Castaneda, Luis Alfaro, Alex Choque, Aparicio Aguilar, Javier Enciso
Abstract:
Background: The use of biomarkers in breast cancer diagnosis, therapy, and prognosis has gained increasing interest. Cancer stem cells (CSCs) are a subpopulation of tumor cells that can drive tumor initiation and may cause relapse. Therefore, owing to their importance for diagnosis, therapy, and prognosis, several biomarkers that characterize CSCs have been identified; however, in treatment-naïve triple-negative breast tumors, there is an urgent need to identify new biomarkers and therapeutic targets. Accordingly, the aim of this study was to identify serum proteins associated with cancer stem cells and pluripotency in women with triple-negative breast tumors, in order to subsequently identify a biomarker for this type of breast tumor. Material and Methods: Whole blood samples from 12 women with histopathologically diagnosed triple-negative breast tumors were used after obtaining informed consent from each patient. Blood serum was obtained by the conventional procedure and frozen at -80°C. Identification of cancer stem cell-associated proteins was performed by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS); protein analysis was carried out using the AB Sciex TOF/TOF™ 5800 system (AB Sciex, USA). Sequences not aligned by ProteinPilot™ software were analyzed with Protein BLAST. 
Results: The following proteins related to pluripotency and cancer stem cells were identified by MALDI TOF/TOF mass spectrometry: A-chain, Serpin A12 [Homo sapiens]; AIEBP [Homo sapiens]; Alpha-1-antitrypsin, AT {internal fragment} [human, partial peptide, 20 aa] [Homo sapiens]; collagen alpha 1 chain precursor variant [Homo sapiens]; retinoblastoma-associated protein variant [Homo sapiens]; insulin receptor, CRA_c isoform [Homo sapiens]; Hydroxyisourate hydrolase [Streptomyces scopuliridis]; MUCIN-6 [Macaca mulatta]; Alpha-actinin-3 [Chrysochloris asiatica]; Polyprotein M, CRA_d isoform, partial [Homo sapiens]; Transcription factor SOX-12 [Homo sapiens]. Recommendations: The serum proteins identified in this study should be investigated in the exosomes of triple-negative breast cancer stem cells and in the blood serum of women without breast cancer. Subsequently, proteins found only in the blood serum of women with triple-negative breast cancer should be identified in situ in triple-negative breast cancer tissue, in order to identify a biomarker for studying the evolution of this type of cancer or a possible therapeutic target. Conclusions: Eleven cancer stem cell-related serum proteins were identified in 12 women with triple-negative breast cancer, of which MUCIN-6, retinoblastoma-associated protein variant, transcription factor SOX-12, and collagen alpha 1 chain are the most representative and have not been studied so far in this type of breast tumor. Acknowledgement: This work was supported by Proyecto CONCYTEC–Banco Mundial “Mejoramiento y Ampliacion de los Servicios del Sistema Nacional de Ciencia Tecnología e Innovacion Tecnologica” 8682-PE (104-2018-FONDECYT-BM-IADT-AV). Keywords: triple-negative breast cancer, MALDI TOF/TOF MS, serum proteins, cancer stem cells