Search results for: zinc amino acid complex (ZnAA)

376 Analytical, Numerical, and Experimental Research Approaches to the Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps

Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev

Abstract:

The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical, and experimental approaches are considered. A hypothesis was developed that the unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring upon vibration. To date, the problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically. Hydroelastic processes in the dynamic “liquid – deformed structure” system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the cavitation parameters were shown to depend on the vibration parameters. A domain of vibration amplitudes and frequencies influencing the concentration of cavitation bubbles was identified. The obtained numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of differential equations in hyperbolic and elliptic partial derivatives. The system is solved using a finite-difference technique, the particle-in-cell method, which defines the problem solution algorithm. The obtained numerical solution was also verified analytically by model problem calculations using known analytical solutions for in-pipe piston movement and for an impact on the end face of a cantilever rod. An infrastructure consisting of an experimental installation for researching fast hydrodynamic processes and a supercomputer connected by a high-speed network was created to verify the obtained numerical solutions. Physical experiments included measurement, recording, processing, and analysis of data for fast-process research using a National Instruments signal measurement system and LabVIEW software. The end face of the model chamber oscillated during the physical experiments and thus loaded the hydraulic volume. The loading frequency varied from 0 to 5 kHz. The length of the operating chamber varied from 0.4 to 1.0 m. Additional loads weighed from 2 to 10 kg. The liquid column height varied from 0.4 to 1 m. The liquid pressure history was recorded. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency at various values of the operating chamber geometrical dimensions, liquid column height, and structure weight. Maximum pressure oscillation amplitudes (in the basic variant) were observed at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions obtained in ANSYS and CFM.
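
As an illustration of the kind of frequency dependence reported above (peak pressure oscillation near 1.5 kHz), the following is a minimal sketch that treats the loaded liquid column as a single-degree-of-freedom damped driven oscillator; the actual study solves the coupled “liquid – deformed structure” problem in ANSYS CFX and CFM, and all parameter values below are hypothetical.

```python
import numpy as np

def forced_amplitude(f_drive, f_nat, zeta, f0_per_mass):
    """Steady-state amplitude of a damped, harmonically driven
    single-degree-of-freedom oscillator (illustrative stand-in only)."""
    w = 2 * np.pi * f_drive        # driving angular frequency, rad/s
    w0 = 2 * np.pi * f_nat         # natural angular frequency, rad/s
    return f0_per_mass / np.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

# Hypothetical parameters: natural frequency near the observed ~1.5 kHz peak
freqs = np.linspace(0, 5000, 501)   # loading frequency sweep, 0-5 kHz
amps = forced_amplitude(freqs, f_nat=1500.0, zeta=0.05, f0_per_mass=1.0)
print(f"Peak response at ~{freqs[np.argmax(amps)]:.0f} Hz")
```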

Keywords: computing experiment, hydroelasticity, physical experiment, vibration

Procedia PDF Downloads 244
375 ReactorDesign App: Interactive Software for Self-Directed Explorative Learning

Authors: Chia Wei Lim, Ning Yan

Abstract:

The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans towards the use of software for the design and daily monitoring of chemical plants. As such, there has been a widening learning gap as chemical engineering graduates transition from university to the industry since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TELs in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity for us to utilize the readily available easy-to-use MATLAB App platform to create an educational tool to aid the learning of fundamental concepts of reactor design and to link these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without being overly focused on the tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers. In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing concepts taught, complementing homework assignments, and aiding exam revision. Accordingly, students are able to identify any lapses in understanding and clarify them accordingly. In terms of the topics covered, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app has “helped [them] with understanding the unit” and 87% of the respondents agreeing or strongly agreeing that the app “offers learning flexibility”, compared to the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and demonstrate the industrial applications of the taught design concepts.
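
To make the reactor sizing task concrete, here is a minimal sketch of the kind of design-equation calculation the abstract refers to, assuming an isothermal, liquid-phase, first-order reaction; the rate law, kinetic constants, and flow conditions are illustrative assumptions, not values taken from the ReactorDesign App.

```python
from scipy.integrate import quad

# Assumed (hypothetical) design basis: liquid-phase, isothermal, first-order A -> B
k = 0.5          # rate constant, 1/min
C_A0 = 2.0       # feed concentration, mol/L
v0 = 10.0        # volumetric flow rate, L/min
F_A0 = C_A0 * v0 # molar feed rate, mol/min
X = 0.8          # target conversion

def rate(x):
    """-r_A for first-order kinetics at constant density."""
    return k * C_A0 * (1.0 - x)

V_cstr = F_A0 * X / rate(X)                      # algebraic CSTR design equation
V_pfr, _ = quad(lambda x: F_A0 / rate(x), 0, X)  # integral PFR design equation
print(f"CSTR volume: {V_cstr:.1f} L, PFR volume: {V_pfr:.1f} L")
```

For a positive-order reaction the PFR volume comes out smaller than the CSTR volume at the same conversion, which is the sort of comparison the app lets students explore interactively.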

Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning

Procedia PDF Downloads 93
374 Small and Medium-Sized Enterprises, Flash Flooding and Organisational Resilience Capacity: Qualitative Findings on Implications of the Catastrophic 2017 Flash Flood Event in Mandra, Greece

Authors: Antonis Skouloudis, Georgios Deligiannakis, Panagiotis Vouros, Konstantinos Evangelinos, Ioannis Nikolaou

Abstract:

On November 15th, 2017, a catastrophic flash flood devastated the city of Mandra in Central Greece, resulting in 24 fatalities and extensive damage to the built environment and infrastructure. It was Greece's deadliest and most destructive flood event of the past 40 years. In this paper, we examine the consequences of this event for small and medium-sized enterprises (SMEs) operating in Mandra during the flood event, which were affected by the floodwaters to varying extents. In this context, we conducted semi-structured interviews with business owners-managers of 45 SMEs located in flood-inundated areas and still active today, based on an interview guide that spanned 27 topics. The topics pertained to the disaster experience of the business and the business owners-managers, knowledge and attitudes towards climate change and extreme weather, and aspects of disaster preparedness and related assistance needs. Our findings reveal that the vast majority of the affected businesses experienced heavy damage to equipment and infrastructure or total destruction, which resulted in business interruption lasting from several weeks up to several months. Assistance from relatives or friends helped with damage repairs and business recovery, while state compensation was deemed insufficient compared to the extent of the damage. Most interviewees pinpointed flooding as one of the most critical risks, and many connected it with the climate crisis. However, they are either unwilling or unable to apply property-level prevention measures in their businesses due to cost considerations or complex and cumbersome bureaucratic processes. In all cases, the business owners are fully aware of the flood hazard implications, and since recovering from the event, they have engaged in basic mitigation measures and contingency plans in case of future flood events. Such plans include insurance contracts whenever possible (as the vast majority of the affected SMEs were uninsured at the time of the 2017 event) as well as simple relocation of critical equipment within their property. The study offers fruitful insights into latent drivers and barriers of SMEs' resilience capacity to flash flooding. In this respect, findings such as ours, highlighting tensions that underpin behavioral responses and experiences, can feed into a) bottom-up approaches for devising actionable and practical guidelines, manuals and/or standards on business preparedness for flooding, and, ultimately, b) policy-making for an enabling environment towards a flood-resilient SME sector.

Keywords: flash flood, small and medium-sized enterprises, organizational resilience capacity, disaster preparedness, qualitative study

Procedia PDF Downloads 132
373 Phenotypic and Molecular Heterogeneity Linked to the Magnesium Transporter CNNM2

Authors: Reham Khalaf-Nazzal, Imad Dweikat, Paula Gimenez, Iker Oyenarte, Alfonso Martinez-Cruz, Dominik Muller

Abstract:

The metal cation transport mediator (CNNM) gene family comprises four isoforms that are expressed in various human tissues. Structurally, CNNMs are complex proteins that contain an extracellular N-terminal domain preceding a DUF21 transmembrane domain, a ‘Bateman module’, and a C-terminal cNMP-binding domain. Mutations in CNNM2 cause familial dominant hypomagnesaemia. Growing evidence highlights the role of CNNM2 in neurodevelopment, and mutations in CNNM2 have been implicated in epilepsy, intellectual disability, schizophrenia, and other disorders. In the present study, we aim to elucidate the function of CNNM2 in the developing brain, and we present the genetic origin of symptoms in two family cohorts. In the first family, a consanguineous Palestinian family in which the parents are first cousins and consanguinity has run over several generations, three siblings presented with varying degrees of intellectual disability, cone-rod dystrophy, and autism spectrum disorder. Exome sequencing and segregation analysis revealed a homozygous pathogenic mutation in the CNNM2 gene; the parents were heterozygous for the mutation. Blood magnesium levels were normal in the three children and their parents across several measurements, and they had no symptoms of hypomagnesaemia. The CNNM2 mutation in this family was located in the CBS1 domain of the CNNM2 protein. The crystal structure of the mutated CNNM2 protein was not significantly different from that of the wild-type protein, and the binding of AMP or MgATP was not dramatically affected. This suggests that the CBS1 domain could be involved in purely neurodevelopmental functions independent of its magnesium-handling role, and that this mutation could have affected the binding of a protein partner or other functions of the protein. In the second family, another autosomal dominant CNNM2 mutation was found to run in a large family with multiple affected individuals over three generations. All affected family members had hypomagnesaemia and hypermagnesuria. Oral supplementation of magnesium did not increase serum magnesium levels significantly. Some affected members of this family have deficits in fine motor skills as well as dyslexia and dyslalia. The detected mutation is located in the N-terminal part, which contains a signal peptide thought to be involved in the sorting and routing of the protein. In this project, we describe heterogeneous clinical phenotypes related to CNNM2 mutations and protein functions. In the first family, and to the authors’ knowledge for the first time, we report the involvement of CNNM2 in retinal photoreceptor development and function. In addition, we report the presence of a neurophenotype independent of magnesium status related to the CNNM2 protein mutation. Taking into account the different modes of inheritance and the different positions of the mutations within CNNM2 and its different structural and functional domains, it is likely that CNNM2 is involved in a wide spectrum of neuropsychiatric comorbidities with considerably varying phenotypes.

Keywords: magnesium transport, autosomal recessive, autism, neurodevelopment, CBS domain

Procedia PDF Downloads 150
372 Two-Wavelength High-Energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography

Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko

Abstract:

The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use, and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The currently used titanium-sapphire and alexandrite lasers or optical parametric oscillators do not provide the required stable output characteristics; they are structurally complex, and their cost can reach half the price of diagnostic optoacoustic systems. Here we develop lasers based on Cr:LiCaAlF6 crystals that are free of the above-mentioned disadvantages and provide intense, tunable laser pulses of tens of nanoseconds at the specific absorption bands of oxyhemoglobin (~840 nm) and deoxyhemoglobin (~757 nm) in blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (the Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The optical axis of the crystal was normal to the cylinder generatrix, which provides π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual xenon flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer, and a Q-switch cell, makes it possible to operate sequentially in a cycle (one ~50 ns laser pulse after another) at wavelengths of 757 and 840 nm. The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J over 180 μs) to equalize the laser radiation intensity at the two wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration of the laser pulses and their energy allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive laser design will very likely improve the output properties and enable better spatial resolution in medical multispectral optoacoustic tomography systems.
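
For orientation, a minimal sketch of the pulse-energy arithmetic implied by the figures above (210 mJ per pulse at a 10 Hz repetition rate); the 50 ns pulse duration used for the peak-power estimate is an assumption based on the stated tens-of-nanoseconds range, not a measured value.

```python
pulse_energy_j = 0.210      # up to 210 mJ per pulse (from the abstract)
rep_rate_hz = 10.0          # 10 Hz pulse repetition rate (from the abstract)
pulse_duration_s = 50e-9    # assumed ~50 ns pulse duration (tens-of-ns range)

average_power_w = pulse_energy_j * rep_rate_hz     # time-averaged optical power
peak_power_w = pulse_energy_j / pulse_duration_s   # approximate peak power

print(f"Average power: {average_power_w:.1f} W")   # ~2.1 W
print(f"Peak power: {peak_power_w / 1e6:.1f} MW")  # ~4.2 MW
```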

Keywords: medical optoacoustics, endogenous contrast agent, multiwavelength tunable pulsed lasers, MOPA laser system

Procedia PDF Downloads 101
371 Moringa oleifera Mitigates the Toxic Potential of CuO Nanoparticles in Oreochromis mossambicus

Authors: Farhat Jabeen, Muhammad Asad

Abstract:

The study assessed the curative potential of Moringa oleifera seeds against copper oxide nanoparticle-induced toxicity in Oreochromis mossambicus. To investigate the curative potential of M. oleifera seeds, we first examined their chemical composition, secondary metabolites, and bioactive compounds, including hydroxycinnamic acids, flavanols, and hydroxybenzoic acids, through standard methods and high-performance liquid chromatography. In the current study, a sub-lethal toxic dose of CuO-NPs (0.12 mg/l) was identified through a pilot experiment, and three non-lethal doses of M. oleifera (low = 32, medium = 48, and high = 96 mg/l) were selected on the basis of its LC50 value for O. mossambicus. The experimental fish, O. mossambicus (n = 100, approximately 20 g each), were procured from Manawan Fisheries Complex, Lahore, and acclimatized for two weeks in glass aquaria. The experiment was conducted in accordance with the guidelines of the Institutional Animal Ethics Committee, Government College University Faisalabad, Pakistan. During the acclimatization and experimental periods, fish received commercial fish feed at 2.5% of body weight daily. To assess the curative effect of M. oleifera against CuO-NP-induced toxicity, O. mossambicus were randomly divided into five groups: a control (C) without any treatment, a positive control (G*) exposed to the toxic dose of CuO-NPs at 0.12 mg/l, and three treated groups (G1, G2, and G3) co-treated with 0.12 mg/l of CuO-NPs plus M. oleifera seed extract at 32, 48, and 96 mg/l, respectively, for 56 days. Fish were exposed to waterborne CuO-NPs and M. oleifera seed extract. The CuO-NP treatment was ceased after 28 days, but the doses of M. oleifera were continued for 56 days. Blood was taken after 28 and 56 days through caudal venipuncture. Liver and intestine were sampled for oxidative stress and histological studies after 56 days. In M. oleifera seeds, moisture content, crude protein, lipids, carbohydrates, and ash were recorded as 3.8, 37.83, 32.52, 46.12, and 7.75%, respectively, on a dry weight basis. Total energy was recorded as 627.36 kcal/100 g. Qualitative analysis of M. oleifera seeds showed the presence of terpenoids, saponins, flavonoids, alkaloids, and phenolics, while quantitative analysis showed considerable amounts of total phenolics, flavonoids, saponins, and alkaloids at 134.75, 170.15, 1.57, and 0.4 µg/mg, respectively. Analysis of bioactive compounds in M. oleifera seeds showed the presence of hydroxycinnamic acids (6.07 µg/ml), flavanols (71.72 µg/ml), and hydroxybenzoic acids (97.82 µg/ml). The results showed that M. oleifera seed extract at 48 and 96 mg/l was able to counteract the toxic effects of CuO-NPs. Significant changes were observed in G* and G1 for sero-hepatic enzymes, antioxidants, and histological profiles. The findings of this study show that M. oleifera is a good curative agent against CuO-NP-induced toxicity in O. mossambicus. The curative effect of M. oleifera is attributed to its high content of secondary metabolites and bioactive compounds. This study suggests the use of M. oleifera to treat different ailments in fish and other organisms.

Keywords: CuO nanoparticles, curative, Moringa oleifera, Oreochromis mossambicus

Procedia PDF Downloads 144
370 Treatment of Wastewater by Constructed Wetland Eco-Technology: Plant Species Alters the Performance and the Enrichment of Bacteria

Authors: Kraiem Khadija, Hamadi Kallali, Naceur Jedidi

Abstract:

Constructed wetland systems are an eco-technology recognized as an environmentally friendly, emerging, and innovative remediation solution, as these systems are cost-effective and sustainable wastewater treatment systems. The performance of these biological systems is affected by various factors such as plant, substrate, wastewater type, hydraulic loading rate, hydraulic retention time, water depth, and operation mode. The objective of this study was to assess the effect of plant species on pollutant reduction and on the enrichment of anammox and nitrifying-denitrifying bacteria in a modified vertical flow constructed wetland (VFCW). The tests were carried out using three modified vertical flow constructed wetlands, each with a surface area of 0.23 m² and a depth of 80 cm. Each wetland was saturated at the bottom, with the saturation zone maintained by a siphon structure at the outlet. The VFCW1 system was unplanted, VFCW2 was planted with Typha angustifolia, and VFCW3 was planted with Phragmites australis. The experimental units were fed with domestic wastewater and operated in batch mode for 8 months at an average hydraulic loading rate of around 20 cm/day. The operation cycle was two days of feeding and five days of rest. Results indicated that the presence of plants improved removal efficiency; the removal rates of organic matter (85.1–90.9% for COD and 81.8–88.9% for BOD5) and nitrogen (54.2–73% for NTK and 66–77% for NH4-N) were higher by 10.7–30.1% compared to the unplanted vertical flow constructed wetland. On the other hand, the plant species had no significant effect on the removal efficiency of COD; COD removal was similar in VFCW2 and VFCW3 (p > 0.05), attaining average removal efficiencies of 88.7% and 85.2%, respectively. It did, however, have a significant effect on NTK removal (p < 0.05), with average removal rates of 72% versus 51% for VFCW2 and VFCW3, respectively. Among the three vertical flow constructed wetlands, VFCW2 removed the highest percentages of total streptococci, fecal streptococci, total coliforms, fecal coliforms, and E. coli, at 59, 62, 52, 63, and 58%, respectively. The presence and species of plants altered the community composition and abundance of the bacteria, and the abundance of bacteria in the planted wetlands was much higher than in the unplanted one. VFCW3 had the highest relative abundance of nitrifying bacteria such as Nitrosospira (18%), Nitrosospira (12%), and Nitrobacter (8%), whereas the vertical flow constructed wetland planted with Typha had a larger number of denitrifying species, with relative abundances of Aeromonas (13%), Paracoccus (11%), Thauera (7%), and Thiobacillus (6%). However, the abundance of nitrifying bacteria was much lower in this system than in VFCW3. Interestingly, the presence of Typha angustifolia favored the enrichment of anammox bacteria compared to the unplanted system and the system planted with Phragmites australis. The results showed that the middle layer had the greatest accumulation of anammox bacteria, where the anaerobic conditions are better and the root system is moderate. Vegetation has several characteristics that make it an essential component of wetlands, but its exact effects are complex and debated.
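
For reference, the removal rates quoted above are typically computed as the percentage reduction from influent to effluent concentration; a minimal sketch follows, with purely hypothetical influent/effluent values rather than data from this study.

```python
def removal_efficiency(c_in, c_out):
    """Percent removal from influent and effluent concentrations."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical influent/effluent concentrations (mg/L), not taken from the study
samples = {"COD": (410.0, 47.0), "BOD5": (220.0, 28.0), "NH4-N": (45.0, 11.0)}
for param, (cin, cout) in samples.items():
    print(f"{param}: {removal_efficiency(cin, cout):.1f}% removal")
```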

Keywords: wastewater, constructed wetland, anammox, removal

Procedia PDF Downloads 104
369 Development and Adaptation of an LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During COVID-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have suffered due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves the capabilities of standard decision tree models in both speed and reduction of memory consumption, while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as the COVID-19 lockdowns. The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
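
A minimal sketch of a day-ahead household load forecaster of the kind described above, using the LightGBM scikit-learn interface with simple lag/calendar features and RMSE as the error metric; the feature set, hyperparameters, and the `household` series are illustrative assumptions, and the concept-drift-aware “resilient LGBM” variant is not reproduced here.

```python
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

def make_features(load: pd.Series) -> pd.DataFrame:
    """Build lag and calendar features for day-ahead forecasting of an hourly series."""
    df = pd.DataFrame({"load": load})
    df["lag_24"] = load.shift(24)     # same hour, previous day
    df["lag_168"] = load.shift(168)   # same hour, previous week
    df["hour"] = load.index.hour
    df["dayofweek"] = load.index.dayofweek
    return df.dropna()

def day_ahead_rmse(household: pd.Series, split: str) -> float:
    """Train on data before `split`, forecast after it, and report RMSE."""
    df = make_features(household)
    train, test = df[df.index < split], df[df.index >= split]
    model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(train.drop(columns="load"), train["load"])
    pred = model.predict(test.drop(columns="load"))
    return float(np.sqrt(mean_squared_error(test["load"], pred)))

# `household` is assumed to be an hourly, DatetimeIndex-ed pd.Series of consumption (kWh),
# e.g. day_ahead_rmse(household, split="2020-03-14")
```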

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 105
368 An Investigation into Why Very Few Small Start-Up Businesses Survive for Longer Than Three Years: An Explanatory Study in the Context of Saudi Arabia

Authors: Motaz Alsolaim

Abstract:

Nowadays, the challenges of running a start-up can be very complex and are perhaps more difficult than at any other time in the past. Changes in technology, manufacturing innovation, and product development, combined with intense competition and market regulations, are factors that have put pressure on classic ways of managing firms, thereby forcing change. As a result, the rate of closure, exit, or discontinuation of start-ups and young businesses is very high. Despite the essential role of small firms in an economy, they still tend to face obstacles that exert a negative influence on their performance and rate of survival. In fact, it is not easy to determine with any certainty the reasons why small firms fail; failure itself is not clearly defined, and its exact causes are hard to diagnose. In the current study, therefore, the barriers to survival are covered more broadly, especially personal/entrepreneurial, enterprise, and environmental factors, with regard to various possible reasons for this failure, in order to determine the best solutions and make appropriate recommendations. Methodology: It could be argued that mixed methods might help to improve entrepreneurship research by addressing challenges emphasized in previous studies and by achieving triangulation. Calls for the combined use of quantitative and qualitative research have also been made in the entrepreneurship field, since entrepreneurship is a multi-faceted area of research. Therefore, an explanatory sequential mixed-methods design was used, with an online questionnaire survey of entrepreneurs followed by semi-structured interviews. Over 750 surveys were collected, of which 296 were accepted as valid, followed by 13 interviews with senior government officials, businessmen, successful entrepreneurs, and unsuccessful entrepreneurs. Findings: The first-phase (quantitative) findings show the obstacles to survival. Among the personal/entrepreneurial factors, past work experience and lack of skills and interest are positive factors, while the gender, age, and education level of the owner are negative factors. Internal factors such as lack of marketing research and weak business planning are positive. Among the environmental factors, from the economic perspective, difficulty in finding labour, and from the socio-cultural perspective, social restrictions and traditions, were found to be negative factors. On the other hand, from the political perspective, the cost of compliance and insufficient government plans were found to be positive factors for small business failure. From the infrastructure perspective, lack of skilled labour, a high level of bureaucracy, and lack of information are positive factors. Conclusion: This paper serves to enrich the understanding of failure factors in the MENA region, and more precisely in Saudi Arabia, with the aim of minimizing the probability of failure of small and micro entrepreneurial start-ups, in light of the Saudi government’s Vision 2030 plan.

Keywords: small business barriers, start-up business, entrepreneurship, Saudi Arabia

Procedia PDF Downloads 177
367 An Unusual Manifestation of Spirituality: Kamppi Chapel of Helsinki

Authors: Emine Umran Topcu

Abstract:

In both urban design and architecture, the primary goal is considered to be looking for ways in which people feel and think about space and place. Humans, in general, see place as security and space as freedom: we feel attached to place and long for space. Contemporary urban design manifests itself by addressing basic physical and psychological human needs, but not much attention is paid to transcendence; there seems to be a gap in the hierarchy of human needs. Usually, the social aspects of public space are addressed through urban design, while the more personal and intimately scaled needs of the individual are neglected. How does built form contribute to an individual’s growth, contemplation, and exploration, in other words, to a greater meaning in the immediate environment? Architects love to talk about meaning, poetics, attachment, and other ethereal aspects of space that are not visible attributes of places. This paper aims to describe spirituality through built form with a personal experience of the Kamppi Chapel of Helsinki. Experience covers the various modes through which a person unfolds or constructs reality; perception, sensation, emotion, and thought can be counted among these modes. To experience is to get to know: what can be known is a construct of experience. Feelings and thoughts about space and place are very complex in human beings; they grow out of life experiences. The author had the chance to visit the Kamppi Chapel in April 2017, out of which this experience grew. The Kamppi Chapel is located on the south side of the busy Narinkka Square in central Helsinki. It offers a place to quiet down and compose oneself in a most lively urban space. With its curved wooden facade, the small building looks more like a museum than a chapel; it could be called a museum for contemplation. With its gently shaped interior, it embraces visitors and shields them from the hustle and bustle of the city outside. Places of worship in all faiths signify sacred power. The author, having origins in a part of the world where domes and minarets dominate the cityscape, was impressed by the size and the architectural visibility of the Chapel. Anyone born and trained in such a tradition shares the inherent values and psychological mechanisms of spirituality, sacredness, and the modest realities of their environment. Spirituality in all cultural traditions has not been fully analyzed and reinterpreted within new conceptual frameworks. Fundamentalists may reject this positivist attitude, but the Kamppi Chapel, as it stands, does not seem to claim, “I am a model to be followed”. It simply faces the task of representing a religious facility in an urban setting largely shaped by modern urban planning, which seems to the author to be looking for a new definition of individual status. The quest between the established and the new is the demand for modern efficiency versus dogmatic rigidity. The architecture here has played a very promising and rewarding role for spirituality. The designers have acted as translators of the human desire for a better life and an aesthetic environment, to the optimal satisfaction of local citizens and visitors alike.

Keywords: architecture, Kamppi Chapel, spirituality, urban

Procedia PDF Downloads 182
366 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points of discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today’s society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of infrastructure failure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with the complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
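
As a concrete illustration of the GMDH building block mentioned above, the following is a minimal sketch that fits one GMDH partial description (an Ivakhnenko polynomial of two inputs) by least squares; the input variables and data are purely hypothetical and do not come from the study.

```python
import numpy as np

def gmdh_partial_model(x1, x2, y):
    """Least-squares fit of one GMDH partial description (Ivakhnenko polynomial):
    y ≈ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Hypothetical risk-factor data: x1, x2 stand in for normalized near-miss indicators
rng = np.random.default_rng(0)
x1, x2 = rng.random(200), rng.random(200)
y = 0.3 + 0.5 * x1 * x2 + 0.2 * x2**2 + 0.01 * rng.standard_normal(200)
print(gmdh_partial_model(x1, x2, y))
```

In a full GMDH layer, many such two-input partial models are fitted and the best are kept and combined, which is how the optimal model configuration mentioned above is selected.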

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
365 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods; this ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
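
Since catchment calibration above is scored with the Kling-Gupta Efficiency, here is a minimal sketch of the original KGE formulation (Gupta et al., 2009) computed from paired simulated and observed discharge series; the short arrays are hypothetical placeholder values, not Cuneo data.

```python
import numpy as np

def kling_gupta_efficiency(sim: np.ndarray, obs: np.ndarray) -> float:
    """Original Kling-Gupta Efficiency:
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# Hypothetical daily discharge series (m^3/s); a perfect simulation gives KGE = 1
obs = np.array([12.0, 15.5, 9.8, 20.1, 18.3, 7.6])
sim = np.array([11.4, 16.0, 10.5, 19.2, 17.1, 8.0])
print(f"KGE = {kling_gupta_efficiency(sim, obs):.3f}")
```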

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 41
364 Comparative Analysis of Smart City Development: Assessing the Resilience and Technological Advancement in Singapore and Bucharest

Authors: Sînziana Iancu

Abstract:

In an era marked by rapid urbanization and technological advancement, the concept of smart cities has emerged as a pivotal solution to address the complex challenges faced by urban centres. As cities strive to enhance the quality of life for their residents, the development of smart cities has gained prominence. This study embarks on a comparative analysis of two distinct smart city models, Singapore and Bucharest, to assess their resilience and technological advancements. The significance of this study lies in its potential to provide valuable insights into the strategies, strengths, and areas of improvement in smart city development, ultimately contributing to the advancement of urban planning and sustainability. Methodologies: This comparative study employs a multifaceted approach to comprehensively analyse smart city development in Singapore and Bucharest: * Comparative Analysis: A systematic comparison of the two cities is conducted, focusing on key smart city indicators, including digital infrastructure, integrated public services, urban planning and sustainability, transportation and mobility, environmental monitoring, safety and security, innovation and economic resilience, and community engagement; * Case Studies: In-depth case studies are conducted to delve into specific smart city projects and initiatives in both cities, providing real-world examples of their successes and challenges; * Data Analysis: Official reports, statistical data, and relevant publications are analysed to gather quantitative insights into various aspects of smart city development. Major Findings: Through a comprehensive analysis of Singapore and Bucharest's smart city development, the study yields the following major findings: * Singapore excels in digital infrastructure, integrated public services, safety, and innovation, showcasing a high level of resilience across these domains; * Bucharest is in the early stages of smart city development, with notable potential for growth in digital infrastructure and community engagement; * Both cities exhibit a commitment to sustainable urban planning and environmental monitoring, with room for improvement in integrating these aspects into everyday life; * Transportation and mobility solutions are a priority for both cities, with Singapore having a more advanced system, while Bucharest is actively working on improving its transportation infrastructure; * Community engagement, while important, requires further attention in both cities to enhance the inclusivity of smart city initiatives. Conclusion: In conclusion, this study serves as a valuable resource for urban planners, policymakers, and stakeholders in understanding the nuances of smart city development and resilience. While Singapore stands as a beacon of success in various smart city indicators, Bucharest demonstrates potential and a willingness to adapt and grow in this domain. As cities worldwide embark on their smart city journeys, the lessons learned from Singapore and Bucharest provide invaluable insights into the path toward urban sustainability and resilience in the digital age.

Keywords: Bucharest, resilience, Singapore, smart city

Procedia PDF Downloads 69
363 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)

Authors: Aliya K. Salahova

Abstract:

Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While the development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and formative experiment exerted a positive impact on enhancing scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.

Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study

Procedia PDF Downloads 61
362 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement

Authors: Beth Jacobs

Abstract:

The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few people are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember Buddha’s teaching. The Abhidharma delineates the first psychological system of Buddhism, how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion and cosmological causation. While the Abhidharma is technical, elaborate and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, the methodology rests on understanding underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates this to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it’s eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within. 'Cittas' are extensively described in the Abhidharma as the atomic-level unit of a raw action of consciousness touching upon an object in a field, and there are 121 types of cittas categorized. The cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, citta are categorized by dimensions of purification. A path is a chain of citta developing through causes and conditions. There are no selves, no pronouns in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well being.

Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome

Procedia PDF Downloads 213
361 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
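
To make the contrast above concrete, here is a minimal sketch of one classic attractor-network memory, a Hopfield-style network in which stored patterns are fixed points of the update dynamics rather than entries in an addressable array; this is an illustrative toy assumed for exposition, not the specific model discussed by the author.

```python
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weight matrix that makes +/-1 patterns attractor states."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W: np.ndarray, probe: np.ndarray, sweeps: int = 20) -> np.ndarray:
    """Asynchronous updates descend the network energy toward a fixed point."""
    state = probe.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, 1], [-1, -1, 1, 1, -1, 1]])
W = hebbian_weights(patterns)
noisy = np.array([1, 1, 1, -1, 1, 1])   # corrupted version of the first pattern
print(recall(W, noisy))                 # settles back onto a stored pattern
```

Remembering here is relaxation of the whole network into a stable state, not retrieval of a stored element, which is the sense in which such models avoid storage in principle.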

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 271
360 Integration of Instructional System Design for Students' Learning Achievement Motives and Science Attitudes with the STEM Educational Model on the Stoichiometry Issue in Chemistry Classes with Different Genders

Authors: Tiptunya Duangsri, Panwilai Chomchid, Natchanok Jansawang

Abstract:

This research study investigated educational decisions about which parts of chemistry should be passed on to future generations as obligatory for all members of a chemistry class and for students preparing themselves for specialised roles. Descriptions of instructional design are provided, and recent criticisms are discussed. The study outlines an integrative framework for the description of information, with the instructional design model giving structure to negotiating a semblance of conscious understanding. The aims of this study were to describe the instructional design model and to compare, between genders, the effects on students' STEM educational learning achievement motives, their science attitudes, and their logical thinking abilities, with a sample of 18 students at the 11th grade level selected by the cluster random sampling technique at Mahawichanukul School. The chemistry learning environment was administered with the STEM education method. Five lesson instructional plans were developed as the instructional innovations, and the 30-item Logical Thinking Test (LTT) with five scales, namely Inference, Recognition of Assumptions, Deduction, Interpretation, and Evaluation, was used. Students' perceptions were assessed with the Test of Chemistry-Related Attitude (TOCRA) to measure their science attitudes toward chemistry. Validity was checked using the Index of Objective Congruence (IOC) by five expert specialist educators for the two target chemistry classrooms in STEM education, and the E1/E2 efficiency process yielded 84.05/81.42, which is higher than the 80/80 standard criterion. Comparisons between students' learning achievement motives with the STEM educational model on the stoichiometry issue in chemistry classes with different genders were differentiated at the .05 level of significance. For associations between students' learning achievement motives on their posttest outcomes and their logical thinking abilities, the predictive efficiency (R²) values indicate that 69% and 70% of the variance in logical thinking abilities was accounted for in the male and female student groups, respectively. The predictive efficiency (R²) values indicate that 73% and 74% of the variance in science attitudes toward chemistry was accounted for in the male and female student groups, respectively. Associations between students' perceptions of their chemistry learning classroom environment and their science attitudes toward chemistry, assessed using the MCI and TOCRA, were statistically significant; the predictive efficiency (R²) values indicate that 72% and 74% of the variance in classroom climate was accounted for in the male and female student groups, respectively. It is suggested that supporting chemistry and science teachers of science, technology, engineering, and mathematics (STEM) in addressing complex teaching and learning issues related to instructional design, in order to develop, teach, and assess, is an important strategy with a focus on the STEM education instructional method.

Keywords: development, the instructional design model, students learning achievement motives, science attitudes with STEM educational model, stoichiometry issue, chemistry classes, genders

Procedia PDF Downloads 275
359 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia

Authors: Kaaryn M. Cater

Abstract:

Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning-related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-test known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of the study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder that is found in approximately 10% of the population and has a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex, phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose scores on any (or some) of the three questionnaires suggest a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the QuestionPro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors between SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years of the study until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.
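
A minimal sketch of the baseline correlation analysis described above, assuming the three questionnaire totals are collected per participant; the column names and numbers are hypothetical placeholders, not study data.

```python
import pandas as pd

# Hypothetical per-participant questionnaire totals (column names assumed)
df = pd.DataFrame({
    "hsps_score":     [18, 22, 9, 14, 25, 11, 20, 7],   # Highly Sensitive Person Scale
    "dyslexia_score": [12, 15, 5, 9, 17, 6, 14, 4],     # IDA adult self-assessment
    "irlen_score":    [10, 13, 4, 8, 16, 5, 12, 3],     # adapted Irlen indicator
})

# Pairwise Pearson correlations between the three measures
print(df.corr(method="pearson"))
```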

Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)

Procedia PDF Downloads 245
358 Polar Bears in Antarctica: An Analysis of Treaty Barriers

Authors: Madison Hall

Abstract:

The Assisted Colonization of polar bears to Antarctica requires a careful analysis of treaties to understand existing legal barriers to Ursus maritimus transport and movement. An absence of land-based migration routes prevents polar bears from accessing southern polar regions on their own. This lack of access is compounded by current treaties which limit human intervention and assistance to ford these physical and legal barriers. In a time of massive planetary extinctions, Assisted Colonization posits that certain endangered species may be prime candidates for relocation to hospitable environments to which they have never previously had access. By analyzing existing treaties, this paper will examine how polar bears are limited in movement by humankind’s legal barriers. International treaties may be considered codified reflections of anthropocentric values and of the best knowledge and understanding of an identified problem at a set point in time, as understood through the human lens. Even as human social values and scientific insights evolve, so too must the treaties that specify legal frameworks and structures impacting keystone species and related biomes. Due to costs and other myriad difficulties, only a very select number of species will be given this opportunity. While some species move into new regions and are then deemed invasive, Assisted Colonization considers that some assistance may be mandated due to the nature of humankind’s role in climate change. This moral question and ethical imperative, set against the backdrop of escalating climate impacts, drives the inquiry forward: what is the potential for successfully relocating a select handful of charismatic and ecologically important life forms? Is it possible to reimagine a different, but balanced, Antarctic ecosystem? Listed as threatened under the U.S. Endangered Species Act as a result of the ongoing loss of critical habitat to melting sea ice, polar bears have limited options for long-term survival in the wild. Our current regime for safeguarding animals facing extinction frequently utilizes zoos and their breeding programs to keep alive the genetic diversity of the species until some future time when reintroduction, somewhere, may be attempted. By exploring the potential for polar bears to be relocated to Antarctica, we must analyze the complex ethical, legal, political, financial, and biological realms, which are the backdrop to framing all questions in this arena. Can we do it? Should we do it? By utilizing an environmental ethics perspective, we propose that the Ecological Commons of the Arctic and Antarctic should not be viewed solely through the lens of human resource management needs. From this perspective, polar bears do not need our permission; they need our assistance. Antarctica therefore represents a second, if imperfect, chance to buy time for polar bears, in a world where polar regimes, not yet fully understood, are themselves quickly changing as a result of climate change.

Keywords: polar bear, climate change, environmental ethics, Arctic, Antarctica, assisted colonization, treaty

Procedia PDF Downloads 421
357 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis

Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin

Abstract:

Molecular docking approaches are widely used for the design of new antibiotics and the modeling of antibacterial activities of numerous ligands which bind specifically to active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have been identified already, and it is getting more and more difficult to design innovative antibacterial compounds to combat the drug resistance. A promising way to overcome the drug resistance problem is the induction of reversion of drug resistance by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants that reduce the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach is based on an analysis of the population genetics of Mycobacterium tuberculosis and on results of experimental modeling of the drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter drug is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials and also from laboratory animals infected with an MDR-TB strain were characterized for antibiotic resistance, and their genomes were sequenced by the paired-end Illumina HiSeq 2000 technology. A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in a series of isolates treated with FS-1 was registered despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and removal of the drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the Mtb population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed identification of target proteins, which could be influenced by supplementary drugs to increase the fitness cost of the drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for computer-based design of new drugs to combat drug-resistant infections.
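
As a rough illustration of the kind of pairwise association test used in such a search, the sketch below computes standard linkage disequilibrium statistics (D, D', r²) between two binary variant calls; the variant names and presence/absence patterns are hypothetical, not data from the study.

```python
# Illustrative sketch only: pairwise linkage disequilibrium (D, D', r^2)
# between a canonical drug-resistance mutation and a candidate compensatory
# variant, coded as presence/absence (1/0) across a set of sequenced isolates.
import numpy as np

def linkage_disequilibrium(a, b):
    """a, b: binary arrays (variant present = 1) over the same isolates."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pA, pB = a.mean(), b.mean()
    pAB = np.mean(a * b)                       # frequency of the joint AB pattern
    D = pAB - pA * pB                          # deviation from independence
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = D / Dmax if Dmax > 0 else 0.0
    denom = pA * (1 - pA) * pB * (1 - pB)
    r2 = D ** 2 / denom if denom > 0 else 0.0
    return D, d_prime, r2

# Hypothetical presence/absence calls for 12 isolates (illustration only)
resistance_mut = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1]
candidate_var  = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]
print(linkage_disequilibrium(resistance_mut, candidate_var))
```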

Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis

Procedia PDF Downloads 263
356 Multiscale Modelization of Multilayered Bi-Dimensional Soils

Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur

Abstract:

Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter interacting with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters like the Root Mean Square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function, and as a consequence, backscattering models have often failed to correctly predict backscattering. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the second is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more accurately. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model based on the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface. These three layers were modeled by an effective medium characterized by an apparent effective dielectric constant taking into account the presence of air pockets in the soil. We adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the backscattering coefficient's dependence on multiscale roughness and soil moisture has been performed. Later, we proposed modifying the dielectric constant of the multilayer medium to take into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we proposed to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
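
The multiscale surface description can be illustrated with a short sketch: a placeholder height map is decomposed with the 2D discrete wavelet transform (Mallat algorithm) using the PyWavelets library, and per-scale detail energies are reported. The surface, wavelet choice ('db2') and decomposition depth are assumptions for illustration, not the parameters used in the study.

```python
# Sketch under assumptions: 2D multilevel wavelet decomposition of a surface
# height map, from which per-scale roughness energies can be read off.
import numpy as np
import pywt

rng = np.random.default_rng(1)
surface = rng.normal(0.0, 1.0, (256, 256))   # placeholder height map (RMS = 1)

# Multilevel 2D DWT (Mallat algorithm): approximation + detail sub-bands per scale
coeffs = pywt.wavedec2(surface, wavelet='db2', level=4)
approx = coeffs[0]
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    energy = float(np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))
    print(f"detail level {lvl}: energy = {energy:.1f}")

# The per-scale detail energies give a roughness spectrum from which the two
# multiscale parameters (an RMS-height factor and a fractal-dimension-related
# slope) could, in principle, be estimated.
```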

Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets

Procedia PDF Downloads 125
355 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
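
A toy sketch of the alternating idea (not the authors' released R implementation) is given below: best-matching units are found using only the observed dimensions, Kohonen updates use the currently imputed data, and missing entries are then re-imputed from their best-matching prototypes. All sizes, rates and the missingness fraction are illustrative assumptions.

```python
# Toy sketch: alternate Kohonen updates with imputation of missing entries,
# using distances computed on observed dimensions only.
import numpy as np

rng = np.random.default_rng(42)
n, d, grid = 300, 5, (6, 6)                      # samples, features, map size
X = rng.normal(size=(n, d))
mask = rng.random((n, d)) < 0.15                 # ~15% of entries missing
X_obs = np.where(mask, np.nan, X)

nodes = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
W = rng.normal(size=(len(nodes), d))             # codebook initialisation

def bmu(x, W):
    obs = ~np.isnan(x)                           # compare on observed dims only
    dist = np.sum((W[:, obs] - x[obs]) ** 2, axis=1)
    return int(np.argmin(dist))

X_imp = np.where(np.isnan(X_obs), np.nanmean(X_obs, axis=0), X_obs)  # initial fill
for epoch in range(20):
    sigma = 2.0 * np.exp(-epoch / 10)            # shrinking neighbourhood width
    lr = 0.5 * np.exp(-epoch / 10)               # shrinking learning rate
    for i in rng.permutation(n):
        k = bmu(X_obs[i], W)
        h = np.exp(-np.sum((nodes - nodes[k]) ** 2, axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (X_imp[i] - W)    # Kohonen update toward the sample
    for i in range(n):                           # imputation step from BMU prototypes
        k = bmu(X_obs[i], W)
        miss = np.isnan(X_obs[i])
        X_imp[i, miss] = W[k, miss]

print("mean absolute imputation error:", np.mean(np.abs(X_imp[mask] - X[mask])))
```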

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 151
354 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?

Authors: H. M. Ross-McAlpine

Abstract:

Much of what is written on diplomacy, globalization and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power and a changing geopolitical order, small states have the ability to exercise a greater influence than ever before. Increasingly interdependent and ever complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand’s diplomatic relations. This shift is arguably a direct result of globalization, regionalism and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment and technology is an essential driving force for New Zealand’s diplomatic relations. A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence that New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, subsequently requiring smaller states to use other means for securing their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization.

Keywords: diplomacy, foreign policy, globalisation, small state

Procedia PDF Downloads 396
353 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback

Authors: Yael Neumann

Abstract:

Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe), 3, and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participants performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, which measures switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
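
For illustration, the kind of quantity such a neurofeedback game tracks can be sketched as a theta-to-beta power ratio estimated from a short EEG segment; the sampling rate, band limits and synthetic signal below are assumptions, not details of the Play Attention system.

```python
# Illustrative sketch only: estimating a theta-to-beta power ratio from a
# synthetic EEG segment using Welch's power spectral density estimate.
import numpy as np
from scipy.signal import welch

fs = 256                                          # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(7)
# synthetic signal: a theta (6 Hz) and a beta (20 Hz) component plus noise
eeg = (8 * np.sin(2 * np.pi * 6 * t)
       + 3 * np.sin(2 * np.pi * 20 * t)
       + rng.normal(0, 2, t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return psd[sel].sum() * (freqs[1] - freqs[0])  # integrate PSD over the band

theta = band_power(freqs, psd, 4, 8)
beta = band_power(freqs, psd, 13, 30)
print(f"theta/beta ratio = {theta / beta:.2f}")    # feedback would aim to lower this
```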

Keywords: attention, language, cognitive rehabilitation, neurofeedback

Procedia PDF Downloads 17
352 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; consequently, the use of such a communication model is an important method in the construction of high-performance communication. SystemC has been selected because it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication created through the modelling of the CSMA protocol can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the real performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by identifying the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of transmission range on V2X communication. The evaluation of V2I and V2V communication takes into account the real effects of low and high mobility on transmission. Multi-agent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, because it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of various protocols used in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of those protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
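
The contention behaviour that a CSMA model captures can be illustrated with a deliberately simplified, slotted p-persistent simulation in Python (not the SystemC/SoC model described above); the number of agents, transmission probability and slot count are arbitrary.

```python
# Very simplified sketch: slotted p-persistent CSMA, where several agents
# contend for one shared channel and we count successes and collisions.
import random

random.seed(3)
N_AGENTS, SLOTS, P_TRANSMIT = 8, 10_000, 0.1

success = collisions = 0
backlog = [True] * N_AGENTS          # assume every agent always has a frame to send

for _ in range(SLOTS):
    # each backlogged agent senses the (idle) slot and transmits with probability p
    transmitters = [i for i in range(N_AGENTS)
                    if backlog[i] and random.random() < P_TRANSMIT]
    if len(transmitters) == 1:
        success += 1                 # exactly one sender: the frame gets through
    elif len(transmitters) > 1:
        collisions += 1              # more than one sender: collision on the medium

print(f"throughput = {success / SLOTS:.3f} frames/slot, "
      f"collision rate = {collisions / SLOTS:.3f}")
```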

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 25
351 Exploring the Impact of Mobility-Related Treatments (Drug and Non-Pharmacological) on Independence and Wellbeing in Parkinson’s Disease - A Qualitative Synthesis

Authors: Cameron Wilson, Megan Hanrahan, Katie Brittain, Riona McArdle, Alison Keogh, Lynn Rochester

Abstract:

Background: The loss of mobility and functional independence is a significant marker in the progression of neurodegenerative diseases such as Parkinson’s Disease (PD). Pharmacological, surgical, and therapeutic treatments are available that can help in the management and amelioration of PD symptoms; however, these only delay more severe symptoms. Accordingly, ensuring people with PD can maintain independence and a healthy wellbeing is essential in establishing an effective treatment option for those afflicted. Existing literature reviews have examined experiences of engaging with PD treatment options and the impact of PD on independence and wellbeing. However, the literature fails to explore the influence of treatment options on independence and wellbeing and therefore misses what people value in their treatment. This review is the first to synthesise the impact of mobility-related treatments on independence and wellbeing in people with PD and their carers, offering recommendations for clinical practice and providing a conceptual framework (in development) for future research and practice. Objectives: To explore the impact of mobility-related treatment (both pharmacological and non-pharmacological) on the independence and wellbeing of people with PD and their carers. To propose a conceptual framework for patients, carers and clinicians which captures the qualities people with PD value as part of their treatment. Methods: We performed a critical interpretive synthesis of qualitative evidence, searching six databases for reports that explored the impact of mobility-related treatments (both drug and non-pharmacological) on independence and wellbeing in Parkinson’s Disease. The types of treatment included medication (Levodopa and Amantadine), dance classes, Deep-Brain Stimulation, aquatic therapies, physical rehabilitation, balance training and foetal transplantation. Data were extracted, and quality was assessed using an adapted version of the NICE Quality Appraisal Tool Appendix H, before being synthesised according to the critical interpretive synthesis framework and meta-ethnography process. Results: From 2301 records, 28 were eligible. The experiences and impact of the treatment pathway on independence and wellbeing were similar across all types of treatment and are described by five inter-related themes: (i) desire to maintain independence, (ii) treatment as a social experience during and after, (iii) medication to strengthen emotional health, (iv) recognising physical capacity and (v) emphasising the personal journey of Parkinson’s treatments. Conclusion: There is a complex and inter-related experience and effect of PD treatments common across all types of treatment. The proposed conceptual framework (in development) provides patients, carers, and clinicians with recommendations to personalise the delivery of PD treatment, thereby potentially improving adherence and effectiveness. This work is vital to disseminate as PD treatment transitions from subjective and clinically captured assessments to a more personalised process supplemented by wearable technology.

Keywords: parkinson's disease, medication, treatment, dance, review, healthcare, delivery, levodopa, social, emotional, psychological, personalised healthcare

Procedia PDF Downloads 89
350 Development of Alternative Fuels Technologies for Transportation

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Currently, vehicles in automotive transport are powered almost exclusively by hydrocarbon-based fuels. Due to the increase in hydrocarbon fuel consumption, quality parameters are being tightened to protect the environment. At the same time, efforts are underway to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are to increase vehicle efficiency, reduce environmental impact, cut greenhouse gas emissions, and save limited oil resources. Significant progress has been made on the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME) and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by the distillation of plant products, whose diversion from use as food can be questionable. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plant material releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such schemes have been proposed in the literature; for now, going through intermediate products is inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol uses natural gas as a raw material but requires expensive and advanced production processes. In terms of pollutant emissions, the optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. Production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents only a small percentage. Its potential as an alternative to traditional fuels is therefore proportionately reduced. Biogas can also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since similar production processes and raw materials are used. The most important fuel in the campaign to protect the environment against pollution is natural gas. Natural gas as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production via steam reforming. Hydrogen can be used as a basic starting material for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas can be used as CNG, which represents an excellent compromise: the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.

Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)

Procedia PDF Downloads 181
349 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven approach reported in the literature for contact force reconstruction in flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to the simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to roughly 1/180 of that of software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be further reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
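
As a generic illustration of force reconstruction posed as a linear inverse problem (not the specific model-driven method or its FPGA mapping), the sketch below recovers per-taxel force vectors from simulated normal-stress readings by regularised least squares; the sensor model matrix, noise level and regularisation weight are assumptions.

```python
# Generic sketch only: reconstructing per-taxel contact force vectors from
# normal-stress readings by solving a regularised least-squares problem s = A f.
import numpy as np

rng = np.random.default_rng(5)
n_taxels, n_readings = 100, 100                    # e.g. a 10x10 taxel array
# assumed linear sensor model: stress response to (fx, fy, fz) at each taxel
A = rng.normal(size=(n_readings, 3 * n_taxels))

f_true = np.zeros(3 * n_taxels)
f_true[3 * 45:3 * 45 + 3] = [0.2, -0.1, 1.0]       # one loaded taxel (tangential + normal)
s = A @ f_true + rng.normal(0, 0.01, n_readings)   # noisy normal-stress measurements

# Tikhonov-regularised solution: f = argmin ||A f - s||^2 + lam ||f||^2
lam = 1e-2
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ s)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative reconstruction error: {err:.2%}")
```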

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 195
348 Computational and Experimental Study of the Mechanics of Heart Tube Formation in the Chick Embryo

Authors: Hadi S. Hosseini, Larry A. Taber

Abstract:

In the embryo, the heart is initially a simple tubular structure that undergoes complex morphological changes as it transforms into a four-chambered pump. This work focuses on the mechanisms that create the heart tube (HT). The early embryo is composed of three relatively flat primary germ layers called endoderm, mesoderm, and ectoderm. Precardiac cells located within bilateral regions of the mesoderm called heart fields (HFs) fold and fuse along the embryonic midline to create the HT. The right and left halves of this plate fold symmetrically to bring their upper edges into contact along the midline, where they fuse. In a region near the fusion line, these layers then separate to generate the primitive HT and foregut, which then extend vertically. The anterior intestinal portal (AIP) is the opening at the caudal end of the foregut, which descends as the HT lengthens. The biomechanical mechanisms that drive this folding are poorly understood. Our central hypothesis is that folding is caused by differences in growth between the endoderm and mesoderm, while subsequent extension is driven by contraction along the AIP. The feasibility of this hypothesis is examined using experiments with chick embryos and finite-element modeling (FEM). Fertilized white Leghorn chicken eggs were incubated for approximately 22-33 hours until the appropriate Hamburger and Hamilton stage (HH5 to HH9) was reached. To inhibit contraction, embryos were cultured in media containing blebbistatin (a myosin II inhibitor) for 18 h. Three-dimensional models were created using ABAQUS (D. S. Simulia). Tissue was modeled as a nonlinear elastic material, with growth and contraction (negative growth) simulated using a theory in which the total deformation gradient is given by F = F*·G, where G is the growth tensor and F* is the elastic deformation gradient tensor. In embryos exposed to blebbistatin, initial folding and AIP descension occurred normally. However, after the HFs partially fused to create the upper part of the HT, fusion and AIP descension stopped, and the HT failed to lengthen. These results suggest that cytoskeletal contraction is required only for the later stages of HT formation. In the model, a larger biaxial growth rate in the mesoderm compared to the endoderm causes the bilayered plate to bend ventrally, as the upper edge moves toward the midline, where it 'fuses' with the other half. This folding creates the upper section of the HT, as well as the foregut pocket bordered by the AIP. After this phase is complete by stage HH7, contraction along the arch-shaped AIP pulls the lower edge of the plate downward, stretching the two layers. Results given by the model are in reasonable agreement with experimental data for the shape of the HT, as well as patterns of stress and strain. In conclusion, the results of our study support our hypothesis for the creation of the heart tube.
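
The growth decomposition used in the model can be illustrated numerically: given a total deformation gradient F and a prescribed growth tensor G, the elastic part F* = F·G⁻¹ and an elastic strain measure follow directly. The tensor values below are illustrative placeholders, not those of the chick embryo model.

```python
# Minimal sketch of the decomposition F = F* · G: recover the elastic part of
# the deformation and a Green-Lagrange elastic strain from illustrative tensors.
import numpy as np

F = np.array([[1.25, 0.05, 0.0],        # total deformation gradient (assumed)
              [0.02, 1.10, 0.0],
              [0.0,  0.0,  1.00]])
G = np.diag([1.15, 1.05, 1.00])         # growth tensor: faster in-plane growth

F_elastic = F @ np.linalg.inv(G)        # F* = F G^{-1}
C = F_elastic.T @ F_elastic             # right Cauchy-Green tensor of the elastic part
E = 0.5 * (C - np.eye(3))               # Green-Lagrange elastic strain

print("elastic deformation gradient F*:\n", np.round(F_elastic, 3))
print("elastic Green-Lagrange strain E:\n", np.round(E, 4))
```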

Keywords: heart tube formation, FEM, chick embryo, biomechanics

Procedia PDF Downloads 296
347 A Protocol Study of Accessibility: Physician’s Perspective Regarding Disability and Continuum of Care

Authors: Sidra Jawed

Abstract:

Accessibility constructs and the body-privilege discourse have been a major problem in dealing with health inequities and inaccessibility. The inherent problem in this arbitrary view of disability is the assumption that disability can never be a productive way of living. For the past thirty years, disability activists have been working to differentiate ‘impairment’ from ‘disability’ and probing for a better understanding of the limitations imposed by society; this notion is ultimately known as the Social Model of Disability. The disability community, a vulnerable population, remains marginalized and is seen relentlessly fighting to highlight the importance of social factors. Accessibility does not only comprise physical architectural barriers and the famous blue symbol of access to healthcare, but also invisible, intangible barriers such as attitudes and behaviours. Conventionally, the idea of ‘disability’ has been laden with prejudiced perceptions amalgamated with biased attitudes. Equity in the contemporary setup necessitates restructuring of the organizational structure. Apparently simple, the complex interplay of disability and the contemporary healthcare setup often ends up negotiating vital components of basic healthcare needs. The role of society is indispensable when it comes to people with disability (PWD); everything from access to healthcare to timely interventions is strongly related to the setup in place and the attitude of healthcare providers. It is vital to understand the association between assumptions and the quality of healthcare PWD receive in our global healthcare setup. Most of the time, the crucial physician-patient relationship with PWD is governed by the negative assumptions of the physicians. The multifaceted, troubled patient-physician relationship has been neglected in the past. To compound this, insufficient work has been done to explore physicians’ perspectives on disability and the access to healthcare that PWD currently have. This research project is directed towards physicians’ perspectives on the intersection of health and access to healthcare for PWD. The principal aim of the study is to explore the perception of disability among family medicine physicians, highlighting the underpinnings of the medical perspective in healthcare institutions. In the quest to remove barriers, the first step must be to identify the barriers and formulate a plan for future policies, involving all the stakeholders. Semi-structured interviews will explore themes such as accessibility, medical training, the constructs of the social and medical models of disability, time limitations, and financial constraints. The main research interest is to identify the obstacles to inclusion and the marginalization that extends from basic living necessities to wide health inequity in present society. The physicians’ point of view is largely missing from the research landscape and the current body of knowledge. This research will provide policy makers with a starting point and comprehensive background knowledge that can be a stepping stone for future research and further the knowledge translation process to strengthen healthcare. Additionally, it would facilitate the much-needed process of knowledge translation between the medical and disability communities.

Keywords: disability, physicians, social model, accessibility

Procedia PDF Downloads 222