Search results for: artificial neural network approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19009


1459 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification

Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan

Abstract:

This research study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) for the development of highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is subsequently applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, serving as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in an aqueous suspension of TiO₂ NPs for varying durations. This immersion process yields TiO₂ NP-loaded geopolymer films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), was employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. Finally, the geopolymer-based TiO₂ films were deployed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed photocatalytic performance exceeding that of conventional TiO₂-based photocatalysts. This underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications.
In summary, this study represents a significant advancement in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films exhibit outstanding promise in addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.

Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation

Procedia PDF Downloads 67
1458 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics

Authors: Michael Lousis

Abstract:

This research study is concerned with learners’ errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher’s discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners’ mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators’ intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners’ errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners’ course of thinking that led them to specific observable actions, resulting in particular errors in specific problems, rather than analysing scripts with the students’ thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners’ errors from the perspective of the participants in the investigation. This fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations.
Thus, the belief systems of behaviourism, Piagetian constructivism, and the philosophy-of-science perspective were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm in conjunction with the philosophy of mind provides a sound and legitimate basis for the development of future studies of mathematical error analysis.

Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors

Procedia PDF Downloads 190
1457 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container

Authors: Mohammad R. Jalali

Abstract:

Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or movement induced by transport. Slosh also arises from the resonant motion of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing arising from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular flow kinematic structure in the vertical direction for shallow and deep-water problems. The fluid velocity profile is a finite sum of coefficients, depending on space and time, multiplied by weighting functions. It should be noted that in GN theory, the flow is rotational. In this study, GN numerical simulations of an initial Gaussian hump are compared with Fourier series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution for the overall free surface sloshing patterns.
The resonant free surface motions driven by an initial Gaussian disturbance are obtained by Fast Fourier Transform (FFT) of the free surface elevation time history components. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. These sloshing results are applicable to the design of stable liquefied oil containers in tankers and offshore platforms.
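The FFT step described above can be sketched in a few lines; the snippet below is an illustrative example with an invented synthetic elevation signal and mode frequencies, not the authors' simulation output.

```python
import numpy as np

# Illustrative only: a synthetic free-surface elevation time history built
# from two sloshing modes (frequencies chosen arbitrarily for demonstration).
fs = 100.0                       # sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
eta = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.sin(2 * np.pi * 2.4 * t)

# FFT of the elevation time history; keep the positive-frequency half.
spectrum = np.abs(np.fft.rfft(eta))
freqs = np.fft.rfftfreq(len(eta), d=1.0 / fs)

# The dominant resonant frequency is the largest non-zero-frequency peak.
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"dominant sloshing frequency: {peak:.2f} Hz")
```

With a 20 s record sampled at 100 Hz the frequency resolution is 0.05 Hz, so the 1.2 Hz mode is resolved exactly.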

Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions

Procedia PDF Downloads 398
1456 Reimagining Urban Food Security Through Informality Practices: The Case of Street Food Vending in Johannesburg, South Africa

Authors: Blessings Masuku

Abstract:

This study positions itself within the nascent field of street food vending, which plays a crucial role in addressing urban household food security across the urban landscape of South Africa. The study aimed to understand how various infrastructure systems (i.e., energy, water and sanitation, housing, and transport, among others) intersect with food and urban informality, and how vendors’ and households’ choices and decisions around food are influenced by infrastructure assemblages. This study noted that most of the literature on food security has focused on the rural agricultural sector, with limited attention to urban food security, notably the role of informality practices in addressing urban food insecurity at the household level. This study pays close attention to how informality practices such as street food vending can be used as a catalyst to address urban poverty and household food security and to steer local economies towards sustainable livelihoods for the urban poor who live on the periphery of the city of Johannesburg. This study deconstructs the infrastructure needs of street food vendors, with the aim of understanding how such infrastructure needs intersect with food and with the policy that governs urban informality practices. The study argues that the decisions and choices of informality actors in the city of Johannesburg are chiefly determined by assemblages of infrastructure, including the regulatory frameworks that govern the informal sector. A qualitative approach was used, comprising surveys (open-ended questions), archival research (i.e., policy and other key document reviews), and key interviews, mainly with city officials and informality actors. A thematic analysis was used to analyse the data collected.
This study contributes to greater debates in urban studies and to the burgeoning literature on urban food security in several ways. Firstly, it demonstrates the pivotal role that the informal food sector, notably street food vending, plays within the urban economy in addressing urban poverty and household food security, thereby questioning the conservative perspectives that view the informal sector as a hindrance to a ‘modern city’ and an annoyance in ‘modern’ urban spaces. Secondly, it documents the livelihood and coping strategies of the urban poor who, despite harsh and restrictive regulatory frameworks, devise various agentive ways to generate income and address urban poverty and food insecurity.

Keywords: urban food security, street food vending, informal food sector, infrastructure systems, livelihood strategies, policy framework and governance

Procedia PDF Downloads 63
1455 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization

Authors: Leonnel Mhuka

Abstract:

Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additive manufactured waveguides. Conclusion: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed the intricate 3D structures with submicron resolution. 
The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. In conclusion, this experiment substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.

Keywords: additive manufacturing, microstructured optical waveguides, two-photon polymerization, photonics applications

Procedia PDF Downloads 101
1454 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach

Authors: H. M. Ferreira, H. Cockings, D. F. Gordon

Abstract:

Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. Therefore, it is imperative to evaluate how defects (e.g. inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen induced failure. That evaluation will be performed with a combination of techniques: hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018 and AFGROW modelling), combined with microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments.
A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings to equivalent hydrogen concentrations and effective diffusivity coefficients, not only in the base material but also in the heat affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, a useful FCG modelling software that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results from this research will make it possible to conclude whether the crack growth rate of API 5L X65 accelerates under the influence of a hydrogen-rich environment, an important aspect that needs to be addressed in standards and codes of practice on pipeline integrity evaluation and maintenance.
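FCG modelling tools such as AFGROW build on empirical crack-growth laws; a common one is the Paris law, da/dN = C(ΔK)^m, which can be integrated cycle by cycle. The sketch below is a hypothetical illustration of that idea for a centre crack; the constants and geometry are invented placeholders, not measured API 5L X65 properties or the authors' model.

```python
import math

# Hedged sketch of Paris-law fatigue crack growth, da/dN = C * (dK)^m,
# for a centre crack under constant-amplitude loading. The constants below
# are illustrative placeholders, NOT measured API 5L X65 properties.
C = 1e-11            # Paris coefficient, m/cycle per (MPa*sqrt(m))^m
m = 3.0              # Paris exponent
delta_sigma = 100.0  # applied stress range, MPa
a = 0.001            # initial half crack length, m
a_crit = 0.01        # stop when the crack reaches this size, m

cycles = 0
while a < a_crit:
    # Stress intensity factor range for a centre crack: dK = dS * sqrt(pi * a)
    delta_K = delta_sigma * math.sqrt(math.pi * a)
    a += C * delta_K ** m        # crack growth increment for this cycle
    cycles += 1

print(f"cycles to grow from 1 mm to 10 mm: {cycles}")
```

A hydrogen-rich environment would be represented in such a model by larger effective C (or a modified growth law), shortening the predicted life; quantifying that shift is exactly what the FCG tests above feed into.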

Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel

Procedia PDF Downloads 105
1453 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions

Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden

Abstract:

Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state – comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors – highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union’s Horizon 2020 research and innovation program. It aims to set up a platform to define, develop, test and validate a ‘Safety Tolerance Zone’ to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study and a 6-month study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights.
The i-DREAMS consortium consists of 13 partners, 7 engineering universities and research groups, 4 industry partners and 2 partners (European Transport Safety Council - ETSC - and POLIS cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.

Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety

Procedia PDF Downloads 67
1452 Destroying the Body for the Salvation of the Soul: A Modern Theological Approach

Authors: Angelos Mavropoulos

Abstract:

Apostle Paul repeatedly mentioned the bodily sufferings that he voluntarily went through for Christ, as his body was in chains for the ‘mystery of Christ’ (Col 4:3), while in his flesh he gladly carried the ‘thorn’ and all his pains and weaknesses, which prevented him from being proud (2 Cor 12:7). In his view, God’s power ‘is made perfect in weakness’, and when we are physically weak, this is when we are spiritually strong (2 Cor 12:9-10). In addition, we all bear the death of Jesus in our bodies so that His life can be ‘revealed in our mortal body’ (2 Cor 4:10-11), and if we indeed share in His sufferings, we will share in His glory as well (Rom 8:17). Based on these passages, several Christian writers projected bodily suffering, pain, death, and martyrdom in general as the means to a noble Christian life and the way to attain God. Moreover, Christian tradition is full of instances of voluntary self-harm, mortification of the flesh, and body mutilation for the sake of the soul by pious men and women, as an imitation of Christ’s earthly suffering. It is a fact, therefore, that, for Christianity, he or she who not only endures but even inflicts earthly pains for God is highly appreciated and will be rewarded in the afterlife. Nevertheless, more recently, Gaudium et Spes and Veritatis Splendor decisively and totally overturned the Catholic Church’s view on the matter.
The former characterised the practices that violate ‘the integrity of the human person, such as mutilation, torments inflicted on body or mind’ as ‘infamies’ (Gaudium et Spes, 27), while the latter, after confirming that there are some human acts that are ‘intrinsically evil’, that is, always wrong regardless of ‘the ulterior intentions of the one acting and the circumstances’, included in this category, among others, ‘whatever violates the integrity of the human person, such as mutilation, physical and mental torture and attempts to coerce the spirit.’ ‘All these and the like’, the encyclical concludes, ‘are a disgrace… and are a negation of the honour due to the Creator’ (Veritatis Splendor, 80). For the Catholic Church, therefore, willful bodily sufferings and mutilations infringe human integrity and are intrinsically evil acts, while intentional harm, based on the principle that ‘evil may not be done for the sake of good’, is always unreasonable. On the other hand, many saints who engaged in these practices are still honoured for their ascetic and noble lives, while, even today, similar practices are found, such as the well-known Good Friday self-flagellation and nailing to the cross performed in San Fernando, Philippines. Whether Christians should hurt their body for the salvation of their soul, and how modern theology views these practices, is thus the question this paper will attempt to answer.

Keywords: human body, human soul, torture, pain, salvation

Procedia PDF Downloads 91
1451 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated with dynamic PET. Results: The range of territorial MBF in LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF values estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001). The corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal territories than for normal ones (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < .001). Conclusions: Visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR with dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
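The CFR definition used in this abstract (stress MBF divided by rest MBF per territory) is straightforward to compute; the flow values below are invented illustrations within the reported MBF range (0.44–3.81 ml/min/g), not the study's patient data.

```python
# Coronary flow reserve per vascular territory: CFR = stress MBF / rest MBF.
# The flow values below are illustrative only, not the study's measurements.
rest_mbf = {"LAD": 0.80, "RCA": 0.95, "LCX": 0.90}     # ml/min/g at rest
stress_mbf = {"LAD": 1.10, "RCA": 2.40, "LCX": 2.10}   # ml/min/g at stress

cfr = {territory: stress_mbf[territory] / rest_mbf[territory]
       for territory in rest_mbf}
for territory, value in cfr.items():
    # A CFR well below ~2.0 is commonly taken to suggest impaired reserve.
    print(f"{territory}: CFR = {value:.2f}")
```

In this invented example the LAD territory (CFR ≈ 1.38) would stand out against the RCA and LCX despite all three having plausible-looking stress images, which is the point of a quantitative rather than visual assessment.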

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin

Procedia PDF Downloads 151
1450 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study

Authors: Diaa Ahmed Mohamed Ahmedien

Abstract:

Nowadays, it cannot be denied that the most important interactive art trends have appeared as a result of significant developments in the modern sciences, and holographic art is no exception: it is considered one of the major contemporary interactive art trends in the visual arts. The holographic technique emerged from modern physics in the late 1940s, when Dennis Gabor developed it to improve the quality of electron microscope images; it later reached Margaret Benyon’s art exhibitions and, over more than 70 years, passed through many refinements that enhanced its quality and artistic applications, technically and visually, in the visual arts. As a modest extension of these great efforts, this research aimed to enroll a sample of ordinary people in the Egyptian community in a holographic recording program to record their appreciated objects or antiques, and thereby examine their ability to interact with modern techniques in visual communication arts. The research tried to answer three main questions: 'can we use analog holographic techniques to unleash new theoretical and practical knowledge in interactive arts for the public in the Egyptian community?', 'to what extent can holographic art be familiar to the public and enable them to produce interactive artistic samples?', and 'is it possible to build a holographic interactive program for ordinary people that leads them to enhance their understanding of visual communication and become aware of interactive art trends?'
The first part of this research relied on experimental methods: it was conducted in the laser lab at Cairo University, using an Nd:YAG laser (532 nm) and a holographic optical layout, with selected Egyptian participants who were asked to record an object they appreciated after having learned the recording methods. The second part consisted of discussion panels conducted to discuss the results and how participants felt about their holographic artistic products, through surveys, questionnaires, note-taking, and critiques of the holographic artworks. Our practical experiments and final discussions lead us to say that this experimental research enabled most participants to pass through a paradigm shift in their visual and conceptual experiences towards more interaction with contemporary visual art trends. It thus attempts to emphasize the role of a mature relationship between art, science and technology in spreading interactive arts in our community, through the latest scientific and artistic developments around the world, particularly among those who have never been enrolled in practical art programs before.

Keywords: Egyptian community, holographic art, laser art, visual art

Procedia PDF Downloads 479
1449 Ionic Liquids-Polymer Nanoparticle Systems as Breakthrough Tools to Improve the Leprosy Treatment

Authors: A. Julio, R. Caparica, S. Costa Lima, S. Reis, J. G. Costa, P. Fonte, T. Santos De Almeida

Abstract:

Mycobacterium leprae causes a chronic and infectious disease called leprosy, whose most common symptoms are peripheral neuropathy and deformation of several parts of the body. The pharmacological treatment of leprosy is a combined therapy with three different drugs: rifampicin, clofazimine, and dapsone. However, clofazimine and dapsone have poor solubility in water and also low bioavailability. Thus, it is crucial to develop strategies to overcome such drawbacks. The use of ionic liquids (ILs) may be one strategy to overcome the low solubility, since they have been used as solubility promoters. ILs are salts that are liquid below 100 ºC or even at room temperature and may be placed in water, oils or hydroalcoholic solutions. Another approach may be the encapsulation of drugs into polymeric nanoparticles, which improves their bioavailability. In this study, two different classes of ILs were used as solubility enhancers of the poorly soluble antileprotic drugs: imidazole- and choline-based ionic liquids. After the solubility studies, IL-PLGA nanoparticle hybrid systems were developed to deliver such drugs. First, the solubility of clofazimine and dapsone was studied in water and in water:IL mixtures, at IL concentrations at which cell viability is maintained, at room temperature for 72 hours. For both drugs, an improvement in solubility was observed, and [Cho][Phe] proved to be the best solubility enhancer, especially for clofazimine, for which a 10-fold improvement was observed. Nanoparticles were then produced with a polymeric matrix of poly(lactic-co-glycolic acid) (PLGA) 75:25 by a modified solvent-evaporation W/O/W double emulsion technique in the presence of [Cho][Phe]. The inner phase was an aqueous solution of 0.2 % (v/v) of the above IL with each drug at its maximum solubility as determined in the previous study. After production, the hybrid nanosystem was physicochemically characterized.
The produced nanoparticles had diameters of around 580 nm and 640 nm for clofazimine and dapsone, respectively. The polydispersity index was in agreement with the value recommended for drug delivery systems (around 0.3). The association efficiency (AE) of the developed hybrid nanosystems was promising for both drugs, given their low solubility (64.0 ± 4.0 % for clofazimine and 58.6 ± 10.0 % for dapsone), which suggests the capacity of these delivery systems to enhance the bioavailability and loading of clofazimine and dapsone. Overall, the study’s achievements may improve patients’ quality of life, since they may allow a change in the therapeutic scheme, no longer requiring such high drug doses to obtain a therapeutic effect. The authors would like to thank Fundação para a Ciência e a Tecnologia, Portugal (FCT/MCTES (PIDDAC), UID/DTP/04567/2016-CBIOS/PRUID/BI2/2018).
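For context, association efficiency is commonly estimated indirectly from the unencapsulated drug recovered in the supernatant; the abstract does not state which method was used, so the sketch below is a hypothetical illustration with invented masses.

```python
# Association efficiency (AE) of a drug in nanoparticles, estimated by the
# common indirect method: AE (%) = (total drug - free drug) / total drug * 100.
# The masses below are invented for illustration, not the study's data.
def association_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# e.g. 10 mg of drug added, 3.6 mg recovered unencapsulated in the supernatant:
print(f"AE = {association_efficiency(10.0, 3.6):.1f} %")
```

With these invented numbers the result happens to match the reported clofazimine AE of 64.0 %, which illustrates the scale of the reported values rather than reproducing the measurement.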

Keywords: ionic liquids, ionic liquids-PLGA nanoparticles hybrid systems, leprosy treatment, solubility

Procedia PDF Downloads 150
1448 Neuromyelitis Optica Area Postrema Syndrome (NMOSD-APS) in a Fifteen-Year-Old Girl: A Case Report

Authors: Merilin Ivanova Ivanova, Kalin Dimitrov Atanasov, Stefan Petrov Enchev

Abstract:

Background: Neuromyelitis optica spectrum disorder (NMOSD), also known as Devic's disease, is a relapsing demyelinating autoimmune inflammatory disorder of the central nervous system associated with anti-aquaporin 4 (AQP4) antibodies that can manifest with devastating secondary neurological deficits. The optic nerves and the spinal cord are most commonly affected; clinically, this often presents as optic neuritis (loss of vision), transverse myelitis (weakness or paralysis of the extremities), lack of bladder and bowel control, and numbness. Area postrema syndrome (APS) is a core clinical entity of NMOSD and adds intractable nausea, vomiting, and hiccups to the clinical picture; it usually occurs in isolation at onset and can lead to a significant delay in diagnosis. The condition may have features similar to multiple sclerosis (MS), but the episodes are worse in NMO and it is treated differently. It can be relapsing or monophasic. Possible complications are visual field defects and motor impairment, with potential blindness and irreversible motor deficits; in severe cases, myogenic respiratory failure ensues. The reported incidence is approximately 0.3–4.4 per 100,000. Paediatric cases of NMOSD are rare but have been reported occasionally, comprising less than 5% of reported cases. Objective: The case illustrates the difficulty of the diagnostic process for a rare autoimmune disease with non-specific symptoms, which took a large interval of time to reveal the complete clinical manifestation of the aforementioned syndrome, as well as the necessity of a multidisciplinary approach in the setting of a general paediatric department in an emergency hospital. Methods: The patient's history, clinical presentation, and information from the diagnostic tools used (contrast-enhanced MRI of the central nervous system) led us to the diagnosis, which was later confirmed by a positive anti-aquaporin 4 (AQP4) antibody serology test.
Conclusion: APS is a common manifestation of NMOSD and poses a differential-diagnostic challenge. Gaining increased awareness of this disease/syndrome, obtaining a detailed patient history, and performing thorough physical examinations are essential if we are to reduce and avoid misdiagnosis.

Keywords: neuromyelitis, devic's disease, hiccup, autoimmune, MRI

Procedia PDF Downloads 39
1447 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing

Authors: Maria Ahmad, Michael Page, Danë Goodsman

Abstract:

‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using the London’s Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and paradoxically benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time. 
Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.

Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course

Procedia PDF Downloads 217
1446 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but only a few methodologies exist to analyze user navigation through a website and to determine whether the website is being put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a wealth of interesting information about the users of a system. Analyzing web logs has become a challenge due to the huge log volume; finding interesting patterns is not easy, given the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and the site that is not being put to good use. Retrieving interesting information from logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site so that it becomes effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs will only grow more complex as the volume of traffic and the size and complexity of the website grow. The solution uses unsupervised techniques and is fully automated; expert knowledge is used only for validation. In our approach, the logs are first cleaned and purified to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are applied iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms evaluate themselves in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used.
Using the Cluster Signature Module, the clusters are annotated with a unique signature, called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs a confidence and support of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications for finance, university, and news and media websites.
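A minimal sketch of the iterative clustering step described above, assuming sessions have already been encoded as fixed-length numeric vectors and using scikit-learn's DBSCAN, GaussianMixture (EM), and silhouette score as stand-ins for the paper's own implementations:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy "web session" vectors: two behavioural groups of 40 sessions each.
sessions = np.vstack([
    rng.normal(0.0, 0.3, size=(40, 5)),
    rng.normal(3.0, 0.3, size=(40, 5)),
])

def cluster_sessions(X, eps_grid=(0.5, 1.0, 1.5)):
    """Self-evaluating clustering: pick the DBSCAN eps that maximises the
    silhouette coefficient, then refine with an EM (Gaussian mixture) pass."""
    best = None
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
        n_clusters = len(set(labels) - {-1})  # -1 marks DBSCAN noise points
        if n_clusters < 2:
            continue  # degenerate clustering, try the next eps
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, n_clusters)
    if best is None:
        raise ValueError("no eps in the grid produced two or more clusters")
    score, k = best
    # EM refinement with the cluster count suggested by the best DBSCAN run.
    refined = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    return refined, score

labels, score = cluster_sessions(sessions)
```

The real pipeline additionally feeds homogeneity, completeness, V-measure, and intra/inter-cluster distances back into the parameter search; the single eps grid here only illustrates the self-evaluation idea.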

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 288
1445 Evaluation of Occupational Doses in Interventional Radiology

Authors: Fernando Antonio Bacchim Neto, Allan Felipe Fattori Alves, Maria Eugênia Dela Rosa, Regina Moura, Diana Rodrigues De Pina

Abstract:

Interventional radiology is the radiology modality that delivers the highest dose values to medical staff. Recent research shows that personal dosimeters may underestimate dose values for interventional physicians, especially at the extremities (hands and feet) and the eye lens. The aim of this work was to study the radiation exposure levels of medical staff in different interventional radiology procedures and to estimate the annual maximum number of procedures (AMN) that each physician could perform without exceeding the annual dose limits established by regulation. For this purpose, LiF:Mg,Ti (TLD-100) dosimeters were positioned on different body regions of the interventional physician (eye lens, thyroid, chest, gonads, hand, and foot), above radiological protection garments such as the lead apron and thyroid shield. Attenuation values for the lead protection garments were based on international guidelines: 90% attenuation was adopted for the lead vests and 60% for the protective glasses. Twenty-five procedures were evaluated: 10 diagnostic, 10 angioplasty, and 5 aneurysm treatments. The AMN for diagnostic procedures was 641 for the primary interventional radiologist and 930 for the assisting interventional radiologist. For angioplasty procedures, the AMN was 445 for the primary and 1202 for the assisting interventional radiologist. For aneurysm treatment procedures, the AMN was 113 for the primary and 215 for the assisting interventional radiologist. All AMN values were limited by the eye lens doses, already considering the use of protective glasses. In all categories evaluated, the highest dose values were found at the gonads and the lower body regions of the professionals, both for the primary and the assisting interventionist, but the dose limits for the eye lens are lower than those for these regions.
Additional protection, such as mobile barriers positioned between the interventionist and the patient, can decrease eye lens exposure and provide greater protection for the medical staff. Alternating the professionals who perform each type of procedure can reduce the dose values they receive over a given period. The analysis of dose profiles proposed in this work showed that personal dosimeters positioned on the chest may underestimate dose values in other body parts of the interventional physician, especially the extremities and eye lens. As each body region of the interventionist is subject to different levels of exposure, the dose distribution across regions provides a better basis for deciding what actions are necessary to ensure the radiological protection of medical staff.
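The AMN figures quoted above follow from simple dose arithmetic. A sketch, where the 20 mSv/year eye-lens limit is the ICRP occupational recommendation and the per-procedure dose is a hypothetical input, not a value taken from this study:

```python
# Annual maximum number of procedures (AMN), limited by the eye-lens dose.
# The 20 mSv/year limit follows the ICRP occupational recommendation; the
# per-procedure dose used below is hypothetical, not a value from the study.
EYE_LENS_ANNUAL_LIMIT_MSV = 20.0
GLASSES_ATTENUATION = 0.60  # protective glasses attenuate 60% (per the text)

def annual_max_procedures(eye_lens_dose_per_procedure_msv: float) -> int:
    """How many procedures fit under the annual eye-lens limit,
    already accounting for the protective glasses."""
    effective = eye_lens_dose_per_procedure_msv * (1.0 - GLASSES_ATTENUATION)
    return int(EYE_LENS_ANNUAL_LIMIT_MSV // effective)

# A hypothetical unshielded eye-lens dose of 0.078 mSv per procedure
# reproduces the diagnostic AMN reported above.
print(annual_max_procedures(0.078))  # → 641
```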

Keywords: interventional radiology, radiation protection, occupationally exposed individual, hemodynamic

Procedia PDF Downloads 393
1444 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical in targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluids movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves, bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, use of results from quantitative interpretation, and proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full field 3-D model has been constructed in such a way as to capture heterogeneities and the various compartments in the field to aid the proper simulation of fluid flow in the field for future production prediction, proper history matching and design of good trajectories to adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log interpreted properties to a defined environment of deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least, two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil in-place volume for the four reservoirs studied is 157 MMstb. 
The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as of 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave undeveloped reserves of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results so far are confirmatory and satisfying.
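As a quick sanity check on the volumes quoted above (the recovery factor is our derived figure, not a number stated in the abstract):

```python
# Sanity arithmetic on the volumes quoted above (the recovery factor is a
# derived figure, not a number stated in the abstract).
ooip_mmstb = 157.0        # original oil in place, four studied reservoirs
cum_oil_mmstb = 67.64     # cumulative oil production to date
undeveloped_mmstb = 3.82  # undeveloped reserves from the two activities

recovery_factor = cum_oil_mmstb / ooip_mmstb
remaining_target = undeveloped_mmstb / ooip_mmstb
print(f"{recovery_factor:.1%} recovered, {remaining_target:.1%} targeted")
```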

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 129
1443 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method (DEM) is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand, based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted for the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as rolling resistance, inter-particle friction coefficient, confining pressure, and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effect of the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient, and normal-to-shear stiffness ratio was examined. The parameters were calibrated such that the simulations reproduce macro-mechanical characteristics such as dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which demonstrates the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to forecast micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers.
Buckling of columns is not observed due to the small rolling resistance coefficient adopted for simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus the micromechanical behaviors are well described using the calibrated and validated material parameters.

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 121
1442 A Comparative Analysis of Liberation and Contemplation in Sankara and Aquinas

Authors: Zeite Shumneiyang Koireng

Abstract:

Liberation is the act of liberating or the state of being liberated. Indian philosophy in general understands liberation as moksa, which is etymologically derived from the Sanskrit root muc + ktin, meaning to loose, set free, let go, discharge, release, liberate, deliver, etc. According to the Indian schools of thought, moksa is the highest value, on realizing which nothing remains to be realized. It is the cessation of birth and death and of all kinds of pain, and at the same time it is the realization of one's own self. Sankara's Advaita philosophy is based on the following propositions: Brahman is the only Reality; the world has apparent reality; and the soul is not different from Brahman. According to Sankara, Brahman is the basis on which the world-form appears; it is the sustaining ground of all its various modifications. It is the highest self, and the self of all reveals himself by dividing himself [as it were, in the form of various objects] in multiple ways. The whole world is the manifestation of the Supreme Being. Brahman modifying itself into the Atman, or internal self of all things, is the world. Since Brahman is the upadhana karana of the world, the sruti speaks of the world as the modification of Brahman into the Atman of the effect. Contemplation as the fulfillment of man finds a radical foundation in Aquinas' teaching concerning the natural end or, as he also referred to it, natural desire. The third book of the Summa Contra Gentiles begins the study of happiness with a consideration of natural desire. According to Aquinas, all creatures, even those devoid of understanding, are ordered to God as an ultimate end. Intrinsic to every nature is a tendency or inclination, originating in the natural form, toward the end for which the possessor of that nature exists. It is through the study of the nature and finality of inclination that Aquinas establishes, by an argument of induction, man's contemplation of God as the fulfillment of his nature.
The present paper attempts to approach critically two important and seminal bodies of thought, representing the Indian and Western traditions, which left their mark on the thinking of their respective times. Both of these, the Advaitic concept of liberation in the Indian tradition and the concept of contemplation in Thomas Aquinas' Summa Contra Gentiles, directly confront the question of the ultimate meaning of human existence. According to Sankara, it is knowledge and knowledge alone that is the means to moksa, and the highest knowledge is moksa itself. Liberation in Sankara's Vedanta is attained through a process of purification of the self, which gradually and increasingly turns into purer and purer intentional construction. For Aquinas, man's inner natural tendency is towards knowledge: the human subject is driven to know more and more about reality, and in particular about the highest reality. Contemplation of this highest reality is fulfillment in the philosophy of Aquinas; indeed, contemplation is the perfect activity in man's present state of existence.

Keywords: liberation, Brahman, contemplation, fulfillment

Procedia PDF Downloads 193
1441 Combining the Production of Radiopharmaceuticals with the Department of Radionuclide Diagnostics

Authors: Umedov Mekhroz, Griaznova Svetlana

Abstract:

In connection with the growth of oncological disease, the design of centers for diagnostics and for the production of radiopharmaceuticals is a highly relevant area of healthcare facility design. New nuclear medicine centers should be designed with the following tasks in mind: availability of medical care, functionality, environmental friendliness, sustainable development, improving the safety of drugs whose use requires special care, reducing environmental pollution, ensuring a comfortable internal microclimate, and adaptability. The purpose of this article is to substantiate architectural and planning solutions, formulate recommendations and principles for the design of nuclear medicine centers, and determine the connections between the production and medical functions of a building. The advantages of combining the production of radiopharmaceuticals with the medical department are that less radiation activity accumulates, the cost of the final product is lower, and there is no need to hire a transport company with a special license for transportation. A medical imaging department is a structural unit of a medical institution in which diagnostic procedures are carried out in order to gain an idea of the internal structure of various organs of the body for clinical analysis. Depending on the needs of a particular institution, the department may include various rooms that provide medical imaging using radiography, ultrasound diagnostics, and the phenomenon of nuclear magnetic resonance. A radiopharmaceutical production facility is intended for producing a pharmaceutical substance that contains a radionuclide and is administered to the human body or a laboratory animal for the purpose of diagnosis, evaluation of the effectiveness of treatment, or biomedical research.
The research methodology includes the following: study and generalization of international experience through scientific research, literature, standards, teaching aids, and design materials on the research topic; an integrated approach to the study of existing international experience with PET/CT centers and radiopharmaceutical production; elaboration of graphical analyses and diagrams based on a systematic analysis of the processed information; and identification of methods and principles of functional zoning for nuclear medicine centers. The result of the research is the identification of design principles for nuclear medicine centers that combine the production of radiopharmaceuticals with a medical imaging department. This research will be applied to the design and construction of healthcare facilities in the field of nuclear medicine.

Keywords: architectural planning solutions, functional zoning, nuclear medicine, PET/CT scan, production of radiopharmaceuticals, radiotherapy

Procedia PDF Downloads 89
1440 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more heavily demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme specialized for the Web app runtime environment. This approach entails two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU; it is thus on the critical path that determines the quality of the user experience and is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measured frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
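The decision policy described above can be sketched as follows; the 90%/60% thresholds come from the abstract, while the smoothing weight and function shapes are illustrative assumptions:

```python
# Sketch of the bottleneck-detection policy described above. The 90%/60%
# thresholds and the weighted-average smoothing come from the abstract;
# the smoothing weight and function shapes are illustrative assumptions.
SMOOTHING = 0.3  # weight given to the newest utilization sample (assumed)

def update_wm_util(avg: float, sample: float) -> float:
    """Exponentially weighted average of Window Manager CPU utilization,
    damping short spikes so the governor does not overreact."""
    return (1.0 - SMOOTHING) * avg + SMOOTHING * sample

def throttle_target(wm_util_avg: float) -> str:
    """Decide which processing unit to step down, per the thresholds above."""
    if wm_util_avg > 0.90:
        return "cpu"   # CPU capability dominates UX: step CPU down one level
    if wm_util_avg < 0.60:
        return "gpu"   # bottleneck usually lies in the GPU: step GPU down
    return "none"      # between thresholds: leave frequencies unchanged

print(throttle_target(0.95), throttle_target(0.55))  # → cpu gpu
```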

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 192
1439 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, in which jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling that project managers tend to hoard the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, in which a central decision maker, the inventory, assigns the resources arriving at different points in time to the jobs. Since the actual due dates are not known to the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials so as to minimize the maximal tardiness among the projects. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfers are not allowed; therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation.
The resulting mechanism is both truthful and Pareto optimal. Thus, randomizing over the possible priority orderings of the projects yields a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given relative to the optimal solution; this approximation characteristic is therefore investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism that eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution.
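A minimal illustration of the serial dictatorship idea underlying the SDM (the project names, demands, and single-material setting are ours; the paper's actual polynomial-time algorithm handles precedence constraints and tardiness, which this sketch omits):

```python
# Serial dictatorship over a single nonrenewable material: projects are
# served in a fixed priority order, and each "dictator" takes as much as
# it demands from what remains. This is a simplification for illustration,
# not the paper's exact allocation algorithm.
def serial_dictatorship(priority, demands, supply):
    """Allocate `supply` units to projects in `priority` order."""
    allocation = {}
    remaining = supply
    for project in priority:
        take = min(demands[project], remaining)
        allocation[project] = take
        remaining -= take
    return allocation

alloc = serial_dictatorship(
    priority=["P1", "P2", "P3"],   # hypothetical project names
    demands={"P1": 4, "P2": 5, "P3": 3},
    supply=10,
)
print(alloc)  # → {'P1': 4, 'P2': 5, 'P3': 1}
```

Truthfulness is easy to see in this simplified setting: a project's report cannot change the share of earlier dictators, and over-reporting its own demand can only leave it with material it does not need.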

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 333
1438 Nutrition Transition in Bangladesh: Multisectoral Responsiveness of Health Systems and Innovative Measures to Mobilize Resources Are Required for Preventing This Epidemic in the Making

Authors: Shusmita Khan, Shams El Arifeen, Kanta Jamil

Abstract:

Background: The nutrition transition in Bangladesh has progressed across various relevant socio-demographic contexts. In a developing country like Bangladesh, overnutrition is believed to be less prevalent than undernutrition. However, recent evidence suggests that a rapid shift is taking place in which overweight is overtaking underweight. With this rapid increase, achieving the global agenda on halting overweight and obesity will be challenging for Bangladesh. Methods: A secondary analysis of six successive national demographic and health surveys was performed to obtain trends in undernutrition and overnutrition among women of reproductive age. In addition, relevant national policy papers were reviewed to determine the country's readiness for a whole-of-systems approach to tackling this epidemic. Results: Over the last decade, the proportion of women with a low body mass index (BMI < 18.5), an indicator of undernutrition, has decreased markedly from 34% to 19%. However, the proportion of overweight women (BMI ≥ 25) increased alarmingly from 9% to 24% over the same period. If the WHO cutoff for public health action (BMI ≥ 23) is used, the proportion of overweight women increased from 17% in 2004 to 39% in 2014. The increasing rate of obesity among women is a major challenge for obstetric practice, affecting both women and fetuses. In the long term, overweight women are also at risk of future obesity, diabetes, hyperlipidemia, hypertension, and heart disease. These diseases have a serious impact on healthcare systems. The costs associated with overweight and obesity are both direct and indirect: direct costs include preventive, diagnostic, and treatment services related to obesity, while indirect costs relate to morbidity and mortality, including lost productivity. The Bangladesh Health Facility Survey shows that the country is not prepared to provide nutrition-related health services in terms of prevention, screening, management, and treatment.
Therefore, if this nutrition transition is not addressed properly, Bangladesh will not be able to achieve the targets of the WHO’s NCD global monitoring framework. Conclusion: Addressing this nutrition transition requires contending with ‘malnutrition in all its forms’ and tackling it with integrated approaches. Whole-of-systems action is required at all levels, from improving multi-sectoral coordination to scaling up nutrition-specific and nutrition-sensitive mainstreamed interventions, keeping the health system in mind.
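The BMI cutoffs cited above can be illustrated with a short classifier. This is a sketch for illustration only; the thresholds are the WHO values quoted in the abstract, and the function names are our own:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value, public_health_action_cutoff=False):
    """Classify a BMI value using the cutoffs cited in the abstract.

    With public_health_action_cutoff=True, the WHO public health
    action threshold (BMI >= 23) replaces the standard BMI >= 25.
    """
    overweight_cutoff = 23.0 if public_health_action_cutoff else 25.0
    if bmi_value < 18.5:
        return "undernutrition"
    if bmi_value >= overweight_cutoff:
        return "overweight"
    return "normal"
```

Note how a BMI of 24 counts as normal under the standard cutoff but as overweight under the public health action cutoff, which is why the two definitions yield such different prevalence figures (24% versus 39% in 2014).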

Keywords: nutrition transition, Bangladesh, health system, undernutrition, overnutrition, obesity

Procedia PDF Downloads 286
1437 A Theoretical Framework of Patient Autonomy in a High-Tech Care Context

Authors: Catharina Lindberg, Cecilia Fagerstrom, Ania Willman

Abstract:

Patients in high-tech care environments are usually dependent on both formal/informal caregivers and technology, highlighting their vulnerability and challenging their autonomy. Autonomy presumes that a person has education, experience, self-discipline, and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could, therefore, be considered paradoxical, as in most cases these persons have impaired physical and/or metacognitive capacity. Therefore, to understand the prerequisites for patients to experience autonomy in high-tech care environments and to support them, there is a need to enhance knowledge and understanding of the concept of patient autonomy in this care context. The development of concepts and theories in a practice discipline such as nursing helps to improve both nursing care and nursing education. Theoretical development is important when clarifying a discipline; hence, a theoretical framework could be of use to nurses in high-tech care environments to support and defend the patient’s autonomy. A meta-synthesis was performed with the intention of being interpretative rather than aggregative in nature. An amalgamation was made of the results from three previous studies, carried out by members of the same research group, focusing on the phenomenon of patient autonomy from a patient perspective within a caring context. Three basic approaches to theory development (derivation, synthesis, and analysis) provided an operational structure that permitted the researchers to move back and forth between these approaches during their work in developing a theoretical framework. The results from the synthesis delineated that patient autonomy in a high-tech care context is: to be in control through trust, co-determination, and transition in everyday life. The theoretical framework contains several components creating the prerequisites for patient autonomy.
Assumptions and propositional statements that guide theory development were also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients to retain or obtain autonomy in high-tech care environments were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. This study suggests an extended knowledge base founded on theoretical reasoning about patient autonomy, providing an understanding of the strategies used by patients to achieve autonomy in the role of patient in high-tech care environments. When possessing knowledge about the patient perspective on autonomy, the nurse/carer can avoid adopting a paternalistic or maternalistic approach. Instead, the patient can be considered a partner in care, allowing care to be provided that supports him/her in remaining or becoming an autonomous person in the role of patient.

Keywords: autonomy, caring, concept development, high-tech care, theory development

Procedia PDF Downloads 207
1436 The Ongoing Impact of Secondary Stressors on Businesses in Northern Ireland Affected by Flood Events

Authors: Jill Stephenson, Marie Vaganay, Robert Cameron, Caoimhe McGurk, Neil Hewitt

Abstract:

Purpose: The key aim of the research was to identify the secondary stressors experienced by businesses affected by single or repeated flooding and to determine to what extent businesses were affected by these stressors, along with any resulting impact on health. Additionally, the research aimed to establish the likelihood of businesses being re-exposed to the secondary stressors by assessing awareness of flood risk, implementation of property protection measures, and level of community resilience. Design/methodology/approach: The chosen research method involved the distribution of a questionnaire survey to businesses affected by either single or repeated flood events. The questionnaire included the Impact of Event Scale (a 15-item self-report measure which assesses subjective distress caused by traumatic events). Findings: 55 completed questionnaires were returned by flood-impacted businesses. 89% of the businesses had sustained internal flooding, while 11% had experienced external flooding. The results established that the key secondary stressors experienced by businesses, in order of priority, were: flood damage, fear of recurring flooding, prevention of access to the premises/closure, loss of income, repair works, length of closure, and insurance issues. There was a lack of preparedness for potential future floods and consequent vulnerability to the emergence of secondary stressors among flood-affected businesses, as flood resistance and flood resilience measures had only been implemented by 11% and 13% respectively. In relation to the psychological repercussions, the Impact of Event Scale scores suggested a potential prevalence of post-traumatic stress disorder (PTSD) among 8 out of 55 respondents (15%).
Originality/value: The results improve understanding of the enduring repercussions of flood events on businesses, indicating that not only residents may be susceptible to the detrimental health impacts of flood events, and that single flood events may be just as likely as recurring flooding to contribute to ongoing stress. Lack of financial resources is a possible explanation for the lack of implementation of property protection measures among businesses, despite 49% experiencing flooding on multiple occasions. Therefore, it is recommended that policymakers consider potential sources of financial support or grants towards flood defences for flood-impacted businesses. Any form of assistance should be made available to businesses at the earliest opportunity, as there was no significant association between the time of the last flood event and the likelihood of experiencing PTSD symptoms.
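The Impact of Event Scale used in the study above is typically scored by summing weighted item responses. The following sketch uses the weighting (0, 1, 3, 5) and severity bands commonly cited for the original 15-item scale; these values are general background, not taken from this study:

```python
# Commonly cited response weights for the original 15-item Impact of
# Event Scale (IES); the total score therefore ranges from 0 to 75.
RESPONSE_WEIGHTS = {"not at all": 0, "rarely": 1, "sometimes": 3, "often": 5}

def ies_total(responses):
    """Sum the weighted responses across the 15 items."""
    if len(responses) != 15:
        raise ValueError("the IES has 15 items")
    return sum(RESPONSE_WEIGHTS[r] for r in responses)

def ies_band(total):
    """Map a total score to a commonly used severity band (assumed cutoffs)."""
    if total >= 26:
        return "moderate to severe"
    if total >= 9:
        return "mild"
    return "subclinical"
```

A respondent scoring in the "moderate to severe" band would be the kind of case the study flags as showing potential PTSD prevalence.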

Keywords: flood event, flood resilience, flood resistance, PTSD, secondary stressors

Procedia PDF Downloads 430
1435 On Early Verb Acquisition in Chinese-Speaking Children

Authors: Yating Mu

Abstract:

Young children acquire their native language with amazing rapidity. After noticing this interesting phenomenon, many linguists, as well as psychologists, have devoted themselves to exploring the best explanations, and thus research on first language acquisition emerged. Early lexical development is an important branch of children’s FLA (first language acquisition). The verb, the most significant class of the lexicon and the most grammatically complex syntactic category or word type, is not only the core of exploring the syntactic structures of language but also plays a key role in analyzing semantic features. Obviously, early verb development must have a great impact on children’s early lexical acquisition. Most scholars conclude that verbs are, in general, very difficult to learn because the problem in verb learning may be more about mapping a specific verb onto an action or event than about learning the underlying relational concepts that the verb or relational term encodes. However, previous research on early verb development has mainly focused on the debate over whether there is a noun bias or a verb bias in children’s early productive vocabulary. There is little research on the general characteristics of children’s early verbs concerning both semantic and syntactic aspects, let alone a general survey of Chinese-speaking children’s verb acquisition. Therefore, the author attempts to examine the general conditions and characteristics of Chinese-speaking children’s early productive verbs, based on data from a longitudinal study of three Chinese-speaking children. In order to present an overall picture of Chinese verb development, both semantic and syntactic aspects are addressed in the present study. For the semantic analysis, a classification method is adopted first. The verb category is a sophisticated class in Mandarin, so it is necessary to divide it into smaller sub-types, making the analysis more tractable.
By making a reasonable classification into eight verb classes on the basis of semantic features, the research aims to find out whether there exist any universal rules in Chinese-speaking children’s verb development. With regard to the syntactic aspect of the verb category, a debate between the nativist account and the usage-based approach has lasted for quite a long time. By analyzing the longitudinal Mandarin data, the author attempts to find out whether usage-based theory can fully explain the characteristics of Chinese verb development. To sum up, this thesis applies a descriptive research method to investigate the acquisition and usage of Chinese-speaking children’s early verbs, with the purpose of providing a new perspective on the semantic and syntactic features of early verb acquisition.

Keywords: Chinese-speaking children, early verb acquisition, verb classes, verb grammatical structures

Procedia PDF Downloads 366
1434 Analyzing Competitive Advantage of Internet of Things and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, which data to prioritise, and how to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions.
The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the market and gain a strong foothold in the smart city business.

Keywords: internet of things, data analytics, smart cities, competitive advantage

Procedia PDF Downloads 94
1433 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost reduction and emissions reduction tend to conflict with each other. Another important aspect of transportation decisions is lead time variability, which is rarely considered in transport mode selection problems. Thus, in this study, we provide a comprehensive analytical mathematical model for transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. In order to incorporate lead time variability into the model, two identically normally distributed random variables are introduced: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stock-outs due to lead time variability. A set of modes is available between each pair of nodes; in this paper, we consider four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics.
Emissions cost is indirectly affected by lead time variability if a switch is made from a lower-emissions transport mode to a higher-emissions one in order to reduce penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model. We found that the chance of a stock-out during lead time is higher when the variability of lead time and lead time demand is higher. The numerical results show that the penalty cost of the air transport mode is negative, which means the chance of a stock-out is zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is only selected when there is an emergency order requiring the penalty cost to be reduced; otherwise, rail and road are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues, such as the modal split option, the full truck load strategy, and the demand consolidation strategy.
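The cost structure described above can be sketched as follows. This is a simplified, hypothetical illustration: the parameter names and the normal-loss treatment of the stock-out penalty are our own assumptions, not the authors’ model:

```python
import math

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_shortage(mu, sigma, reorder_point):
    """E[max(lead-time demand - reorder point, 0)] for normal demand."""
    z = (reorder_point - mu) / sigma
    return sigma * (normal_pdf(z) - z * (1 - normal_cdf(z)))

def total_cost(mode, demand, distance, reorder_point):
    """Sum the cost components named in the abstract for one mode."""
    mu = demand * mode["lead_time"]        # mean lead-time demand
    sigma = demand * mode["lead_time_sd"]  # lead-time demand variability
    return (mode["unit_purchase"] * demand
            + mode["unit_transport"] * demand * distance
            + mode["emission_per_km"] * distance * mode["emission_cost"]
            + mode["holding"] * mu                       # holding during lead time
            + mode["penalty"] * expected_shortage(mu, sigma, reorder_point))

def select_mode(modes, demand, distance, reorder_point, emission_cap):
    """Pick the cheapest mode whose total emissions respect the cap."""
    feasible = {name: m for name, m in modes.items()
                if m["emission_per_km"] * distance <= emission_cap}
    return min(feasible, key=lambda n: total_cost(feasible[n], demand,
                                                  distance, reorder_point))
```

The sketch captures the trade-off discussed in the abstract: a fast mode (short lead time, low variability) drives the expected shortage, and hence the penalty cost, towards zero, but may be ruled out by the emissions cap or dominated by its higher emissions and holding costs.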

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 434
1432 Stimulus-Response and the Innateness Hypothesis: Childhood Language Acquisition of “Genie”

Authors: Caroline Kim

Abstract:

Scholars have long disputed the relationship between the origins of language and human behavior. Historically, the behaviorist psychologist B. F. Skinner argued that language is one instance of the general stimulus-response phenomenon that characterizes the essence of human behavior. Another, more recent approach argues, by contrast, that language is an innate cognitive faculty and does not arise from behavior, which might develop and reinforce linguistic facility but is not its source. Pinker, among others, proposes that linguistic defects arise from damage to the brain, both congenital and acquired in life. Much of his argument is based on case studies in which damage to Broca’s and Wernicke’s areas of the brain results in loss of the ability to produce coherent grammatical expressions when speaking or writing; though affected speakers often utter quite fluent streams of sentences, the words articulated lack discernible semantic content. Pinker concludes on this basis that language is an innate component of specific, classically language-correlated regions of the human brain. Taking a notorious 1970s case of linguistic maladaptation, this paper queries the dominant materialist paradigm of language-correlated regions. Susan “Genie” Wiley was physically isolated from language interaction in her home and beaten by her father when she attempted to make any sort of sound. Though she suffered no measurable damage to the brain, Wiley was never able to develop the level of linguistic facility normally achieved in adulthood. Having received negative reinforcement of language acquisition from her father and having missed the usual language acquisition period, Wiley was able to develop language only to a quite limited level in later life. From a contemporary behaviorist perspective, this case confirms the possibility of language deficiency without brain pathology.
Wiley’s potential language-determining areas in the brain were intact, and she was exposed to language later in her life, but she was unable to achieve the normal level of communication skills, which deterred socialization. This phenomenon, and others like it in the limited case literature on linguistic maladaptation, poses serious clinical, scientific, and indeed philosophical difficulties for both major competing theories of language acquisition: innateness and linguistic stimulus-response. The implications of such cases for future research in language acquisition are explored, with a particular emphasis on the interaction of innate capacity and stimulus-based development in early childhood.

Keywords: behaviorism, innateness hypothesis, language, Susan "Genie" Wiley

Procedia PDF Downloads 292
1431 A Deep Dive into the Multi-Pronged Nature of Student Engagement

Authors: Rosaline Govender, Shubnam Rambharos

Abstract:

Universities are, to a certain extent, the source of under-preparedness ideologically, structurally, and pedagogically, particularly since organizational cultures often alienate students by failing to enable epistemological access. This is evident in the unsustainably low graduation rates that characterize South African higher education: fewer than 30% of students graduate in minimum time, fewer than two-thirds graduate within six years, and one-third have not graduated after ten years. Although the statistics for the Faculty of Accounting and Informatics at the Durban University of Technology (DUT) in South Africa improved significantly from 2019 to 2021, the graduation (32%), throughput (50%), and dropout (16%) rates are still a matter for concern, as the graduation rates in particular are quite similar to the national statistics. For our students to succeed, higher education should take a multi-pronged approach to ensuring student success, and student engagement is one of the ways to support our students. Student engagement depends not only on students’ teaching and learning experiences but, more importantly, on their social and academic integration, their sense of belonging, and their emotional connections with the institution. Such experiences need to challenge students academically, engage their intellect, grow their communication skills, build self-discipline, and promote confidence. The aim of this mixed-methods study is to explore the multi-pronged nature of student success within the Faculty of Accounting and Informatics at DUT, focusing on the enabling and constraining factors of student success. The sources of data were the Mid-year Student Experience Survey (N=60), the Hambisa Student Survey (N=85), and semi-structured focus group interviews with first-, second-, and third-year students of the Faculty of Accounting and Informatics Hambisa program.
The Hambisa (“Moving forward”) focus area is part of the Siyaphumelela 2.0 project at DUT and seeks to understand the multiple challenges impacting student success, which create a large “middle” cohort of students stuck in transition within academic programs. Using the lens of the sociocultural influences on student engagement framework, we conducted a thematic analysis of the two surveys and the focus group interviews. Preliminary findings indicate that living conditions, choice of program, access to resources, motivation, institutional support, infrastructure, and pedagogical practices all impact student engagement and, thus, student success. It is envisaged that the findings from this project will help the university to be better prepared to enable student success.

Keywords: social and academic integration, socio-cultural influences, student engagement, student success

Procedia PDF Downloads 73
1430 Rural Tourism in Indian Himalayan Region: A Scope for Sustainable Livelihood

Authors: Rommila Chandra, Harshika Choudhary

Abstract:

The present-day tourism sector is developing globally at a fast pace, searching for new ideas and new venues. In the Indian Himalayan Region (IHR), tourism has experienced vast growth and continuous diversification over the last few years, thus becoming one of the fastest-growing economic sectors in India. With its majestic landscape, high peaks, rich floral and faunal diversity, and cultural history, the IHR has continuously attracted tourists and pilgrims from across the globe. The IHR draws a vast range of visitors who seek adventure sports, natural and spiritual solace, peace, cultural assets, food, and festivals. Thus, the multi-functionality of the region has turned tourism into a key component of economic growth for the rural communities in the hills. For the local mountain people, it means a valuable economic opportunity for income generation, and for the government and entrepreneurs, it brings profits. As the urban cities of India gain attention and investment, efforts have to be made to protect, safeguard, and strengthen the cultural, spiritual, and natural heritage of the IHR for sustainable livelihood development. Furthermore, socio-economic and environmental insecurities, along with geographical isolation, add to the challenge of survival in the tough terrain of the IHR, creating a major threat of outmigration, land abandonment, and degradation. The question the paper intends to answer is whether the rural community of the IHR is aware of the new global trends in rural tourism, and to what extent it is willing to adapt to the evolving tourism industry, which impacts the rural economy, including sustainable livelihood opportunities. The objective of the paper is to discuss the integrated nature of rural tourism, which depends widely upon natural resources, cultural heritage, agriculture/horticulture, infrastructural development, education, social awareness, and the willingness of the locals.
The sustainable management of all these different rural activities can lead to long-term livelihood development and social upliftment. The paper highlights some gap areas and recommends a few community-based coping measures which the local people can adopt within the disorganized rural tourism sector. Lastly, the main contribution is exploratory research on rural tourism vulnerability in the IHR, which would further help in studying the resilience of the tourism sector in the rural parts of a developing nation.

Keywords: community-based approach, sustainable livelihood development, Indian Himalayan region, rural tourism

Procedia PDF Downloads 140