Search results for: automated delivery cabinets
253 Prevalence and Influencing Factors of Type 2 Diabetes among Obese Patients (Diabesity) Attending Selected Healthcare Facilities in Calabar, Nigeria
Authors: Anietie J. Atangwho, Udeme E. Asibong, Item J. Atangwho, Ndifreke E. Udonwa
Abstract:
Diabesity, a syndrome in which diabetes and obesity occur simultaneously in a single patient, has emerged as a recent challenge to the medical world and is already at epidemic proportions in some countries. Therefore, this study aimed to determine the prevalence of diabesity among adult patients attending the General Outpatient clinics of three healthcare facilities in Calabar, in a bid to improve healthcare delivery to patients at risk. A cross-sectional descriptive study design was employed using a mixed-methods approach that comprised quantitative and qualitative components, i.e., Focus Group Discussions (FGDs) and Key Informant Interviews (KIIs). One hundred and ninety (190) participants aged 18 to 72 years with a body mass index (BMI) ≥ 30 kg/m² were recruited for the quantitative study using a systematic random sampling technique, and the data were analysed using SPSS version 25. The qualitative component comprised 4 FGDs and 3 KIIs. Sociodemographic results showed that respondents aged 35–44 formed the largest group (37.3%). Of this number, 83.7% were female, 76.8% were married, and 3.7% earned USD 1,110.00 monthly. Whereas the majority of the participants (65.8%) were within class 1 obesity, only 38% considered themselves obese. Diabesity occurrence was found to be 12.6% (i.e., BMI 30–45.2 kg/m² with FBS 7.0–14.8 mmol/L), with 38% of these cases previously undiagnosed. About 48.4% of the respondents ate only two meals per day, with 90.5% eating between meals. Snacking was predominant, mostly on pastries (67.9%), with 58.9% taking cola drinks alongside. Sixty-one percent participated in one form of exercise or another, with walking/trekking the most common; 34.4% had no regular exercise schedule. Only about 39.5% of the participants spent less than an hour on devices such as phones, televisions, and laptops. Additionally, previously known and newly diagnosed hypertensive patients constituted 27.9% and 7.2%, respectively. Qualitative assessment with the KIIs and FGDs identified unhealthy diets and lack of exercise as major factors responsible for diabesity. Bivariate analysis revealed significant associations of diabesity with marital status and hypertension (p = 0.007 and p = 0.005, respectively). Snacking (p = 0.017) and the number of times a respondent snacked per day (p = 0.035) were also positively associated with diabesity. Overall, the study revealed the occurrence of diabesity in Calabar at 12.6% of the study population, with 38% previously undiagnosed; it identified unhealthy diets and lack of exercise as causative factors, and hypertension and snacking as associated indicators of diabesity.
Keywords: diabesity, obesity, diabetes, unhealthy diet
252 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network intergrowth is limited during the early stages of organoid development, since the vascular network only becomes functional at the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix at the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of only a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into the host vasculature for future in vivo implantation. A model is proposed for growth prognosis of organoid perfusion based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of microchannel network designs and for simulations of the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
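A minimal illustrative formulation of such a consumption-diffusion balance (not the authors' exact model; the symbols and the Michaelis-Menten uptake term are assumptions made here for illustration) couples diffusion in the hydrogel with cellular uptake and a flux condition at the microchannel walls:

\[
\frac{\partial c}{\partial t} = \nabla \cdot \left( D \nabla c \right) - \rho_{\mathrm{cell}}\,\frac{V_{\max}\, c}{K_m + c},
\qquad
- D \left. \frac{\partial c}{\partial n} \right|_{\Gamma_{\mathrm{wall}}} = k_w \left( c - c_{\mathrm{ch}} \right),
\]

where c is the nutrient concentration, D the diffusion coefficient in the hydrogel, ρ_cell the local cell density of the expanding organoid, V_max and K_m the uptake parameters, Γ_wall a microchannel wall with permeability k_w, and c_ch the nutrient concentration maintained inside the channel. In this reading, a candidate microchannel layout is acceptable if the solution keeps c above the hypoxia/necrosis threshold everywhere in the organoid at every stage of its growth.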
251 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario
Authors: Dipankar Saha, J. P. Singh, C. B. Pandey
Abstract:
The Indian Thar desert, the seventh largest in the world and the main hot sand desert of the country, occupies nearly 385,000 km² (about 9% of the area of the country) and harbours a flora of 682 species (63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, which is relatively higher than that of the Sahara desert and very significant for conservationists to consider. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, with the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As a part of this activity, laminar characterization (leaves being the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows which incorporate automated systems that enable us to expand and speed up the digitisation process. The digitisation workflows are built on a modular system which has the potential to be scaled up. They are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the ongoing development of a database of the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives towards an effective biodiversity information facility. This will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface
250 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority
Authors: Vahid Asadzadeh, Amin Ataee
Abstract:
Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule and even the tyranny of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity, and they do not allow the multiplicity found in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces. This has meant that in Clubhouse the voices of minorities are better heard, and the diversity of political tendencies manifests itself better. The purpose of this article is to show, first, how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, this article will show how Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve these goals, using a descriptive-analytical method, the relation between media logic and postmodern democracy will first be examined. The political economy of popularity in social media and its conflict with democracy will then be discussed. Finally, it will be explored how Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.
Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media
249 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term and 38 preterm labors were preprocessed with a band-pass Butterworth filter of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for recognition of the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference for the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
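A minimal sketch of this kind of feature extraction is given below, assuming a single-channel, uniformly sampled EHG recording; the 20 Hz sampling rate, the sample entropy settings (m = 3, r = 0.15·SD) and the synthetic test signal are assumptions made for illustration, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def bandpass(x, fs, lo=0.08, hi=4.0, order=4):
    """Zero-phase Butterworth band-pass filter (0.08-4 Hz by default)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def zero_crossings(x):
    return int(np.sum(np.diff(np.signbit(x)) != 0))

def median_frequency(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]

def sample_entropy(x, m=3, r_factor=0.15):
    """Plain O(N^2) sample entropy with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r, n = r_factor * np.std(x), len(x)
    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            c += np.sum(d <= r) - 1          # exclude the self-match
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Synthetic stand-in for one EHG channel (assumed fs = 20 Hz, 30-minute record).
fs = 20.0
t = np.arange(0, 60 * 30, 1 / fs)
ehg = np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.random.randn(len(t))
filtered = bandpass(ehg, fs)
features = {
    "rms": rms(filtered),
    "zero_crossings": zero_crossings(filtered),
    "median_frequency_hz": median_frequency(filtered, fs),
    "sample_entropy": sample_entropy(filtered[:2000]),   # subsample for speed
}
print(features)
```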
248 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and enhance system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is then developed by exploring historical aircraft data. The approach is based on a hybrid ensemble method which improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach in terms of performance using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets; the results again show good performance compared to other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
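The abstract does not specify the hybrid ensemble in detail; the sketch below is only a generic illustration of imbalance-aware learning for a rare "component replacement" class, combining SMOTE oversampling with a class-weighted random forest via scikit-learn and imbalanced-learn. The data, class ratio and hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE

# Placeholder data: X would hold sensor/usage features, y a rare "replace" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (rng.random(5000) < 0.05).astype(int)          # ~5% minority class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class only within the training data, then fit an ensemble.
model = Pipeline(steps=[
    ("smote", SMOTE(random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                      random_state=0)),
])
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), digits=3))
```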
247 Testing of Canadian Integrated Healthcare and Social Services Initiatives with an Evidence-Based Case Definition for Healthcare and Social Services Integrations
Authors: S. Cheng, C. Catallo
Abstract:
Introduction: Canada's healthcare and social services systems are failing high-risk, vulnerable older adults. Care for vulnerable older Canadians (65 and older) is not optimal in Canada. It does not address the care needs of vulnerable, high-risk adults using a holistic approach. Given the growing aging population, and given that the care needs of seniors with complex conditions are among the highest in Canada's health care system, there is a sense of urgency to optimize care. Integration of health and social services is an emerging trend in Canada when compared to European countries. There is no common and universal understanding of healthcare and social services integration within the country. Consequently, a clear understanding and definition of integrated health and social services are absent in Canada. Objectives: A study was undertaken to develop a case definition for integrated health and social care initiatives that serve older adults, which was then tested against three Canadian integrated initiatives. Methodology: A limited literature review, comprising both scientific and grey literature, was undertaken to identify common characteristics of integrated health and social care initiatives that serve older adults, in order to develop a case definition. Three Canadian integrated initiatives located in the province of Ontario were identified using an online search and a screening process. They were surveyed to determine if the literature-based integration definition applied to them. Results: The literature showed that there were 24 common healthcare and social services integration characteristics that could be categorized into ten themes: 1) patient-care approach; 2) program goals; 3) measurement; 4) service and care quality; 5) accountability and responsibility; 6) information sharing; 7) decision-making and problem-solving; 8) culture; 9) leadership; and 10) staff and professional interaction. The three initiatives showed agreement on all the integration characteristics except for those associated with healthcare and social care professional interaction, collaborative leadership and shared culture. This disagreement may be due to several reasons, including the existing governance divide between the healthcare and social services sectors within the province of Ontario, which has created a ripple effect in how professions in the two different sectors interact. In addition, the three initiatives may be at maturing levels of integration, which may explain the disagreement on the characteristics associated with leadership and culture. Conclusions: The development of a case definition for healthcare and social services integration that incorporates common integration characteristics can act as a useful instrument in identifying integrated healthcare and social services, particularly given the emerging and evolutionary state of this phenomenon within Canada.
Keywords: Canada, case definition, healthcare and social services integration, integration, seniors health, services delivery
246 Optimal Delivery of Two Similar Products to N Ordered Customers
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from the depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem which has realistic applications in the distribution of materials to shops, healthcare facilities or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products. We name these products product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence. This means that first customer 1 must be serviced, then customer 2 must be serviced, then customer 3 must be serviced, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities. The actual preference of each customer becomes known when the vehicle visits the customer. It is also assumed that the quantity that each customer demands is a random variable with a known distribution. The actual demand is revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed during its route to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the expected total cost among all possible strategies. It is possible to find the optimal routing strategy using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e. it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method may be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering, collects products from the ordered customers.
Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem
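A toy sketch of the kind of stochastic dynamic programming recursion described is shown below: before each ordered customer, the vehicle either proceeds directly or first returns to the depot to restock fully, and the expectation is taken over the customer's product preference and demand. All costs, probabilities and capacities are invented for illustration; the paper's actual formulation and the proof of its threshold structure are not reproduced here.

```python
from functools import lru_cache

# Illustrative data only.
N = 4                                   # number of ordered customers
CAP = (3, 3)                            # full load of product 1 and product 2
PREF1 = 0.6                             # probability a customer prefers product 1
DEMANDS = (1, 2, 3)                     # equiprobable demand values
P_D = 1.0 / len(DEMANDS)
C_DIRECT = (4.0, 5.0, 3.0, 6.0)         # travel cost: previous stop -> customer k
C_VIA_DEPOT = (4.0, 9.0, 8.0, 10.0)     # previous stop -> depot -> customer k
C_RETURN = 4.0                          # last customer -> depot
SUBSTITUTE_COST = 2.0                   # per unit delivered as the less-preferred product
LOST_SALE_COST = 5.0                    # per unit of unmet demand

def serve(q1, q2, pref, d):
    """Serve demand d for preferred product `pref`; return (cost, new stock)."""
    want, other = (q1, q2) if pref == 1 else (q2, q1)
    served = min(d, want)
    substituted = min(d - served, other)
    lost = d - served - substituted
    cost = SUBSTITUTE_COST * substituted + LOST_SALE_COST * lost
    want, other = want - served, other - substituted
    return cost, ((want, other) if pref == 1 else (other, want))

@lru_cache(maxsize=None)
def V(k, q1, q2):
    """Minimum expected cost to serve customers k..N-1 and return to the depot."""
    if k == N:
        return C_RETURN
    def expected(stock):
        total = 0.0
        for pref, p in ((1, PREF1), (2, 1.0 - PREF1)):
            for d in DEMANDS:
                cost, (n1, n2) = serve(stock[0], stock[1], pref, d)
                total += p * P_D * (cost + V(k + 1, n1, n2))
        return total
    go_direct = C_DIRECT[k] + expected((q1, q2))
    restock = C_VIA_DEPOT[k] + expected(CAP)     # assume a full reload at the depot
    return min(go_direct, restock)

print("Expected optimal cost of the route:", round(V(0, *CAP), 3))
```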
245 Use of Machine Learning Algorithms on Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. In view of the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of obtaining a quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every single patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA platform and the MATLAB platform, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in progress.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
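The study itself uses WEKA and MATLAB; the sketch below is only an analogous illustration in scikit-learn of cross-validated comparison of three of the listed classifiers on a tabular feature matrix. The feature values, class labels and hyperparameters are placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature matrix: rows = patients, columns = extracted features
# (age, shape, intensity and texture descriptors); labels = 4 classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(80, 30))
y = np.repeat([0, 1, 2, 3], 20)   # ependymoma, astrocytoma, medulloblastoma, healthy

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "DecisionTree": DecisionTreeClassifier(max_depth=5, random_state=42),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```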
244 Acceptance and Commitment Therapy for Social Anxiety Disorder in Adolescence: A Manualized Online Approach
Authors: Francisca Alves, Diana Figueiredo, Paula Vagos, Luiza Lima, Maria do Céu Salvador, Daniel Rijo
Abstract:
In recent years, Acceptance and Commitment Therapy (ACT) has been shown to be effective in the treatment of numerous anxiety disorders, including social anxiety disorder (SAD). However, limited evidence exists on its therapeutic gains for adolescents with SAD. The current work presents a weekly 10-session manualized online ACT approach to adolescent SAD, being the first study to do so in a clinical sample of adolescents. The intervention, ACT@TeenSAD, addresses the six proposed processes of psychological inflexibility (i.e., experiential avoidance, cognitive fusion, lack of values clarity, unworkable action, dominance of the conceptualized past and future, and attachment to the conceptualized self) in social situations relevant to adolescents (e.g., doing a presentation). It is organized into four modules. The first module explores the role of psychological (in)flexibility in SAD (sessions 1 and 2), addressing psychoeducation (i.e., functioning of the mind) according to ACT, the development of an individualized model, and creative hopelessness. The second module focuses on the foundation of psychological flexibility (sessions 3, 4, and 5), specifically on the development and practice of strategies to promote clarification of values, contact with the present moment, the observing self, defusion, and acceptance. The third module encompasses psychological flexibility in action (sessions 6, 7, 8, and 9), encouraging committed action based on values in social situations relevant to the adolescents. The fourth module's focus is the revision of gains and relapse prevention (session 10). This intervention further includes two booster sessions after therapy has ended (3- and 6-month follow-up) that aim to review the continued practice of learned abilities and to plan for their future application to potentially anxious social events. As part of an ongoing clinical trial, the intervention will be assessed on its feasibility with adolescents diagnosed with SAD and on its therapeutic efficacy based on a longitudinal design including pretreatment, posttreatment, and 3- and 6-month follow-ups. If promising, the findings may support the online delivery of ACT interventions for SAD, contributing to increased treatment availability for adolescents. The availability of an effective therapeutic approach will be helpful not only for adolescents who face obstacles (e.g., distance) when attending face-to-face sessions but also, in particular, for adolescents with SAD, who are usually more reluctant to look for specialized treatment in public or private health facilities.
Keywords: acceptance and commitment therapy, social anxiety disorder, adolescence, manualized online approach
243 Unravelling the Relationship Between Maternal and Fetal ACE2 Gene Polymorphism and Preeclampsia Risk
Authors: Sonia Tamanna, Akramul Hassan, Mohammad Shakil Mahmood, Farzana Ansari, Gowhar Rashid, Mir Fahim Faisal, M. Zakir Hossain Howlader
Abstract:
Background: Preeclampsia (PE), a pregnancy-specific hypertensive disorder, significantly impacts maternal and fetal health. It is particularly prevalent in underdeveloped countries and is linked to preterm delivery and impaired fetal growth. The renin-angiotensin system (RAS) plays a crucial role in ensuring a successful pregnancy outcome, with Angiotensin-Converting Enzyme 2 (ACE2) being a key component. ACE2 converts ANG II to Ang-(1-7), offering protection against ANG II-induced stress and inflammation while regulating blood pressure and osmotic balance during pregnancy. The reduced maternal plasma angiotensin-converting enzyme 2 (ACE2) seen in preeclampsia might contribute to its pathogenesis. However, there has been a dearth of comprehensive research into the association between ACE2 gene polymorphism and preeclampsia. In the South Asian population, hypertension is strongly linked to two SNPs: rs2285666 and rs879922. These genotypes were therefore considered, and the possible association of maternal and fetal ACE2 gene polymorphism with preeclampsia within the Bangladeshi population was evaluated. Method: DNA was extracted from peripheral white blood cells (WBCs) using the organic method, and SNP genotyping was done via PCR-RFLP. Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated using logistic regression to determine relative risk. Result: A comprehensive case-control study was conducted on 51 PE patients and their infants, along with 56 control subjects and their infants. Maternal single nucleotide polymorphism (SNP) analysis of rs2285666 revealed a strong association between the TT genotype and preeclampsia, with a four-fold increased risk in mothers (P = 0.024, OR = 4.00, 95% CI = 1.36-11.37) compared to the ancestral genotype CC. However, the CT genotype (rs2285666) showed no significant difference (P = 0.46, OR = 1.54, 95% CI = 0.57-4.14). Notably, no significant correlation was found in infants, regardless of their gender. For rs879922, no significant association was observed in either mothers or infants. This pioneering study suggests that mothers carrying the ACE2 gene variant rs2285666 (TT genotype) may be at higher risk of preeclampsia, potentially influencing hypertension characteristics, whereas rs879922 does not appear to be associated with developing preeclampsia. Conclusion: This study sheds light on the role of ACE2 gene polymorphism, particularly the rs2285666 TT genotype, in maternal susceptibility to preeclampsia. However, rs879922 does not appear to be linked to the risk of PE. This research contributes to our understanding of the genetic underpinnings of preeclampsia, offering insights into potential avenues for prevention and management.
Keywords: ACE2, PCR-RFLP, preeclampsia, single nucleotide polymorphisms (SNPs)
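The abstract reports odds ratios with 95% confidence intervals estimated by logistic regression. For a single genotype contrast (e.g., TT versus CC), the equivalent textbook 2×2 calculation, shown here only as a reminder with hypothetical cell counts a, b, c, d (not the study's data), is:

\[
\mathrm{OR} = \frac{a\,d}{b\,c},
\qquad
95\%\ \mathrm{CI} = \exp\!\left( \ln \mathrm{OR} \pm 1.96 \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right),
\]

where a and b are the numbers of cases and controls carrying the risk genotype (TT), and c and d are the corresponding counts for the reference genotype (CC).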
242 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), required to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the removal vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will also influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, different technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
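To make the contrast concrete, a minimal statement of the linear (dwell-time-proportional) model and the least-squares objective that an adjoint method would differentiate might read as follows; this is an illustrative formulation under assumed notation, not the authors' exact equations:

\[
Z(\mathbf{x}) = \int_0^T E\big(\mathbf{x} - \mathbf{p}(t)\big)\,\mathrm{d}t \;\approx\; \sum_j E(\mathbf{x} - \mathbf{x}_j)\,\tau_j,
\qquad
J(\boldsymbol{\tau}) = \tfrac{1}{2}\int_\Omega \big(Z(\mathbf{x}) - Z^{*}(\mathbf{x})\big)^2\,\mathrm{d}\mathbf{x},
\]

where E is the beam footprint (removal per unit dwell time), p(t) the beam path, τ_j the dwell time attributed to pixel x_j, and Z* the prescribed freeform surface. The inverse problem is then to choose the dwell times (equivalently, the feed speed along the path) that minimize J; the advantage claimed for the discrete adjoint approach is that the gradient of J with respect to the dwell times can be evaluated far more cheaply than with finite differences, even when the removal rate depends nonlinearly on the evolving surface.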
241 Urban Waste Management for Health and Well-Being in Lagos, Nigeria
Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo
Abstract:
A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos Metropolis. The septic tank is still the most prevalent wastewater holding system. Unfortunately, there is a dearth of septage treatment infrastructure. Public wastewater treatment system statistics relative to the 23 million people in Lagos State are worrisome. About 1.85 billion cubic meters of wastewater are generated on a daily basis, and only 5% of the 26 million population is connected to a public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explored a community participatory wastewater management alternative in the Oworonshoki Municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative addresses the five-step wastewater issue, comprising generation, storage, collection, processing and disposal, through participatory decision making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focus group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and natural retention ponds in the fringing wetlands is traceable to frequent climate-change-induced tidal influences in recent decades. A rise in the water table, resulting in septic-tank leakage and water pollution, is reported to be responsible for the increase in waterborne infirmities documented in primary health centers. This is in addition to the unhealthy dumping of solid wastes in the drainage channels. The effect of the uncontrolled disposal system renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems. A clustered decentralized system was conceptualized to service 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.
Keywords: health, infrastructure, management, septage, well-being
240 Strategies for Improving and Sustaining Quality in Higher Education
Authors: Anshu Radha Aggarwal
Abstract:
Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressures to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that about another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students. Many studies have brought out the issues ailing our curricula, delivery, monitoring and assessment. The Govt. of India (via MHRD, UGC, NBA, etc.) has initiated several steps to improve the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive Govt. grants. Moreover, Outcome-based Education and Training (OBET) has also been mandated and encouraged in teaching/learning institutions. MHRD, UGC and NBA have made accreditation of schools, colleges and universities mandatory w.e.f. Jan 2014. The Outcome-based Education and Training (OBET) approach is learner-centric, whereas the traditional approach has been teacher-centric. OBET is a process which involves the re-orientation/restructuring of the curriculum, implementation, assessment/measurement of educational goals, and the achievement of higher-order learning, rather than merely clearing/passing university examinations. OBET aims to bring about these desired changes within students by increasing knowledge, developing skills, influencing attitudes and creating a social-connect mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.
Keywords: learning outcomes, curriculum development, pedagogy, outcome based education
239 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study
Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz
Abstract:
Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. These kinds of materials can be cleared from the body after the fulfillment of their function, which excludes surgical intervention for their removal. Among the most promising BB are amino acid-based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring (α-amino acids) and non-toxic building blocks such as fatty diols and dicarboxylic acids. Key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diesters, the di-p-toluenesulfonic acid salts of bis-(α-amino acid) alkylenediesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counter-partners, the activated diesters of dicarboxylic acids. The PEAs combine the advantages of both 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues and a wide range of desired mechanical properties (PAs). The scope of applications of the PEAs can be substantially expanded by their functionalization, e.g. through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For the hydrophobic modification of the PEAs, we selected the so-called 'Azlactone Method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low MWs (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with desirable material properties were synthesized after replacement of a part of the BALs with an activated diester, di-p-nitrophenyl sebacate, or a part of the TAAD with an alkylenediamine, 1,6-hexamethylenediamine. The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after the hydrophobic modification the PEAs retain their biodegradability (in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivering containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed the elaboration of a new technology for fabricating microspheres.
Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres
238 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis
Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain
Abstract:
Online groceries have transformed the way supply chains are managed. They face numerous challenges in terms of product wastage, low margins, a long time to break even, and low market penetration, to mention a few. The e-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted among managers with relevant experience in e-grocery supply chains. The experts were contacted through professional/social networks by adopting a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using open and axial coding methods. The key enablers are categorized into themes, and the contextual relationship between these and the performance measures is sought from industry veterans. Using ISM, a hierarchical model of the enablers is developed, and MICMAC analysis identifies the driving and dependence powers. Based on the driving-dependence power, the enablers are categorized into four clusters, namely independent, autonomous, dependent and linkage. The analysis found that information technology (IT) and manpower training act as key enablers towards reducing the lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz. frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness and customized applications for different stakeholders, depicting these as critical in online food/grocery supply chains. Considering the perishable nature of the products being handled, the impact of the enablers on product quality is also identified. Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers in fresh food e-groceries and linking them to the performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in a developing economy context, where supply chains are evolving.
Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management
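The driving/dependence-power step of ISM and MICMAC described above can be illustrated with a small sketch on a hypothetical final reachability matrix; the enabler names, the binary relations and the midpoint rule used to split the four clusters are assumptions for illustration, not the study's results.

```python
import numpy as np

# Hypothetical final reachability matrix (1 = enabler i influences enabler j);
# in practice this comes from expert judgements plus a transitivity check.
enablers = ["IT", "Training", "Branding", "Order processing", "Product freshness"]
R = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
])

# Enforce transitivity (Warshall's algorithm), as ISM requires.
n = len(R)
for k in range(n):
    for i in range(n):
        for j in range(n):
            R[i, j] = R[i, j] or (R[i, k] and R[k, j])

driving = R.sum(axis=1)       # row sums: driving power
dependence = R.sum(axis=0)    # column sums: dependence power
mid = (n + 1) / 2             # simple midpoint splitting the MICMAC quadrants

for name, drv, dep in zip(enablers, driving, dependence):
    if drv > mid and dep > mid:
        cluster = "linkage"
    elif drv > mid:
        cluster = "independent (driver)"
    elif dep > mid:
        cluster = "dependent"
    else:
        cluster = "autonomous"
    print(f"{name:18s} driving={drv} dependence={dep} -> {cluster}")
```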
237 Development of Agomelatine Loaded Proliposomal Powders for Improved Intestinal Permeation: Effect of Surface Charge
Authors: Rajasekhar Reddy Poonuru, Anusha Parnem
Abstract:
Purpose: To formulate a proliposome powder of agomelatine, an antidepressant drug, and to evaluate its physicochemical and in vitro characteristics and the effect of surface charge on ex vivo intestinal permeation. Methods: A film deposition technique was employed to develop proliposomal powders of agomelatine with varying molar ratios of the lipid hydrogenated soy L-α-phosphatidylcholine (HSPC) and cholesterol with a fixed amount of drug. With the aim of deriving a free-flowing and stable proliposome powder, the fluid retention potential of various carriers was examined. Liposome formation, the number of vesicles formed per mm³ upon hydration, vesicle size, and entrapment efficiency were assessed to deduce an optimized formulation. Sodium cholate was added to the optimized formulation to induce a surface charge on the formed vesicles. Solid-state characterization (FTIR, DSC, and XRD) was performed to assess the native crystalline and chemical behavior of the drug. In vitro dissolution tests of the optimized formulation and the pure drug were performed to estimate the dissolution efficiency (DE) and relative dissolution rate (RDR). The effective permeability coefficient in rat (Peff(rat)) and the enhancement ratio (ER) of the drug from the formulation and from the pure drug dispersion were calculated from ex vivo permeation studies in rat ileum. Results: The proliposomal powder formulated with an equimolar ratio of HSPC and cholesterol resulted in a higher number of vesicles (3.95) with 90% drug entrapment upon hydration. Neusilin UFL2 was selected as the carrier because of its high fluid retention potential (4.5) and good flow properties. The proliposome powder exhibited augmentation of the DE (60.3 ± 3.34) and RDR (21.2 ± 1.02) of agomelatine over the pure drug. Solid-state characterization studies demonstrated the transformation of the native crystalline form of the drug to an amorphous and/or molecular state, which correlated with the results obtained from the in vitro dissolution test. An elevated Peff(rat) of 46.5×10⁻⁴ cm/sec and an ER of 2.65 for the drug from the charge-induced proliposome formulation, relative to the pure drug dispersion, were obtained from ex vivo intestinal permeation studies performed in the ileum of Wistar rats. Conclusion: The improved physicochemical characteristics and ex vivo intestinal permeation of the drug from the charge-induced proliposome powder with Neusilin UFL2 reveal the potential of this system for enhancing the oral delivery of agomelatine.
Keywords: agomelatine, proliposome, sodium cholate, neusilin
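The abstract does not give the formulas behind DE and RDR; the definitions commonly used for these dissolution metrics (assumed here, not stated by the authors) are:

\[
\mathrm{DE}(t) = \frac{\int_0^{t} y(\tau)\,\mathrm{d}\tau}{y_{100}\; t} \times 100\%,
\qquad
\mathrm{RDR}(t) = \frac{\%\ \text{drug dissolved from the formulation at } t}{\%\ \text{drug dissolved from the pure drug at } t},
\]

where y(τ) is the percentage of drug dissolved at time τ and y₁₀₀ denotes complete (100%) dissolution.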
236 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation
Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy
Abstract:
The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment involves numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and liable to bias towards the discipline and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing the hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment, with evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis
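The geocost workflow described (a weighted combination of constraint rasters followed by a least-cost path between two terminals) is normally run inside GIS software; the sketch below is only a stand-in illustration on synthetic rasters using scikit-image's route_through_array, with invented weights and grid coordinates.

```python
import numpy as np
from skimage.graph import route_through_array

# Synthetic constraint rasters (0 = benign, 1 = hazardous) standing in for slope,
# rugosity and debris-flow susceptibility; weights reflect assumed relative hazard.
rng = np.random.default_rng(1)
slope = rng.random((200, 300))
rugosity = rng.random((200, 300))
debris_flow = rng.random((200, 300))
weights = {"slope": 3.0, "rugosity": 1.0, "debris_flow": 5.0}

# Composite geocost: weighted sum of normalised constraints, plus a small base cost
# so that route length itself is also penalised.
geocost = (weights["slope"] * slope
           + weights["rugosity"] * rugosity
           + weights["debris_flow"] * debris_flow
           + 0.1)

start, end = (10, 5), (190, 290)   # grid indices of the two pipeline terminals
path, total_cost = route_through_array(geocost, start, end,
                                       fully_connected=True, geometric=True)
print(f"Least-geocost route: {len(path)} cells, accumulated geocost {total_cost:.1f}")
```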
235 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle
Authors: Aloke Bapli, Debabrata Seth
Abstract:
Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon; its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate in various modern applications such as drug delivery, supercapacitors, and sensors. GO has been used in the photothermal treatment of cancers and Alzheimer's disease, among others. The main reason for choosing GO in our work is that it is a surface-active molecule; it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in the basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and easily modulate the photophysics of the probe molecules. We have used different spectroscopic techniques for our work. The ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All the fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model: LifeSpec-II, U.K.). Herein, we describe the photophysics of a hydrophilic molecule, 7-(N,N′-diethylamino)coumarin-3-carboxylic acid (7-DCCA), in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared to that in the absence of GO inside the reverse micelles. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system using the 7-DCCA molecule. 'Normal reverse micelle' means a reverse micelle in the absence of GO. The absorption maxima of 7-DCCA were blue-shifted and the emission maxima were red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation in GO-containing reverse micelles is always faster compared to normal reverse micelles. Solvent relaxation, at lower w₀ values, is always slower in GO-containing reverse micelles compared to normal reverse micelles, and at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to that in normal reverse micelles because, in the presence of GO, the polarity of the system increases; as the polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation
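The solvent relaxation and rotational relaxation times referred to here are conventionally extracted from time-resolved emission spectra and polarized fluorescence decays using the standard definitions below (assumed here, since the abstract does not state its exact analysis):

\[
C(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)},
\qquad
r(t) = \frac{I_{\parallel}(t) - G\,I_{\perp}(t)}{I_{\parallel}(t) + 2\,G\,I_{\perp}(t)},
\]

where ν(t) is the emission peak frequency at time t, I∥(t) and I⊥(t) are the parallel and perpendicular polarized decays, and G is the instrument correction factor; fitting C(t) and r(t) with (multi)exponentials yields the average solvent relaxation and rotational relaxation times.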
Procedia PDF Downloads 155
234 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data
Authors: M. Mueller, M. Kuehn, M. Voelker
Abstract:
In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology developed with the objective of providing SMEs with an efficient, temporarily deployable approach to data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research topics are the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods from the field of supervised learning (e.g. Support Vector Machines) are used. Achieving the necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors that account for possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, the correction models and the methods used for visualizing the position profiles. Studies have already shown that selective parameter variation can improve the accuracy of classification algorithms by up to 30%; similar potential can be observed for parameter variation of the signal-smoothing methods and filters. There is therefore increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. Data preparation, including methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite).
The evaluation is divided into two separate software modules with database connection: one performs the automated assignment of defined process classes to distance data using selected classification algorithms, and the other provides visualization and reporting through a graphical user interface (GUI).
Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing
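To make the chain from raw RSSI readings to process classification concrete, the following is a minimal sketch under stated assumptions: a moving-average filter for signal smoothing, a log-distance path-loss model for converting RSSI to distance, simple hand-crafted trace features, and a Support Vector Machine classifier from scikit-learn. Calibration constants, feature choices and the two process classes are illustrative and are not taken from the project.

```python
# Minimal sketch (assumptions, not the project code): RSSI smoothing, distance
# estimation via a log-distance path-loss model, and SVM-based process classification.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def smooth(rssi, window=5):
    """Moving-average filter to damp RSSI signal variations."""
    padded = np.pad(rssi, window // 2, mode="edge")
    return np.convolve(padded, np.ones(window) / window, mode="valid")

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

rng = np.random.default_rng(2)

def make_trace(moving):
    """Synthetic beacon-to-SMD RSSI trace for a moving or stationary object."""
    d = np.linspace(1.0, 8.0, 100) if moving else np.full(100, 2.0)
    rssi = -59.0 - 20.0 * np.log10(d) + rng.normal(0.0, 2.0, 100)
    dist = rssi_to_distance(smooth(rssi))
    # Simple per-trace features: mean, spread and net drift of the estimated distance.
    return [dist.mean(), dist.std(), dist[-1] - dist[0]]

X, y = [], []
for i in range(200):
    moving = i % 2 == 0
    X.append(make_trace(moving))
    y.append(int(moving))        # 1 = transport process, 0 = dwell process (assumed classes)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```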
Procedia PDF Downloads 131
233 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems
Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh
Abstract:
Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that have been progressively considered in pharmaceutical science for the transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by dispersing one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison with other oils, contain higher amounts of shorter-chain esters, which are suitable for use in micro- and nanoemulsion systems as carriers for actives, with excellent wetting behaviour and no oily feel. This research aimed to study the effect of the O/S ratio on the stability and rheological behaviour of sodium diclofenac-loaded and unloaded palm kernel oil ester nanoemulsion systems. The effect of different O/S ratios of 0.25, 0.50, 0.75, 1.00 and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms, including coalescence and Ostwald ripening, for the nanoemulsion system are discussed. In comparison between drug-loaded and unloaded nanoemulsion formulations, the drug-loaded formulations exhibited smaller particle sizes and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil to surfactant for producing a nanoemulsion with the highest stability. The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting the flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian, shear-thinning flow behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with the O/S ratio of 0.5 exhibited higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and yield stress than the unloaded formulations.
Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability
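The Power Law (Ostwald-de Waele) fit mentioned above can be reproduced on any flow curve with a few lines of fitting code. The sketch below uses synthetic shear-stress data purely for illustration; the consistency index K and flow behaviour index n it reports are not the study's values, and n < 1 is what identifies shear-thinning behaviour.

```python
# Minimal sketch (illustrative data, not the study's measurements): fitting a flow
# curve to the Power Law (Ostwald-de Waele) model, tau = K * gamma_dot**n.
import numpy as np
from scipy.optimize import curve_fit

gamma_dot = np.logspace(0, 3, 30)                       # shear rate (1/s)
tau = 2.5 * gamma_dot**0.55                             # synthetic shear stress (Pa)
tau *= 1 + np.random.default_rng(3).normal(0, 0.02, gamma_dot.size)

def power_law(gamma_dot, K, n):
    return K * gamma_dot**n

(K, n), _ = curve_fit(power_law, gamma_dot, tau, p0=(1.0, 1.0))
eta = K * gamma_dot**(n - 1)                            # apparent viscosity (Pa.s)
behaviour = "shear-thinning" if n < 1 else "shear-thickening or Newtonian"
print(f"K = {K:.2f} Pa.s^n, n = {n:.2f} ({behaviour})")
```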
Procedia PDF Downloads 423
232 A Paradigm Shift into the Primary Teacher Education Program in Bangladesh
Authors: Happy Kumar Das, Md. Shahriar Shafiq
Abstract:
This paper portrays an envisaged paradigm shift in the primary teacher education program in Bangladesh. An initiative has been taken with a vision to ensure an integrated approach to developing trainee teachers' knowledge and understanding about learning at a deeper level, and with that aim, the Diploma in Primary Education (DPEd) program replaces the Certificate-in-Education (C-in-Ed) program for primary teachers in the Bangladeshi context. The stated professional values of the existing program, such as a 'learner-centered', 'reflective' approach to pedagogy, tend to contradict the practice exemplified through its delivery mechanism. To address these challenges, the new program covers the knowledge and values that underpin the actual practice of teaching through two main components: (i) training institute-based learning and (ii) school-based learning. These two components are given approximately equal weighting within the program in terms of time, content and assessment, as the integration seeks to combine theoretical knowledge with practical knowledge and vice versa. The curriculum emphasizes a balance between the taught modules and the components of the practicum. For example, the theories of formative and summative assessment techniques are elaborated through focused reflection on case studies as well as observation and teaching practice in the classroom. The key ideology reflected in this newly developed program is the teachers' belief in 'holistic education', which can create opportunities for skills development in all three domains (cognitive, social and affective) simultaneously. The proposed teacher education program aims to address these areas of generic skill development alongside subject-specific learning outcomes. An exploratory study was designed in this regard in which seven Primary Teachers' Training Institutes (PTIs) in the seven divisions of Bangladesh piloted the DPEd program. The analysis was based on document analysis, periodic monitoring reports and empirical data gathered from the experimental PTIs. The findings of the study revealed that the intervention brought positive change in teachers' professional beliefs, attitudes and skills, along with improvement of the school environment. Teachers in training schools work together for collective professional development, supporting each other through lesson study, action research, reflective journals, group sharing and so on. Although the DPEd program addresses the above-mentioned factors, one of its challenges is the existing capacity and capability of the PTIs for effective implementation.
Keywords: Bangladesh, effective implementation, primary teacher education, reflective approach
Procedia PDF Downloads 217
231 Lessons from Patients Expired due to Severe Head Injuries Treated in Intensive Care Unit of Lady Reading Hospital Peshawar
Authors: Mumtaz Ali, Hamzullah Khan, Khalid Khanzada, Shahid Ayub, Aurangzeb Wazir
Abstract:
Objective: To analyse, from different perspectives, the deaths of patients treated in the neurosurgical ICU for severe head injuries, and to evaluate the data so obtained to help improve healthcare delivery to this group of patients in the ICU. Study Design: A descriptive study based on retrospective analysis of patients presenting to the neurosurgical ICU of Lady Reading Hospital, Peshawar. Study Duration: The period between 1st January 2009 and 31st December 2009. Material and Methods: The clinical records of all patients who presented with the clinical, radiological and surgical features of severe head injuries and who expired in the neurosurgical ICU were collected. A separate proforma was used recording age, sex, time of arrival and death, cause of head injury, radiological features, clinical parameters, and the surgical or non-surgical treatment given. The average duration of stay and the demographic and domiciliary representation of these patients were noted. The records were analysed accordingly for discussion and recommendations. Results: Of the total 112 (n = 112) patients who expired in one year in the neurosurgical ICU, young adults made up the majority, 64 (57.14%), followed by children, 34 (30.35%), and then the elderly, 10 (8.92%). Road traffic accidents were the major cause of presentation, 75 (66.96%), followed by a history of fall, 23 (20.53%), and firearm injuries, 13 (11.60%). The predominant CT scan features on presentation were cerebral edema and midline shift (diffuse neuronal injury), 46 (41.07%), followed by cerebral contusions, 28 (25%). Correctable surgical causes were present in only 18 patients (16.07%), and the majority, 94 (83.92%), were given conservative management. Of the 69 patients in whom the CT scan was repeated, 62 (89.85%) showed worsening of the initial abnormalities, while in 7 cases (10.14%) the features were static. Among the non-surgical cases, both ventilatory therapy in 7 (6.25%) and tracheostomy in 39 (34.82%) failed to change the outcome. The maximum stay in the neuro ICU before death was 48 hours in 35 (31.25%) cases, followed by 24 hours in 31 (27.67%) cases, one week in 24 (21.42%), and 72 hours in 16 (14.28%). Only 6 (5.35%) patients survived more than a week. Patients were received from almost all districts of NWFP except the Hazara division; there were some Afghan refugees as well. Conclusion: Mortality following head injuries is alarmingly high despite repeated claims of professional and administrative improvement. Even a setting like the ICU could not change the outcome in line with the desired aims and objectives in the present setup. A rethinking is needed at both the individual and institutional levels among the quarters concerned, with a clear aim of putting care on more scientific grounds. Only then can the desired results be achieved.
Keywords: Glasgow Coma Scale, pediatrics, geriatrics, Peshawar
Procedia PDF Downloads 350
230 Using Repetition of Instructions in Course Design to Improve Instructor Efficiency and Increase Enrollment in a Large Online Course
Authors: David M. Gilstrap
Abstract:
Designing effective instructions is a critical dimension of effective teaching systems. Owing to the absence of interpersonal contact, online courses present new challenges in this regard, especially with large class sizes. This presentation is a case study of how repetition of instructions within the course design was used to increase instructor efficiency in managing a rapid rise in enrollment. World of Turf is a two-credit, semester-long elective course for non-turfgrass majors at Michigan State University. It is taught entirely online and solely by the instructor, without any graduate teaching assistants. Discussion forums about subject matter are designated for each lecture, and those forums are moderated by a few undergraduate turfgrass majors. The instructions on course structure, navigation, and grading are conveyed in the syllabus and the course-introduction lecture. Regardless, students email questions about such matters, and the number of emails increased as course enrollment grew steadily during the first three years of the course's existence, almost to the point that the course was becoming unmanageable. Many of these emails occurred because the instructor was failing to update and operate the course in a timely and proper fashion, because he was too busy answering emails. Some of the emails did help the instructor ferret out poorly composed instructions, which he corrected. Beginning in the summer semester of 2015, the instructor overhauled the course by segregating content into weekly modules. The philosophy envisioned and embraced was that there can never be too much repetition of instructions in an online course. Instructions were duplicated within each of these modules as well as in associated modules for the syllabus and schedules, getting started, frequently asked questions, practice tests, surveys, and exams. In addition, informational forums were created and set aside for questions about the workings of the course and each of the three exams, thus creating even more repetition. Within these informational forums, students typically answer each other's questions, which demonstrates to students that the information is available in the course. When needed, the instructor interjects with correct answers or clarifies any misinformation that students might put forth. Increased repetition of instructions and strategic enhancements to the course design have resulted in a dramatic decrease in the number of email replies required of the instructor. The resulting improvement in efficiency allowed the instructor to raise enrollment limits, effecting a ten-fold increase in enrollment over a five-year period, with 1,050 students registered during the most recent academic year, making it easily the largest online course at the university. Because of the improvement in course-delivery efficiency, sufficient time was freed for the instructor to develop and launch an additional online course, further enhancing his productivity and value in terms of the number of student credit hours for which he is responsible.
Keywords: design, efficiency, instructions, online, repetition
Procedia PDF Downloads 209
229 Controlling the Release of Cyt C and L-Dopa from pNIPAM-AAc Nanogel Based Systems
Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm
Abstract:
Release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli such as temperature, pH and magnetic fields. pNIPAm-AAc nanogels respond to the combined action of both temperature and pH, the former being mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-responsiveness in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study of the effect of incorporating one or more layers of NPs into these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge, which complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3 °C for the nanogels and 39.3 °C for the nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling the gelling units together. Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson's disease, were loaded separately into the nanogels and the nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-Vis spectrophotometry, were rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. However, the release of L-Dopa from the nanogel-coated Fe@Au NPs was the fastest, accounting for release of almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug and the presence of Fe@Au NPs largely alter the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.
Keywords: controlled release, nanogels, volume phase transition temperature, l-dopa
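The abstract does not spell out the analytical method used to deduce the average VPTT, so the following is only an assumed, illustrative approach: fitting a Boltzmann sigmoid to hydrodynamic diameter versus temperature and reading the VPTT off as the inflection point. The temperatures and diameters below are synthetic.

```python
# Minimal sketch (an assumed approach, not the authors' analytical method):
# estimate an average VPTT by fitting a Boltzmann sigmoid to hydrodynamic
# diameter versus temperature; the inflection point T0 is taken as the VPTT.
import numpy as np
from scipy.optimize import curve_fit

T = np.linspace(25.0, 50.0, 26)                                    # temperature (deg C)
d = 450.0 + (900.0 - 450.0) / (1.0 + np.exp((T - 37.3) / 1.2))     # synthetic swelling curve (nm)
d += np.random.default_rng(4).normal(0.0, 5.0, T.size)

def boltzmann(T, d_min, d_max, T0, dT):
    """Sigmoidal collapse of particle diameter around the transition temperature T0."""
    return d_min + (d_max - d_min) / (1.0 + np.exp((T - T0) / dT))

popt, _ = curve_fit(boltzmann, T, d, p0=(400.0, 900.0, 37.0, 1.0))
print(f"estimated average VPTT ~ {popt[2]:.1f} deg C")
```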
Procedia PDF Downloads 331
228 AAV-Mediated Human α-Synuclein Expression in a Rat Model of Parkinson's Disease – Further Characterization of PD Phenotype, Fine Motor Functional Effects as Well as Neurochemical and Neuropathological Changes over Time
Authors: R. Pussinen, V. Jankovic, U. Herzberg, M. Cerrada-Gimenez, T. Huhtala, A. Nurmi, T. Ahtoniemi
Abstract:
Targeted over-expression of human α-synuclein using viral vector-mediated gene delivery into the substantia nigra of rats and non-human primates has been reported to lead to dopaminergic cell loss and the formation of α-synuclein aggregates reminiscent of Lewy bodies. We have previously shown how AAV-mediated expression of α-synuclein produces a chronic phenotype in rats over a 16-week follow-up period. In the context of these findings, we attempted to further characterize the long-term PD-related functional and motor deficits, as well as the neurochemical and neuropathological changes, of the AAV-mediated α-synuclein transfection model in rats during a chronic follow-up period. Different titers of recombinant AAV expressing human α-synuclein (A53T) were stereotaxically injected unilaterally into the substantia nigra of Wistar rats. Rats were allowed to recover for 3 weeks prior to initial baseline behavioral testing with the rotational asymmetry, stepping and cylinder tests. A similar behavioral test battery was applied again at weeks 5, 9, 12 and 15. In addition to the traditionally used rat PD model tests, the MotoRater test system, a high-speed kinematic gait performance monitoring system, was applied during the follow-up period, with the evaluation focused on differences in gait between groups. Tremor analysis was performed at weeks 9, 12 and 15. In addition to behavioral end-points, dopamine and its metabolites were evaluated neurochemically in the striatum. Furthermore, the integrity of the dopamine active transporter (DAT) system was evaluated using 123I-β-CIT and SPECT/CT imaging at weeks 3, 8 and 12 after AAV-α-synuclein transfection. Histopathology was examined from end-point samples at 3 or 12 weeks after AAV-α-synuclein transfection to evaluate dopaminergic cell viability and microglial (Iba-1) activation status in the substantia nigra using stereological analysis techniques. This study focused on the characterization and validation of the previously published AAV-α-synuclein transfection model in rats, with the addition of novel end-points. We present the long-term phenotype of AAV-α-synuclein-transfected rats assessed with traditionally used behavioral tests as well as with novel fine motor and tremor analysis techniques, which provide new insight into the unilateral effects of AAV-α-synuclein transfection. We also present data on neurochemical and neuropathological end-points for the dopaminergic system in the model and how well they correlate with the behavioral phenotype.
Keywords: adeno-associated virus, alphasynuclein, animal model, Parkinson's disease
Procedia PDF Downloads 295
227 Development and Characterization of Novel Topical Formulation Containing Niacinamide
Authors: Sevdenur Onger, Ali Asram Sagiroglu
Abstract:
Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and cause melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosomes has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural chemical found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and helps maintain the skin's main barrier, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin, and because of the nicotinic acid it contains, NA can be an irritant. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study. Different formulations were prepared by varying the amounts of phospholipid and cholesterol and examined in terms of particle size, polydispersity index (PDI) and pH. The pH values of the produced formulations were found to be compatible with the pH of the skin. Particle sizes were smaller than 250 nm, and the particles were of homogeneous size within the formulations (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and stability for topical use. For these reasons, liposomal cream formulations were prepared in this study for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in vitro release and ex vivo diffusion studies to be conducted in the continuation of this work, the formulation giving the most appropriate results is planned to be tested on volunteers after approval of the ethics committee.
Keywords: delivery systems, hyperpigmentation, liposome, niacinamide
Procedia PDF Downloads 112
226 Infection Control Drill: To Assess the Readiness and Preparedness of Staffs in Managing Suspected Ebola Patients in Tan Tock Seng Hospital Emergency Department
Authors: Le Jiang, Chua Jinxing
Abstract:
Introduction: The recent outbreak of Ebola virus disease in West Africa has drawn global concern. With a high fatality rate and direct human-to-human transmission, it has spread between countries and caused great harm to affected patients and families. As the designated hospital for managing epidemic outbreaks in Singapore, Tan Tock Seng Hospital (TTSH) faces great challenges in preparing for and managing potential outbreaks of emerging infectious diseases such as Ebola virus disease. Aim: We conducted an infection control drill in the TTSH emergency department to assess the readiness of healthcare and allied health workers in managing suspected Ebola patients. The drill also served to review the current Ebola clinical protocol and work instructions to ensure smoother and safer practice in managing Ebola patients in the TTSH emergency department. Result: The general preparedness level of staff involved in managing Ebola virus disease in the TTSH emergency department was not adequate. Staff knowledge deficits in the Ebola personal protective equipment gowning and degowning process increase the risk of cross-contamination in patient care. Gaps were also found in the current clinical protocol, such as unclear instructions and inaccurate information, which need to be revised to promote better staff performance in patient management. Logistical issues such as equipment dysfunction and inadequate supplies can lead to ineffective communication among teams and harm to patients in emergency situations. Conclusion: The infection control drill identified the need for clearer and more well-structured clinical protocols to be in place to promote participants' performance. In addition to quality protocols and guidelines, systematic training and annual refreshers for all staff in the emergency department are essential to prepare them for an outbreak of Ebola virus disease. Collaboration and communication with allied health staff are also crucial for the smooth delivery of patient care and for minimising the potential human suffering, property loss or injuries caused by the disease. Therefore, more clinical drills involving collaboration among the various departments are recommended in the future to monitor and assess the readiness of the TTSH emergency department in managing Ebola virus disease.
Keywords: ebola, emergency department, infection control drill, Tan Tock Seng Hospital
Procedia PDF Downloads 121
225 Pregnancy Rate and Outcomes after Uterine Fibroid Embolization Single Centre Experience in the Middle East from the United Arab Emirates at Alain Hospital
Authors: Jamal Alkoteesh, Mohammed Zeki, Mouza Alnaqbi
Abstract:
Objective: To evaluate pregnancy outcomes, complications and neonatal outcomes in women who had previously undergone uterine artery embolization. Design: Retrospective study. In this study, most women opted for UFE as a fertility treatment after failure of myomectomy or in vitro fertilization, or because hysterectomy was the only suggested option. Background: Myomectomy is the standard approach in patients with fibroids who desire a future pregnancy. However, myomectomy may be difficult in cases of numerous interstitial and/or submucous fibroids. In these cases, UFE has the advantage of embolizing all fibroids in one procedure, and it is an accepted nonsurgical treatment for symptomatic uterine fibroids. Study Methods: A retrospective study of 210 patients treated with UFE for symptomatic uterine fibroids between 2011 and 2016 was performed. UFE was performed using particles (PVA; Embozen, Beadblock) 500-900 µm in diameter. Pregnancies were identified using screening questionnaires and the study database. Of the 210 patients who received UFE treatment, 35 women younger than 40 wanted to conceive and had been unable to. All women in the study were advised to wait six months or more after UFE before attempting to become pregnant; the reported time before attempting to conceive ranged from seven to 33 months (average 20 months). Results: In a retrospective chart review of patients younger than 40 (35 patients), 18 patients reported 23 pregnancies, of which five were miscarriages. Two more pregnancies were complicated by premature labor. Of the 23 pregnancies, 16 were normal full-term pregnancies; 15 women had conceived once, and four had become pregnant twice. The remaining patients did not conceive. There was no reported intrauterine growth retardation in the prenatal period, fetal distress during labor, or problems related to uterine integrity. Two patients reported minor problems during pregnancy, namely borderline oligohydramnios and a low-lying placenta. In the cohort of women who did conceive, 16 out of 18 births proceeded normally without any complications (86%). Eight women delivered by cesarean section, and 10 women had a normal vaginal delivery. In this study of 210 women, UFE had a fertility rate of 47%. Our group of 23 pregnancies was small but did confirm successful pregnancy after UFE. The 45.7% rate of women below the age of 40 who completed a term pregnancy compares favorably with that of women who underwent myomectomy by other methods. Conclusion: Pregnancy after UFE is well documented. The risks of infertility following embolization, premature menopause, and hysterectomy are small, as is the radiation exposure during embolization. Fertility rates appear similar to those of patients undergoing myomectomy. UFE should not be contraindicated in patients who want to conceive, and they should be able to choose between surgical options and UFE.
Keywords: fibroid, pregnancy, therapeutic embolization, uterine artery
Procedia PDF Downloads 228
224 An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University
Authors: Jasmine Renner, Alice Njuguna
Abstract:
The booming technological revolution is driving a change in delivery systems, especially for e-learning and distance learning in higher education. This report presents the findings of the study 'An assessment of digital platforms, student online learning, teaching pedagogies, research and training at Kenya College of Accounting University' (hereinafter 'KCA'), undertaken as a joint collaborative project between a Carnegie African Diaspora Fellow and the staff, students and faculty at KCA University. The participants in this assessment/research met on selected days over a six-week period, during which one-on-one consultations, surveys, questionnaires, focus groups, training, and seminars were conducted to assess online learning and teaching, curriculum development, research and training at KCA. The project was organized into an eight-week workflow, with each week culminating in project activities designed to assess digital online teaching and learning at KCA. The project also included the training of distance learning instructors at KCA and the evaluation of KCA's distance platforms and programs. Additionally, through a curriculum audit and redesign, the project sought to enhance the curriculum development activities related to distance learning at KCA. The findings of this assessment/research represent a systematic, deliberate process of gathering, analyzing and using data collected from DL students, DL staff and lecturers, and the librarian in charge of online learning resources and access at KCA. We engaged in one-on-one interviews and discussions with staff, students, and faculty and collated the findings to inform practices that are effective in the ongoing design and development of eLearning at KCA University. Overall, the findings of the project led to the following recommendations. First, there is a need to address the infrastructural challenges that lead to poor internet connectivity for online learning, as well as training needs and content development for faculty and staff. Second, there is a need to manage cultural impediments within KCA, for example, fears of changing from one platform to another, and to secure institutional goodwill as a vital promise of effective online learning. Third, at a practical and short-term level, the following should be adopted at KCA University to promote the effective adoption of online learning: a) an eLearning-compatible faculty lab, b) revision of policy to include an eLearning strategy or its strategic management, c) recognition of faculty and staff engaged in training for the adoption and implementation of eLearning, and d) adequate website resources on eLearning. The report and findings represent a comprehensive approach to a systematic assessment of online teaching and learning, research and training at KCA.
Keywords: e-learning, digital platforms, student online learning, online teaching pedagogies
Procedia PDF Downloads 191