Search results for: multiple subordinated modeling
1721 Big Data Analytics and Public Policy: A Study in Rural India
Authors: Vasantha Gouri Prathapagiri
Abstract:
Innovations in the ICT sector enable a better quality of life for citizens across the globe. Countries that facilitate the use of new ICT techniques, such as big data analytics, find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity. Analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data also extends into machine learning and artificial intelligence, all of which improve the accuracy of data presentation for public policy making. Hence, using data analytics in public policy making is a proper way to march towards the all-round development of any country. Data-driven insights can help a government take important strategic decisions regarding the socio-economic development of the country. Developed nations like the UK and the USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country currently undergoing massive digitization through the Digital India Mission. Internet connections per household are rising every year. This translates into a massive data set that has the potential to transform public service delivery into an effective service mechanism for Indian citizens. Yet, compared to developed nations, this capacity is underutilized in India, particularly in the administrative system of rural areas. The present paper focuses on the need to adopt big data analytics in Indian rural administration and its contribution towards the development of the country at a faster pace. The results of the research point to the need for increased awareness and serious capacity building, with regard to big data analytics and its utility, among government personnel working for rural development.
Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be. Big data has a major role to play in this context, as it can assist in improving both policy making and implementation, aiming at the all-round development of the country.
Keywords: Digital India Mission, public service delivery system, public policy, Indian administration
Procedia PDF Downloads 160
1720 Using Stable Isotopes and Hydrochemical Characteristics to Assess Stream Water Sources and Flow Paths: A Case Study of the Jonkershoek Catchment, South Africa
Authors: Retang A. Mokua, Julia Glenday, Jacobus M. Nel
Abstract:
Understanding hydrological processes in mountain headwater catchments, such as the Jonkershoek Valley, is crucial for improving the predictive capability of hydrologic modeling in the Cape Fold Mountain region of South Africa, incorporating the influence of the Table Mountain Group fractured rock aquifers. Determining the contributions of the various possible surface and subsurface flow pathways in such catchments has been a challenge due to the complex nature of the fractured rock geology, low ionic concentrations, high rainfall, and streamflow variability. The study aimed to describe the mechanisms of streamflow generation during two seasons (dry and wet). Stable isotopes of water (18O and 2H), the hydrochemical tracer electrical conductivity (EC), and hydrometric data were used to assess the spatial and temporal variation in flow pathways and geographic sources of stream water. Stream water, groundwater, two shallow piezometers, and spring samples were routinely collected at two adjacent headwater sub-catchments and analyzed for isotopic ratios during baseflow conditions between January 2018 and January 2019. No significant seasonal variation in isotopic ratios was observed (p > 0.05); the stream isotope signatures were consistent throughout the study period. However, significant seasonal and spatial variations in EC were evident (p < 0.05). The findings suggest that, in the dry season, baseflow is generated primarily by groundwater and by interflow discharged from perennial springs. Wet season flows were attributed to interflow and to perennial and ephemeral springs. Furthermore, the observed seasonal variations in EC were indicative of a greater proportion of sub-surface water inputs.
With these results, a conceptual model of streamflow generation processes for the two seasons was constructed.
Keywords: electrical conductivity, Jonkershoek valley, stable isotopes, Table Mountain Group
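The EC-based source attribution described above is commonly formalized as two-component end-member mixing. A minimal sketch follows; the EC values are invented for illustration and are not from the Jonkershoek data:

```python
# Two-component end-member mixing using EC as a conservative tracer.
# All EC values below are hypothetical, in uS/cm.
def groundwater_fraction(ec_stream, ec_event, ec_groundwater):
    """Fraction of streamflow attributable to the groundwater end-member."""
    if ec_groundwater == ec_event:
        raise ValueError("end-members must differ for mixing analysis")
    return (ec_stream - ec_event) / (ec_groundwater - ec_event)

# Example: stream EC 45, event (rain) water 10, groundwater 80
frac_gw = groundwater_fraction(45.0, 10.0, 80.0)
print(round(frac_gw, 3))  # 0.5
```

The same mass-balance equation applies to the isotope tracers, with delta values in place of EC.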
Procedia PDF Downloads 110
1719 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion
Authors: Esam Jassim
Abstract:
Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact the atmospheric pollution level. Fine and ultrafine particulates have a tendency to escape the flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, increasing the inclination to corrosion, decreasing heat transfer in the thermal unit, and severely impacting human health. This adverseness manifests particularly in the regions of the world where coal is the dominant source of energy for consumption. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the existing type of calcium on the coarse, fine, and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed. The analysis of experiments on calcium distribution in ash is also modeled using the Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium originally present in coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process.
The mechanisms of particle size distribution and the mineral species of ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics
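The Poisson treatment of element distribution described above can be illustrated with a minimal sketch. The closed-form maximum-likelihood estimate of a Poisson rate is the sample mean, and the per-particle counts below are invented for illustration, not data from the study:

```python
import math
import statistics

def elemental_index(counts):
    """MLE of the Poisson rate λ: the mean of per-particle element counts."""
    return statistics.fmean(counts)

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson(λ) random variable."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

counts = [15, 18, 17, 16, 19, 17]      # hypothetical calcium occurrence counts
lam = elemental_index(counts)
print(round(lam, 1))                   # 17.0
print(round(poisson_pmf(17, lam), 4))  # probability of observing exactly 17
```

Under this reading, an index of 17 versus 7 simply reflects two Poisson populations with different mean occurrence rates.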
Procedia PDF Downloads 337
1718 Current Drainage Attack Correction via Adjusting the Attacking Saw-Function Asymmetry
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
The current drainage attack suggested previously is further studied in the regular setting of a closed-loop controlled Brushless DC (BLDC) motor with a Kalman filter in the feedback loop. Modeling and simulation experiments are conducted in a Matlab environment, implementing the closed-loop control model of BLDC motor operation in position sensorless mode under Kalman filter drive. The current increase in the motor windings is caused by the controller (a P-controller in our case) being affected by false data injection that substitutes the angular velocity estimates with distorted values. The distortion is applied by multiplication with a coefficient whose values are taken from a distortion function synchronized in its periodicity with the rotor's position change. A saw function with a triangular tooth shape is studied herewith for the purpose of carrying out the bias injection with current drainage consequences. The specific focus here is on how the asymmetry of the tooth in the saw function affects the flow of current drainage. The purpose is two-fold: (i) to produce and collect the signature of an asymmetric saw in the attack for a further pattern recognition process, and (ii) to determine the conditions for improving the stealthiness of such an attack by regulating the asymmetry of the saw function used. It is found that modifying the symmetry of the saw tooth affects the periodicity of the current drainage modulation. Specifically, the modulation frequency of the drained current for a fully asymmetric tooth shape coincides with the modulation frequency of the saw function itself. Increasing the symmetry parameter of the triangular tooth shape leads to an increase in the modulation frequency of the drained current. Moreover, for fully symmetric triangular shapes this frequency reaches the switching frequency of the motor windings, thus becoming undetectable and improving the stealthiness of the attack.
Therefore, the collected signatures of the attack can serve for attack parameter identification via the pattern recognition route.
Keywords: bias injection attack, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, current drainage, saw-function, asymmetry
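The asymmetric saw tooth at the heart of the attack can be sketched as a waveform with a single symmetry parameter. This is a generic illustration of such a function, not the authors' Matlab implementation; the sampling grid and symmetry value are arbitrary:

```python
def asymmetric_saw(phase, symmetry):
    """One asymmetric triangular tooth over phase in [0, 1).

    symmetry = 0.5 gives a symmetric triangle; as symmetry -> 0 the tooth
    degenerates into a pure falling sawtooth (fully asymmetric shape).
    """
    phase %= 1.0
    if symmetry <= 0.0:
        return 1.0 - phase                       # fully asymmetric limit
    if phase < symmetry:
        return phase / symmetry                  # rising edge
    return (1.0 - phase) / (1.0 - symmetry)      # falling edge

# Sample one period at 8 points with a moderately asymmetric tooth
samples = [round(asymmetric_saw(p / 8, 0.25), 3) for p in range(8)]
print(samples)  # [0.0, 0.5, 1.0, 0.833, 0.667, 0.5, 0.333, 0.167]
```

Sweeping `symmetry` between 0 and 0.5 while keeping the tooth period locked to the rotor position would reproduce the kind of modulation-frequency study the abstract describes.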
Procedia PDF Downloads 81
1717 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients
Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi
Abstract:
Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effect model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were drawn from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R software, and an error level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died; among them, 48 (58%) were men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean 7-year survival was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull parametric survival models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients.
Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.
Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution
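The data-generating process behind such a model can be sketched by simulation: a Weibull baseline hazard multiplied by a subject-specific gamma frailty with mean 1. The parameter values below are illustrative assumptions, not estimates from the study:

```python
import math
import random

def simulate_frailty_times(n, shape, scale, theta, seed=42):
    """Simulate event times T with S(t | z) = exp(-z * (t / scale) ** shape),
    where z ~ Gamma(1/theta, theta), so E[z] = 1 and Var[z] = theta.

    Larger frailty z means a proportionally higher hazard, i.e. shorter times.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        z = rng.gammavariate(1.0 / theta, theta)   # multiplicative frailty
        u = rng.random()                           # U(0,1) for inverse transform
        t = scale * (-math.log(u) / z) ** (1.0 / shape)
        times.append(t)
    return times

# 305 subjects (the study's sample size) with hypothetical parameters, in months
times = simulate_frailty_times(n=305, shape=1.2, scale=30.0, theta=0.5)
print(min(times) > 0, len(times))  # True 305
```

Fitting the reverse direction, recovering `theta` and the Weibull parameters from observed (possibly censored) times, is what the R analysis in the study performs.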
Procedia PDF Downloads 261
1716 Community Development and Empowerment
Authors: Shahin Marjan Nanaje
Abstract:
The present century is a time in which social workers face complicated issues in their area of work. Focusing all attention on bringing change to the lives of those who live on the margins or in poverty has caused us to forget to look at ourselves and change the way we address issues. There appears to be a new area of need that social workers should respond to: the need for dialogue and collaboration. To address the issues and needs of a community, both individually and as a group, we need a new method of dialogue as a tool for reaching collaboration. Social workers, as a link between community, organizations, and government, play multiple roles. They need to focus on communication with new abilities, to transfer the narratives of the community to organizations and government and vice versa. This relates not only to language but to changing the dialogue itself. Migration to big cities by job seekers in search of survival has created its own issues and difficulties and has therefore created new needs. Collaboration is required not only between the government sector and non-government sectors but also, in a new way, among government, non-government organizations, and communities. To reach this collaboration, we need healthy, productive, and meaningful dialogue, and in this new collaboration there will be no hierarchy between members. The methodology selected by the researcher focused on observation in the first place and used a questionnaire in the second place. The duration of the research was three months and included home visits, group discussions, and communal narrations, which brought enough evidence to understand the real needs of the community. The randomly selected sample included 70 immigrant families who work as sweepers in a slum community in Bangalore, Karnataka.
The result reveals that there is a gap between what a community is and what organizations, government, and members of society apart from this community think about it. Consequently, it is learnt that to supply any service or bring any change to a slum community, we need to apply new skills of dialogue and understand each other before providing any services. Also, to bring change to the lives of marginal groups at large, we need collaboration, as their challenges are collective and need to be addressed by different groups working together. The outcome of the research helped the researcher to see the area of need for a new method of dialogue and collaboration, as well as a framework for collaboration and dialogue, which were the main focus of the paper. The researcher used observation experience from ten NGOs and their activities to create the framework for dialogue and collaboration.
Keywords: collaboration, dialogue, community development, empowerment
Procedia PDF Downloads 589
1715 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively
Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus
Abstract:
Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to objectively measure ball-handling skills (BHS-test) using digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four tests (one throwing and three juggling tasks). In the other four tests (one balancing and three rolling tasks), the assessment was technologically enhanced using a ball machine, a Kinect camera, and balls with motion sensors. 3D printing technology was used to construct equipment, while all results were administered digitally with smartphones/tablets, computers, and a specially constructed application that sent data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was performed on a random subset (53 of the participants). Furthermore, latent variable modeling was employed to confirm the structure with the remaining subset (160 of the participants). The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test.
To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment
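The reliability figure reported above (α = .77) is Cronbach's alpha, which can be computed directly from an item-by-respondent score matrix. A minimal sketch, with invented item scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from `items`: one list of scores per test item,
    with the same respondents (in the same order) in every list."""
    k = len(items)                                  # number of items
    n = len(items[0])                               # number of respondents

    def var(xs):                                    # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var_sum / var(totals))

# 3 hypothetical items scored for 4 respondents
items = [[1, 2, 3, 4], [1, 2, 3, 4], [2, 2, 3, 5]]
print(round(cronbach_alpha(items), 2))  # 0.98
```

With the study's 8 items and 213 respondents, the same formula yields the reported α.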
Procedia PDF Downloads 94
1714 Investigating a Crack in Care: Assessing Long-Term Impacts of Child Abuse and Neglect
Authors: Remya Radhakrishnan, Hema Perinbanathan, Anukriti Rath, Reshmi Ramachandran, Rohith Thazhathuvetil Sasindrababu, Maria Karizhenskaia
Abstract:
Childhood adversities have lasting effects on health and well-being. This abstract explores the connection between adverse childhood experiences (ACEs) and health consequences, including substance abuse and obesity. Understanding the impact of childhood trauma and emphasizing the importance of culturally sensitive treatments and focused interventions help to mitigate these effects. Research consistently shows a strong link between ACEs and poor health outcomes. Our team conducted a comprehensive literature review of depression and anxiety in Canadian children and youth, exploring diverse treatment methods, including medical treatment, psychotherapy, and alternative therapies like art and music therapy. We searched Medline, Google Scholar, and the St. Lawrence College Library. Only original research papers, published between 2012 and 2023, peer-reviewed, and reporting on childhood adversities, their health effects, and treatment methods in children and youth in Canada were considered. We focused on their significance in treating depression and anxiety. According to the study's findings, the prevalence of adverse childhood experiences (ACEs) is still a significant concern. In Canada, 40% of people report having had multiple ACEs, and 78% report having had at least one ACE, highlighting the persistence of childhood adversity and indicating that the issue is unlikely to fade in the near future. Likewise, the findings revealed that individuals who experienced abuse, neglect, or violence during childhood are more likely to engage in harmful behaviors such as polydrug use, suicidal ideation, and victimization, and to suffer from mental health problems such as depression and post-traumatic stress disorder (PTSD).
Keywords: adverse childhood experiences (ACEs), obesity, post-traumatic stress disorder (PTSD), resilience, substance abuse, trauma-informed care
Procedia PDF Downloads 122
1713 The Application of the Biopsychosocial-Spiritual Model to the Quality of Life of People Living with Sickle Cell Disease
Authors: Anita Paddy, Millicent Obodai, Lebbaeus Asamani
Abstract:
The management of sickle cell disease requires a multidisciplinary team for better outcomes. Thus, literature abounds on the application of the biopsychosocial model to the management and explanation of chronic pain in sickle cell disease (SCD) and other chronic diseases. However, there is limited research on the use of the biopsychosocial model together with a spiritual component (the biopsychosocial-spiritual model). The study investigated the extent to which healthcare providers utilized the biopsychosocial-spiritual model in the management of chronic pain to improve the quality of life (QoL) of patients with SCD. This study employed a descriptive survey design involving a consecutive sample of 261 patients with SCD who were between 18 and 79 years of age and were accessing hematological services at the Clinical Genetics Department of the Korle Bu Teaching Hospital. These patients willingly consented to participate in the study by appending their signatures. The theory of integrated quality of life, the gate control theory of pain, and the biopsychosocial-spiritual model were tested. An instrument for the biopsychosocial-spiritual model was developed on the basis of the literature reviewed, while the World Health Organization Quality of Life BREF (WHOQoL-BREF) and the spirituality rating scale were adapted for data collection. Data were analyzed using descriptive statistics (means, standard deviations, frequencies, and percentages) and partial least squares structural equation modeling. The study revealed that healthcare providers leaned heavily toward the biological domain of the model compared with the other domains. Hence, participants' QoL was not fully improved as suggested by the biopsychosocial-spiritual model. Again, the QoL and spirituality of patients with SCD were quite high. A significant negative impact of spirituality on QoL was also found.
Finally, the biosocial domain of the biopsychosocial-spiritual model was the most significant predictor of QoL. It is recommended that policymakers train healthcare providers to integrate the psychosocial-spiritual component into health services. Also, education on SCD and its impact across the domains of the model should be intensified, while health practitioners should consider utilizing these components fully in the management of the condition.
Keywords: biopsychosocial-spiritual, sickle cell disease, quality of life, healthcare, Accra
Procedia PDF Downloads 75
1712 Interaction of Steel Slag and Zeolite on Ammonium Nitrogen Removal and Its Illumination on a New Carrier Filling Configuration for Constructed Wetlands
Authors: Hongtao Zhu, Dezhi Sun
Abstract:
Nitrogen and phosphorus are essential nutrients for biomass growth, but excessive nitrogen and phosphorus can contribute to the accelerated eutrophication of lakes and rivers. The constructed wetland is an efficient and eco-friendly wastewater treatment technology with low operating cost and low energy consumption. Because of its high affinity for ammonium ions, zeolite is applied as a common substrate in constructed wetlands worldwide. Another substrate commonly seen in constructed wetlands is steel slag, which has a high content of Ca, Al, or Fe and possesses a strong affinity for phosphate. Owing to the excellent ammonium removal ability of zeolite and the phosphate removal ability of steel slag, the two were combined in the substrate bed of a constructed wetland in order to enhance the simultaneous removal of nitrogen and phosphorus. In our early tests, zeolite and steel slag were mixed in order to simultaneously achieve high removal efficiencies of ammonium-nitrogen and phosphate-phosphorus. However, compared with the results when only zeolite was used, the removal efficiency of ammonium dropped sharply when zeolite and steel slag were used together. The main objective of this study was to establish an overview of the interaction of steel slag and zeolite in ammonium nitrogen removal. The dissolution of CaO from slag, as well as the effects of influencing parameters (i.e., pH and Ca2+ concentration) on ammonium adsorption onto zeolite, was systematically studied. Modeling results of Ca2+ and OH- release from slag indicated that a pseudo-second-order reaction fitted better than a pseudo-first-order reaction. Raising the pH from 7 to 12 resulted in a drastic reduction in the ammonium adsorption capacity of zeolite, which peaked at pH 7. A high Ca2+ concentration in solution could also inhibit the adsorption of ammonium onto zeolite.
The mechanism by which steel slag inhibits the ammonium adsorption capacity of zeolite is twofold: on the one hand, OH- released from steel slag reacts with ammonium ions to produce ammonia in molecular form (NH3·H2O), which causes the dissociation of NH4+ from zeolite; on the other hand, Ca2+ can replace NH4+ ions on the surface of the zeolite. An innovative substrate filling configuration, in which zeolite and steel slag are placed sequentially, was proposed to eliminate the disadvantageous effects of steel slag. Experimental results showed that the novel filling configuration was superior to the two contrast filling configurations in terms of ammonium removal.
Keywords: ammonium nitrogen, constructed wetlands, steel slag, zeolite
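The pseudo-second-order comparison mentioned above is usually done with the linearized form t/q = 1/(k·qe²) + t/qe, fitted by ordinary least squares. A minimal sketch with synthetic data generated from the model itself (so the fit recovers the parameters exactly; all values and units are illustrative):

```python
def pso_fit(t, q):
    """Fit the linearized pseudo-second-order model t/q = 1/(k*qe**2) + t/qe.

    Returns (qe, k): qe from the slope, k from the intercept."""
    y = [ti / qi for ti, qi in zip(t, q)]
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    intercept = my - slope * mt
    qe = 1.0 / slope
    return qe, 1.0 / (intercept * qe ** 2)

qe_true, k_true = 2.0, 0.5                      # hypothetical parameters
t = [1.0, 2.0, 4.0, 8.0, 16.0]
# Integrated PSO kinetics: q(t) = k*qe^2*t / (1 + k*qe*t)
q = [k_true * qe_true**2 * ti / (1.0 + k_true * qe_true * ti) for ti in t]
qe, k = pso_fit(t, q)
print(round(qe, 3), round(k, 3))  # 2.0 0.5
```

Comparing the residuals of this fit against a pseudo-first-order fit of the same release data is the model selection step the abstract reports.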
Procedia PDF Downloads 255
1711 Smart Signature - Medical Communication without Barrier
Authors: Chia-Ying Lin
Abstract:
This paper explains how to enhance doctor-patient and nurse-patient communication through multiple intelligent signing methods and a user-centered approach. It is hoped that through the implementation of electronic consent, the problems faced by paper consent can be solved: storage, resource utilization, convenience, correctness of information, integrated management, statistical analysis, and other related issues, making better use and allocation of resources to provide better medical quality. First, the medical records department was invited to assist in the inventory of paper consent forms in the hospital: organising, classifying, merging, coding, and setting. Second, the electronic consent configuration files were planned: setting the form number, consent form group, fields and templates, and the corresponding doctor's order code. Third, four rapid methods of generating electronic consent were summarized: according to the doctor's order, according to the medical behavior, according to the schedule, and manual generation of the consent form. Finally, for system promotion and adjustment, an "electronic consent promotion team" was formed, following five major processes: planning, development, testing, release, and feedback; clinical units were invited to raise the difficulties faced during promotion, and improvements were made accordingly. The electronic signature rate of the whole hospital increased from 4% in January 2022 to 79% in November 2022. The saved resources are used more effectively, including reduced paper usage (a smaller carbon footprint), reduced ink cartridge costs, re-planned use of the space formerly occupied by paper medical records, and human resources freed to provide better services. Through the introduction of information technology, the main spirit of "lean management" is implemented.
Transforming and reengineering the process to eliminate unnecessary waste is also the highest purpose of this project.
Keywords: smart signature, electronic consent, electronic medical records, user-centered, doctor-patient communication, nurse-patient communication
Procedia PDF Downloads 126
1710 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values
Authors: Dimiter M. Dimitrov, Abdullah Sadaawi
Abstract:
The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by school and for the entire national sample. NAFS scoring and equating are performed on a bounded scale (the D-scale, ranging from 0 to 1) in the framework of the recently developed D-scoring method of measurement. The specificity of the NAFS measurement framework and the complexity of the data presented both challenges and opportunities for (a) the estimation of score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students "nested" within schools and test items "nested" within test forms. The GT design was executed via multilevel modeling syntax in R. Cut-scores (on the D-scale) for the classification of students into performance categories were derived via a recently developed standard-setting method, referred to as the "Response Vector for Mastery" (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students' scores on NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random draws from a logit-normal distribution with parameters derived from the student's D-score and its conditional standard error, SE(D).
All procedures related to D-scoring, equating, generating plausible values, and classifying students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.
Keywords: large-scale assessment, reliability, generalizability theory, plausible values
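The logit-normal draw of plausible values on the bounded D-scale can be sketched as follows. The parameterization (mean equal to the logit of the D-score, standard deviation given on the logit scale) is an illustrative assumption, not the exact NAFS procedure, and the numbers are invented:

```python
import math
import random

def plausible_values(d, se_logit, n=5, seed=7):
    """Draw n plausible values on the (0, 1) D-scale for a student:
    sample on the logit scale around logit(d), then back-transform,
    so every draw stays strictly inside the bounded scale."""
    rng = random.Random(seed)
    mu = math.log(d / (1.0 - d))                 # logit of the D-score
    draws = (rng.gauss(mu, se_logit) for _ in range(n))
    return [1.0 / (1.0 + math.exp(-x)) for x in draws]

# Hypothetical student: D-score 0.62, SE of 0.15 on the logit scale
pvs = plausible_values(d=0.62, se_logit=0.15)
print(all(0.0 < v < 1.0 for v in pvs), len(pvs))  # True 5
```

Classifying each draw against the RVM-derived cut-scores and aggregating over students would yield the per-school performance distributions described above.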
Procedia PDF Downloads 21
1709 In Search of Commonalities in the Determinants of Child Sex Ratios in India and the People's Republic of China
Authors: Suddhasil Siddhanta, Debasish Nandy
Abstract:
The child sex ratio pattern in Asian populations is highly masculine, mainly due to birth masculinity and gender bias in child mortality. The vast and growing literature on the female deficit in the world population points out the diffusion of this child sex ratio pattern to many Asian as well as neighboring European countries. However, little attention has been given to understanding the factors common to different demographic settings in explaining the child sex ratio pattern. Such scholarship is extremely important, as the level of gender inequity differs from one country to another. Our paper tries to explain the major structural commonalities in the child masculinity pattern in the two demographic billionaires, India and China. The analysis reveals that, apart from the geographical diffusion of sex selection technology, the patrilocal social structure, as proxied by households with more than one generation in China and by the proportion of the population aged 65 years and above in India, can explain a significant share of the variation in missing girl children in these two countries. Even after controlling for individual capacity-building factors such as educational attainment or workforce participation, the measure of social stratification comes out as the major determinant of child sex ratio variation. Other socio-economic factors that perform well are the agency-building factors of females, such as changing marriage customs, proxied by the divorce and remarriage ratio for China and by the percentage of females marrying at or after the age of 20 years in India, and female workforce participation. The proportion of minorities in the socio-religious composition of the population and the gender bias in scholastic attainment in both countries were also found to be significant in modeling child sex ratio variations.
All these significant common factors associated with child sex ratios point toward one single most important factor: the historical evolution of patriarchy and its contemporary perpetuation in both countries. It seems that the prohibition of sex selection might not be sufficient to combat the peculiar skewness of excessive maleness in the child population of these countries. Demand-side policies are therefore of utmost importance to root out the gender bias in child sex ratios.
Keywords: child sex ratios, gender bias, structural factors, prosperity, patrilocality
Procedia PDF Downloads 157
1708 A Geographical Study of Women Status in an Emerging Urban Industrial Economy: Experiences from the Asansol Sub-Division and Durgapur Sub-Division of West Bengal, India
Authors: Mohana Basu, Snehamanju Basu
Abstract:
Urbanization has an immense impact on the holistic development of a region. In that same context, the level of women's empowerment plays a significant role in the development of any region, particularly a region belonging to a developing country. The present study investigates the status of women's empowerment in the Asansol Durgapur Planning Area of the state of West Bengal, India by investigating the status of women and their access to various facilities and their awareness of the various governmental and non-governmental schemes meant for their elevation. Through this study, an attempt has been made to understand the perception of the respondents in the context of women's empowerment. The study integrates multiple sources of qualitative and quantitative data collected from various reports, field-based measurements, a questionnaire survey and community-based participatory appraisals. Results reveal that women in the rural parts of the region are relatively disempowered due to the various restrictions imposed on them and enjoy lower socioeconomic clout than their male counterparts, in spite of the several remedial efforts taken by the government and NGOs to elevate their position in society. A considerable gender gap still exists regarding access to education, employment and decision-making power in the family, and significant differences in attitudes towards women are observable between the rural and urban areas. The freedom of women varies primarily according to their age group, educational level, employment and income status, and also the degree of urbanization. The Asansol Durgapur Planning Area is primarily an industrial region where huge scope for employment generation exists. But these disparities are quite alarming and indicate that economic development does not always usher in socially justifiable rights and access to resources for men and women alike in its wake.
Against this backdrop, this study attempts to put forward relevant suggestions that can be followed for the betterment of the status of women.
Keywords: development, disempowered, economic development, urbanization, women empowerment
Procedia PDF Downloads 146
1707 Military Leadership: Emotion Culture and Emotion Coping in Morally Stressful Situations
Authors: Sofia Nilsson, Alicia Ohlsson, Linda-Marie Lundqvist, Aida Alvinius, Peder Hyllengren, Gerry Larsson
Abstract:
In irregular warfare contexts, military personnel are often presented with morally ambiguous situations where they are aware of the morally correct choice but may feel prevented from following through with it due to organizational demands. Moral stress and/or injury can be the outcome of the individual's experienced dissonance. These types of challenges place a large demand on the individual to manage their own emotions and the emotions of others, particularly in the case of a leader. Both the ability and inability to regulate emotion can result in different combinations of short- and long-term reactions after morally stressful events, which can be either positive or negative. Our study analyzed the combination of these reactions based upon the types of morally challenging events described by the subjects. 1) What institutionalized norms concerning emotion regulation are favorable in short- and long-term perspectives after a morally stressful event? 2) What individual emotion-focused coping strategies are favorable in short- and long-term perspectives after a morally stressful event? To address these questions, we conducted a quantitative study in military contexts in Sweden and Norway on upcoming or current military officers (n = 331). We tested a theoretical model built upon a recently developed qualitative study. The data were analyzed using factor analysis, multiple regression analysis and subgroup analyses. The results indicated that an individual's restriction of emotion in order to achieve an organizational goal, which results in emotional dissonance, can be an effective short-term strategy for both the individual and the organization; however, it appears to be unfavorable in a long-term perspective and can result in negative reactions.
Our results are intriguing because they showed an increased percentage of reported negative long-term reactions (13%) indicating PTSD-related symptoms, in comparison to previous Swedish studies that reported lower PTSD symptomology.
Keywords: emotion culture, emotion coping, emotion management, military
Procedia PDF Downloads 599
1706 Reactive Power Control Strategy for Z-Source Inverter Based Reconfigurable Photovoltaic Microgrid Architectures
Authors: Reshan Perera, Sarith Munasinghe, Himali Lakshika, Yasith Perera, Hasitha Walakadawattage, Udayanga Hemapala
Abstract:
This research presents a reconfigurable architecture for residential microgrid systems utilizing Z-Source Inverter (ZSI) to optimize solar photovoltaic (SPV) system utilization and enhance grid resilience. The proposed system addresses challenges associated with high solar power penetration through various modes, including current control, voltage-frequency control, and reactive power control. It ensures uninterrupted power supply during grid faults, providing flexibility and reliability for grid-connected SPV customers. Challenges and opportunities in reactive power control for microgrids are explored, with simulation results and case studies validating proposed strategies. From a control and power perspective, the ZSI-based inverter enhances safety, reduces failures, and improves power quality compared to traditional inverters. Operating seamlessly in grid-connected and islanded modes guarantees continuous power supply during grid disturbances. Moreover, the research addresses power quality issues in long distribution feeders during off-peak and night-peak hours or fault conditions. Using the Distributed Static Synchronous Compensator (DSTATCOM) for voltage stability, the control objective is nighttime voltage regulation at the Point of Common Coupling (PCC). In this mode, disconnection of PV panels, batteries, and the battery controller allows the ZSI to operate in voltage-regulating mode, with critical loads remaining connected. The study introduces a structured controller for Reactive Power Controlling mode, contributing to a comprehensive and adaptable solution for residential microgrid systems. Mathematical modeling and simulations confirm successful maximum power extraction, controlled voltage, and smooth voltage-frequency regulation.
Keywords: reconfigurable architecture, solar photovoltaic, microgrids, z-source inverter, STATCOM, power quality, battery storage system
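The voltage-regulating (volt-var) mode described above amounts to a feedback loop from the measured PCC voltage to the inverter's reactive-power setpoint. A minimal sketch of that loop, assuming a simple static grid sensitivity and illustrative PI gains (none of these numbers come from the paper):

```python
def regulate_pcc_voltage(v_ref, v0, dv_dq=0.02, kp=5.0, ki=20.0,
                         dt=0.01, steps=2000):
    """Discrete PI loop driving the PCC voltage (p.u.) to v_ref by adjusting
    the reactive-power command q (kvar). dv_dq is an assumed static grid
    sensitivity (p.u. voltage rise per kvar injected), not a paper value."""
    q, integ, v = 0.0, 0.0, v0
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt
        q = kp * err + ki * integ   # reactive-power setpoint
        v = v0 + dv_dq * q          # simple static model of the feeder
    return v, q

v_final, q_final = regulate_pcc_voltage(v_ref=1.0, v0=0.95)
```

In steady state the loop settles at q ≈ (v_ref - v0) / dv_dq, about 2.5 kvar for the assumed numbers; a real DSTATCOM controller layers current limits and inner current loops on top of this idea.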
Procedia PDF Downloads 17
1705 Family Medicine Residents in End-of-Life Care
Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal
Abstract:
Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in independently managing terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discuss medication effects and patient wishes after death, manage coping with pain, vomiting and constipation, and respond to limited patient decision-making capacity. Half of the respondents were confident in supporting the patient or a family member when they become upset. The majority expressed confidence in many end-of-life care skills provided that supervision, coaching and consultation were available. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect their participation and confidence in caring for the dying.
Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency with sufficient supervision and role modeling may lead to knowledge and skill improvement to ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education.
Keywords: end of life care, geriatrics, palliative care, residency training skill
Procedia PDF Downloads 257
1704 A Review of Benefit-Risk Assessment over the Product Lifecycle
Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris
Abstract:
Benefit-risk assessment (BRA) is a valuable tool that is applied at multiple stages of a medicine's lifecycle, and the assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used in approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years were examined. BRA involves the review of two dimensions: the dimension of benefits (determined mainly by therapeutic efficacy) and the dimension of risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines over the product lifecycle (from Phase I trials through the authorization procedure to post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint-specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders' preferences (utilities). All these approaches share two common goals, to assist the analysis and to improve the communication of decisions, but each has its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development.
The use of formal, systematic structured approaches to BRA for regulatory decision-making and quantitative methods to support BRA during the product lifecycle is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches
Procedia PDF Downloads 160
1703 Study of the Design and Simulation Work for an Artificial Heart
Authors: Mohammed Eltayeb Salih Elamin
Abstract:
This study discusses the concept of the artificial heart using engineering concepts from fluid mechanics and the characteristics of non-Newtonian fluids, with the purpose of serving heart patients and improving aspects of their lives. According to the World Health Organization (WHO), diseases of the heart and blood vessels are the leading cause of death in the world, accounting for about 30% of deaths, so heart failure can be considered the number one cause of death worldwide. Since heart transplantation is very difficult and not always available, the idea of the artificial heart becomes essential, and it is important to participate in developing this idea by searching for the weak points in earlier designs in the hope of improving them for the benefit of humanity. In this study, a pump was designed to pump blood through the human body, taking into account all the factors that would allow it to replace the human heart and to work with the same characteristics and efficiency as the human heart. The pump was designed on the principle of the diaphragm pump. Three models of blood were derived from the real characteristics of blood, and all of these models were simulated in order to study the effect of the pumping work on the fluid. After that, we studied the properties of this pump using Ansys 15 software to simulate blood flow inside the pump and the stresses it will undergo. The 3D geometry modeling was done using SolidWorks, and the geometries were then imported into the Ansys Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is Ansys FLUENT, a tool for analyzing fluid flow problems; the general term for this branch of science is Computational Fluid Dynamics (CFD).
Basically, the Design Modeler is used during the pre-processing procedure, which is a crucial step before solving the fluid flow problem. Some of the key operations are geometry creation, which specifies the domain of the fluid flow problem; mesh generation, which means discretization of the domain so that the governing equations can be solved at each cell; and specification of the boundary zones, to which boundary conditions for the problem are applied. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation of the work.
Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump
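The "three models of blood" mentioned above refer to non-Newtonian viscosity laws. As one minimal sketch, the Carreau model is a common description of blood's shear-thinning behaviour; the default parameters below are widely quoted literature values for blood, assumed here rather than taken from this study.

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Apparent blood viscosity (Pa*s) under the Carreau model.

    mu0 / mu_inf are the zero- and infinite-shear viscosities, lam a time
    constant (s), n the power-law index; defaults are assumed literature
    values for blood, not figures from this paper."""
    return mu_inf + (mu0 - mu_inf) * \
        (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Blood shear-thins: apparent viscosity falls as shear rate (1/s) rises.
low = carreau_viscosity(0.1)
mid = carreau_viscosity(10.0)
high = carreau_viscosity(1000.0)
```

In a CFD solver such a law would be supplied as the fluid's viscosity model instead of a constant Newtonian viscosity.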
Procedia PDF Downloads 460
1702 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters
Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran
Abstract:
The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on the use of time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to the pressure sensor drifts that become critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load.
This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
Keywords: electric propulsion, mass gauging, propellant, PVT, xenon
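A PVT-type retrieval infers propellant mass from tank pressure, volume, and temperature through an equation of state. The sketch below shows only the basic idea (the paper's Improved PVT method is more elaborate); the tank numbers and the compressibility factor are illustrative assumptions, not values from the study.

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def pvt_mass(p_pa, v_m3, t_k, molar_mass_kg_mol, z=1.0):
    """Propellant mass (kg) from tank pressure, volume and temperature.

    z is the compressibility factor; for dense supercritical xenon z is well
    below 1, which is where real-gas look-up data (e.g. NIST tables) enters."""
    return p_pa * v_m3 * molar_mass_kg_mol / (z * R * t_k)

# Illustrative numbers only: a 100 L tank of xenon (M = 0.131293 kg/mol)
# at 100 bar and 313 K; z = 0.6 is an assumed order of magnitude, not NIST data.
m_ideal = pvt_mass(100e5, 0.1, 313.0, 0.131293)        # ideal-gas estimate
m_real = pvt_mass(100e5, 0.1, 313.0, 0.131293, z=0.6)  # real-gas estimate
```

The gap between the two estimates shows why an accurate compressibility (or a full real-gas equation of state) dominates the error budget of any PVT gauging scheme.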
Procedia PDF Downloads 346
1701 Training to Evaluate Creative Activity in a Training Context, Analysis of a Learner Evaluation Model
Authors: Massy Guillaume
Abstract:
Introduction: The implementation of creativity in educational policies and curricula raises several issues, including the evaluation of creativity and the means to do so. This doctoral research focuses on the appropriation and transposition of creativity assessment models by future teachers. Our objective is to identify the elements of the models that are most transferable to practice in order to improve their implementation in the students' curriculum, while seeking to create a new model for assessing creativity in the school environment. Methods: In order to meet our objective, this preliminary exploratory quantitative questionnaire study was conducted at two points in the participants' training: at the beginning of the training module and throughout the practical work. The population is composed of 40 people of diverse origins with an average age of 26 years (s = 8.623). In order to stay as close as possible to our research objective and to test our questionnaires, we set up a pre-test phase during the spring semester of 2022. Results: The results presented focus on aspects of the OECD Creative Competencies Assessment Model. Overall, 72% of participants supported the model's focus on skill levels as appropriate for the school context. More specifically, the data indicate that the separation of production and process in the rubric facilitates observation by the assessor. From the point of view of transposing the grid into teaching practice, the participants emphasised that production is easier to plan and observe in students than process. This difference is reinforced by a lack of knowledge about certain concepts, such as innovation or risk-taking, in schools. Finally, the qualitative results indicate that the addition of multiple levels of competencies to the OECD rubric would allow for better implementation in the classroom.
Conclusion: The identification by the students of the elements allowing the evaluation of creativity in the school environment generates an innovative approach to the training contents. These first data, from the test phase of our research, demonstrate the gap that exists between the implementation of an evaluation model in a training program and its potential transposition by future teachers.
Keywords: creativity, evaluation, schooling, training
Procedia PDF Downloads 95
1700 Using ICESat-2 Dynamic Ocean Topography to Estimate Western Arctic Freshwater Content
Authors: Joshua Adan Valdez, Shawn Gallaher
Abstract:
Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year across the Beaufort Gyre. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff, and is typically defined as water fresher than salinity 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and dynamic ocean topography (DOT). In situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time-consuming. Utilizing the DOT remote sensing capabilities of NASA's ICESat-2 and Air Expendable CTD (AXCTD) data from the Seasonal Ice Zone Reconnaissance Surveys (SIZRS), a linear regression model between DOT and freshwater content is determined along the 150° W meridian. Freshwater content is calculated by integrating the freshwater fraction between the surface and the depth at which salinity reaches the reference value of ~34.8. Using this model, we compare interannual variability in freshwater content within the gyre, which could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea using non-in situ methods.
Successful employment of the ICESat-2 DOT approximation of freshwater content could demonstrate the value of remote sensing tools in reducing reliance on field deployment platforms to characterize physical ocean properties.
Keywords: cryosphere, remote sensing, Arctic oceanography, climate modeling, Ekman transport
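The freshwater-content integral described above can be sketched directly; the salinity profile below is hypothetical, chosen only to mimic a fresh Beaufort Gyre surface lens, and is not SIZRS data.

```python
import numpy as np

S_REF = 34.8  # reference salinity used in the study

def freshwater_content(depth_m, salinity, s_ref=S_REF):
    """Freshwater content (metres of equivalent pure water):
    the integral over depth of (s_ref - S(z)) / s_ref wherever S < s_ref."""
    frac = np.clip((s_ref - np.asarray(salinity, float)) / s_ref, 0.0, None)
    z = np.asarray(depth_m, float)
    # trapezoidal integration over the profile
    return float(np.sum(0.5 * (frac[1:] + frac[:-1]) * np.diff(z)))

# Hypothetical profile: a fresh surface lens over saltier deep water.
z = [0.0, 50.0, 100.0, 200.0, 400.0]   # depth, m
s = [28.0, 30.0, 33.0, 34.8, 34.9]     # salinity
fwc = freshwater_content(z, s)
```

Values of this kind, computed from AXCTD profiles, would form the in situ side of the regression against ICESat-2 DOT.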
Procedia PDF Downloads 78
1699 Study and Simulation of a Severe Dust Storm over West and South West of Iran
Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi
Abstract:
In recent decades, the frequency of dust events has increased significantly in the west and south-west of Iran. First, a survey of dust events during the period 1990-2013 is carried out using historical dust data collected at 6 weather stations scattered over the west and south-west of Iran. After statistical analysis of the observational data, one of the most severe dust storm events, which occurred in the region from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the amount of PM10 and how it is transported to these areas. The initial and lateral boundary conditions for the model were obtained from GFS data with 0.5° × 0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt = 106, 300 and 303) were evaluated. The statistical analysis of the historical data showed that the south-west of Iran has a high frequency of dust events; Bushehr station has the highest frequency among the stations and Urmia station the lowest. Over the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 dust events respectively, had the highest and lowest counts; by monthly variation, June and July had the highest frequency of dust events and December the lowest. The model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed the better performance in comparison with the observations. Finally, the distribution of PM10 and the surface wind maps obtained from the numerical modeling showed the formation of dust plumes in Iraq and Syria and their transport to the west and south-west of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem in simulating the spatial distribution of dust.
Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem
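Scheme comparisons of the kind reported above are typically quantified with simple error statistics between simulated and observed PM10. A minimal sketch with invented numbers (not the study's data):

```python
import math

def bias_rmse(sim, obs):
    """Mean bias and root-mean-square error of simulated vs observed PM10
    (both in ug/m3); positive bias means the model over-predicts."""
    n = len(sim)
    bias = sum(s - o for s, o in zip(sim, obs)) / n
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / n)
    return bias, rmse

# Illustrative station values during a dust episode (hypothetical).
obs = [850.0, 1200.0, 2400.0, 900.0]
sim = [800.0, 1100.0, 2600.0, 950.0]
b, r = bias_rmse(sim, obs)
```

Computing such statistics per scheme (GOCART vs MADE/SORGAM) is one concrete way to back up a "better performance" claim.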
Procedia PDF Downloads 272
1698 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution
Authors: Yasmin Begum Mohammed
Abstract:
Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity owing to its prostaglandin-inhibitory effect; it is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and migraine. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to maintain its action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease, and drug targeting can be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited an increased encapsulation efficiency (94%) and retained 52% of the drug during release studies over 24 h, with good stability for a period of 1 month at -20 °C and 4 °C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences over non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to other conventional liposomes.
Thus, SLs help to increase the therapeutic efficacy of KTM by increasing the targeting potential at the inflammatory region.
Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique
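The encapsulation-efficiency and drug-retention percentages quoted above are simple ratios; the sketch below uses masses invented to reproduce the quoted figures (94% entrapment, 52% retained at 24 h), not measured data.

```python
def encapsulation_efficiency(entrapped_mg, total_mg):
    """Percent of the drug entrapped in the liposomes after preparation."""
    return 100.0 * entrapped_mg / total_mg

def percent_retained(remaining_mg, initial_entrapped_mg):
    """Percent of the entrapped drug still retained after a release study."""
    return 100.0 * remaining_mg / initial_entrapped_mg

# Hypothetical masses chosen to match the abstract's quoted figures.
ee = encapsulation_efficiency(9.4, 10.0)   # 94% entrapment
ret = percent_retained(4.888, 9.4)         # 52% retained at 24 h
```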
Procedia PDF Downloads 295
1697 Effect of Different Factors on Temperature Profile and Performance of an Air Bubbling Fluidized Bed Gasifier for Rice Husk Gasification
Authors: Dharminder Singh, Sanjeev Yadav, Pravakar Mohanty
Abstract:
In this work, a study of the temperature profile in a pilot-scale air bubbling fluidized bed (ABFB) gasifier for rice husk gasification was carried out. The effects of different factors, such as multiple cyclones, the gas cooling system, ventilate gas pipe length, and catalyst, on the temperature profile were examined. The ABFB gasifier used in this study had two sections: a bed section and a freeboard section. River sand was used as the bed material, air as the gasification agent, and conventional charcoal as the start-up heating medium. The temperature at different points in both sections of the ABFB gasifier was recorded at different ER values; the ER value was changed by changing the feed rate of biomass (rice husk) while keeping the air flow rate constant over long-duration gasifier operation. The ABFB with a double cyclone, gas coolant system, and short ventilate gas pipe was found to be the optimal gasifier design, giving the temperature profile required for high gasification performance in long-duration operation. This optimal design was tested at different ER values, and an ER of 0.33 was found to be the most favourable for long-duration operation (8 h continuous operation), giving the highest carbon conversion efficiency. At the optimal ER of 0.33, the bed temperature was stable at 700 °C, the above-bed temperature was 628.63 °C, the bottom of the freeboard was at 600 °C, the top of the freeboard at 517.5 °C, the gas temperature at 195 °C, and the flame temperature was 676 °C. Temperatures at all points showed fluctuations of 10-20 °C. The effect of a catalyst, i.e.
dolomite (20% mixed with the sand bed), was also examined on the temperature profile. It was found that at the optimal ER of 0.33, the bed temperature increased to 795 °C, the above-bed temperature decreased to 523 °C, the bottom of the freeboard decreased to 548 °C, the top of the freeboard to 475 °C, the gas temperature rose to 220 °C, and the flame temperature increased to 703 °C. The increase in bed temperature leads to a higher flame temperature due to the presence of more hydrocarbons generated from greater tar cracking at the higher temperature. It was also found that the use of dolomite with the sand bed eliminated agglomeration in the reactor at such a high bed temperature (795 °C).
Keywords: air bubbling fluidized bed gasifier, bed temperature, charcoal heating, dolomite, flame temperature, rice husk
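The equivalence ratio (ER) varied above is the ratio of air actually supplied to the stoichiometric air requirement of the fuel; holding the air flow constant and raising the biomass feed rate lowers the ER. A minimal sketch, assuming a typical literature figure for the stoichiometric air demand of rice husk (not a value from this study):

```python
def equivalence_ratio(air_flow_kg_h, biomass_feed_kg_h, stoich_air_per_kg=4.6):
    """ER = actual air supplied / stoichiometric air requirement.

    stoich_air_per_kg ~ 4.6 kg air per kg rice husk is an assumed typical
    literature figure, used here only for illustration."""
    return air_flow_kg_h / (biomass_feed_kg_h * stoich_air_per_kg)

# With air fixed, a higher feed rate gives a lower ER, as in the study.
er_low_feed = equivalence_ratio(air_flow_kg_h=30.0, biomass_feed_kg_h=15.0)
er_high_feed = equivalence_ratio(air_flow_kg_h=30.0, biomass_feed_kg_h=25.0)
```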
Procedia PDF Downloads 279
1696 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach: parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning over many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
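The parametric idea above, fitting a sigmoidal (logistic) development curve and reading off ultimate losses from partial development, can be sketched as follows; the curve parameters and the ever-to-date loss are invented for illustration, not fitted to real MI data.

```python
import math

def development_fraction(age_q, k, t_mid):
    """Logistic development curve: fraction of ultimate loss reported by
    age_q (in quarters). k is the steepness and t_mid the age at which the
    cohort is 50% developed; both would normally be fitted per cohort."""
    return 1.0 / (1.0 + math.exp(-k * (age_q - t_mid)))

def ultimate_from_etd(etd_loss, age_q, k, t_mid):
    """Ultimate loss = ever-to-date (ETD) loss / fitted developed fraction."""
    return etd_loss / development_fraction(age_q, k, t_mid)

# Hypothetical cohort: 50% developed at 12 quarters, 6.0M reported at 8
# quarters (illustrative parameters only).
ult = ultimate_from_etd(6.0e6, age_q=8, k=0.35, t_mid=12)
```

In the paper's approach, exogenous factors would enter through the fitted parameters of each cohort's curve rather than through a fixed development pattern.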
Procedia PDF Downloads 133
1695 Peptide-Gold Nanocluster as an Optical Biosensor for Glycoconjugate Secreted from Leishmania
Authors: Y. A. Prada, Fanny Guzman, Rafael Cabanzo, John J. Castillo, Enrique Mejia-Ospino
Abstract:
In this work, we report the synthesis of photoluminescent gold nanoclusters using a small peptide as a template for biosensing applications. We designed a peptide (NBC2854) homologous to the conserved domain (residues 215-250) of a galactolectin protein that can recognize the proteophosphoglycans (PPG) of Leishmania. The peptide was synthesized by multiple solid-phase synthesis using Fmoc methodology in acid medium, and purified by High-Performance Liquid Chromatography on a Vydac C-18 preparative column, with detection at 215 nm using a Photo Diode Array detector. The molecular mass of the peptide was confirmed by MALDI-TOF, and Circular Dichroism was used to verify its α-helical structure. With this methodology, we obtained novel fluorescent gold nanoclusters (AuNC) using NBC2854 as a template: an easy and fast microsonic method yielding AuNC with a hydrodynamic size of ≈ 3.0 nm and photoemission at 630 nm. The presence of a cysteine residue at the C-terminus of the peptide allows the formation of an Au-S bond, which confers stability on the peptide-based gold nanoclusters. Interactions between the peptide and the gold nanoclusters were confirmed by X-ray Photoemission and Raman Spectroscopy. Notably, the ultrafine spectra in the MALDI-TOF analysis, containing only 3-7 kDa species, were assigned to Au₈-₁₈[NBC2854]₂ clusters. Finally, we evaluated the peptide-gold nanocluster as an optical biosensor based on fluorescence spectroscopy: the fluorescence signal of PPG (0.1 µg mL⁻¹ to 1000 µg mL⁻¹) was amplified at the same emission wavelength (≈ 630 nm). This suggests a strong interaction between PPG and Pep@AuNC; the increase in fluorescence intensity can be related to the association mechanism that takes place when the target molecule is sensed by the Pep@AuNC conjugate.
Further spectroscopic studies are needed to evaluate the fluorescence mechanism involved in the sensing of PPG by the Pep@AuNC. To the best of our knowledge, this is the first fabrication of an optical biosensor based on Pep@AuNC for sensing biomolecules such as proteophosphoglycans, which are secreted in abundance by Leishmania parasites.
Keywords: biosensing, fluorescence, Leishmania, peptide-gold nanoclusters, proteophosphoglycans
Procedia PDF Downloads 169
1694 The Influence of Contextual Factors on Long-Term Contraceptive Use in East Java
Authors: Ni'mal Baroya, Andrei Ramani, Irma Prasetyowati
Abstract:
Access to reproductive health services, including safe and effective contraception, is a human right regardless of social stratum and residence. In addition to individual factors, family and contextual factors are also believed to influence the use of contraceptive methods. This study aimed to assess the determinants of long-term contraceptive methods (LTCM) by considering factors at both the individual and contextual levels, thereby providing basic information for programs to increase the prevalence of LTCM (MKJP) in East Java. The research, which used a cross-sectional design, analyzed Riskesdas 2013 data for East Java Province through multilevel modeling of MKJP use. The sample consisted of 20,601 married, non-pregnant women drawn by probability sampling following the Riskesdas 2013 sampling technique. The independent variables at the individual level were education, age, occupation, access to family planning (KB) services, economic status, and residence. The independent variables at the district level were the Human Development Index (HDI, locally IPM) of each district of East Java Province, the ratio of field officers, the ratio of midwives, the ratio of community health centers, and the ratio of doctors. The dependent variable was the use of a Long-Term Contraceptive Method (LTCM or MKJP). The data were analyzed using the chi-square test and the Pearson product-moment correlation. The multivariable analysis used multilevel logistic regression with a 95% Confidence Interval (CI) at a significance level of p < 0.05 and 80% test power. The results showed that a low contraceptive prevalence rate (CPR) for LTCM was concentrated in the districts of Madura Island and the north coast. 
Women who were 25 to 35 or more than 35 years old, had at least a high school education, were working, and were of middle-class social status were more likely to use LTCM (MKJP). A low HDI (IPM) and a low PLKB (family planning field officer) ratio were associated with a poor LTCM/MKJP CPR.
Keywords: multilevel, long-term contraceptive methods, East Java, contextual factor
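As an aside, the chi-square test used above for bivariate associations (e.g., education level vs. LTCM use in a two-by-two table) reduces to a short formula. The sketch below implements the uncorrected Pearson statistic; the counts are hypothetical, not the study's data.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = education (high/low), cols = LTCM use (yes/no)
stat = chi_square_2x2(30, 70, 10, 90)
print(stat)  # 12.5, well above the 3.84 critical value at p < 0.05 (1 df)
```

In practice a statistics package would also report the p-value; the point here is only the shape of the computation.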
Procedia PDF Downloads 245
1693 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals
Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor
Abstract:
This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. 
This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers
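As a minimal illustration of how an ensemble combines base classifiers, the sketch below implements simple majority voting over predicted labels; it is a generic example, not the meta-classifier designs evaluated in the study, and all names and values are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions (a list of equal-length
    lists) into one prediction per sample by majority vote."""
    combined = []
    for votes in zip(*predictions):  # one tuple of votes per sample
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical outputs of three base classifiers on four trials
clf_a = [1, 0, 1, 1]
clf_b = [1, 1, 0, 1]
clf_c = [0, 1, 1, 1]
print(majority_vote([clf_a, clf_b, clf_c]))  # [1, 1, 1, 1]
```

More elaborate schemes (weighted voting, stacking with a meta-classifier) replace the vote count with a learned combination, but follow the same structure.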
Procedia PDF Downloads 76
1692 The Effect of Implant Design on the Height of Inter-Implant Bone Crest: A 10-Year Retrospective Study of the Astra Tech Implant and Branemark Implant
Authors: Daeung Jung
Abstract:
Background: For patients with multiple missing teeth, multi-implant restoration is widely used and often unavoidable. To increase its survival rate, it is important to understand the influence of different implant designs on inter-implant crestal bone resorption. Several implant systems are designed to minimize loss of crestal bone; the Astra Tech and Brånemark implants are two of them. Aim/Hypothesis: The aim of this 10-year study was to compare the height of the inter-implant bone crest in two implant systems: the Astra Tech and the Brånemark implant system. Material and Methods: This retrospective study included 40 consecutively treated patients: 23 patients with 30 sites for the Astra Tech system and 17 patients with 20 sites for the Brånemark system. The implant restorations comprised splinted crowns in partially edentulous patients. Radiographs were taken immediately after the first-stage surgery, at impression making, at prosthesis setting, and annually after loading. The lateral distance from implant to bone crest and the inter-implant distance were measured, and crestal bone height was measured from the implant shoulder to the first bone contact. Calibrations were performed in ImageJ using the known thread pitch for vertical measurements and the known diameter of the abutment or fixture for horizontal measurements. Results: After 10 years, patients treated with the Astra Tech implant system demonstrated less inter-implant crestal bone resorption when the implants were 3 mm or less apart. For implants more than 3 mm apart, however, there was no statistically significant difference in crestal bone loss between the two systems. Conclusion and clinical implications: For partially edentulous patients planning to receive more than two implants, the inter-implant distance is one of the most important factors to be considered. 
If sufficient inter-implant distance cannot be ensured, implants with a smaller microgap at the fixture-abutment junction, a less traumatic second-stage surgical approach, and an adequate surface topography would be appropriate options to minimize inter-implant crestal bone resorption.
Keywords: implant design, crestal bone loss, inter-implant distance, 10-year retrospective study
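The radiographic calibration described above (converting a pixel distance to millimetres against a reference of known physical length, such as the thread pitch or abutment diameter) reduces to a simple proportion. The sketch below illustrates this; all numbers are hypothetical, not measurements from the study.

```python
def calibrate(pixels_measured, pixels_reference, reference_mm):
    """Convert a pixel distance on a radiograph to millimetres, given a
    reference feature of known length (e.g. thread pitch for vertical
    calibration, abutment diameter for horizontal calibration)."""
    return pixels_measured * reference_mm / pixels_reference

# Hypothetical example: a 0.6 mm thread pitch spans 20 px on the image,
# and the shoulder-to-first-bone-contact distance measures 50 px.
print(calibrate(50, 20, 0.6))  # 1.5 (mm of crestal bone loss)
```

This is the same per-image scale-setting that ImageJ performs with its "Set Scale" step; each radiograph needs its own reference because magnification varies between exposures.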
Procedia PDF Downloads 166