Search results for: non-inertial reference frames
232 Leuco Dye-Based Thermochromic Systems for Application in Temperature Sensing
Authors: Magdalena Wilk-Kozubek, Magdalena Rowińska, Krzysztof Rola, Joanna Cybińska
Abstract:
Leuco dye-based thermochromic systems are classified as intelligent materials because they exhibit thermally induced color changes. Thanks to this feature, they are mainly used as temperature sensors in many industrial sectors. For example, placing a thermochromic material on a chemical reactor may warn about exceeding the maximum permitted temperature for a chemical process. Usually, two components, a color former and a developer, are needed to produce a system with an irreversible color change. The color former is an electron-donating (proton-accepting) compound such as a fluoran leuco dye. The developer is an electron-accepting (proton-donating) compound such as an organic carboxylic acid. When the developer melts, the color former-developer complex is created and the thermochromic system becomes colored. Typically, the melting point of the applied developer determines the temperature at which the color change occurs. When the lactone ring of the color former is closed, the dye is in its colorless state. Ring opening, induced by the addition of a proton, causes the dye to turn into its colored state. Since the color former and the developer are often solids, they can be incorporated into polymer films to facilitate their practical use in industry. The objective of this research was to fabricate a leuco dye-based thermochromic system that irreversibly changes color after reaching a temperature of 100°C. For this purpose, benzofluoran leuco dye (as color former) and phenoxyacetic acid (as developer, with a melting point of 100°C) were introduced into polymer films during a drop-casting process. The film preparation process was optimized in order to obtain thin films with appropriate properties such as transparency, flexibility and homogeneity.
Among the optimized factors were the concentration of benzofluoran leuco dye and phenoxyacetic acid; the type, average molecular weight and concentration of the polymer; and the type and concentration of the surfactant. The selected films, containing benzofluoran leuco dye and phenoxyacetic acid, were combined by mild heat treatment. Structural characterization of single and combined films was carried out by FTIR spectroscopy, morphological analysis was performed by optical microscopy and SEM, phase transitions were examined by DSC, color changes were investigated by digital photography and UV-Vis spectroscopy, and emission changes were studied by photoluminescence spectroscopy. The resulting thermochromic system is colorless at room temperature, but after reaching 100°C the developer melts and the system turns irreversibly pink. Therefore, it could be used as an additional sensor to warn against the boiling of cooling water in water-cooled power plants. Currently used electronic temperature indicators are prone to faults and unwanted third-party interference. The sensor constructed in this work is transparent, so it can go unnoticed by an outsider and constitute a reliable reference for the person responsible for the apparatus.
Keywords: color developer, leuco dye, thin film, thermochromism
Procedia PDF Downloads 99
231 Comparative Study for Neonatal Outcome and Umbilical Cord Blood Gas Parameters in Balanced and Inhalant Anesthesia for Elective Cesarean Section in Dogs
Authors: Agnieszka Antończyk, Małgorzata Ochota, Wojciech Niżański, Zdzisław Kiełbowicz
Abstract:
The goal of the cesarean section (CS) is the delivery of healthy, vigorous pups with the provision of surgical-plane anesthesia, appropriate analgesia, and rapid recovery of the dam. In human medicine, spinal or epidural anesthesia is preferred for cesarean section, as it is associated with a lower risk of neonatal asphyxia and a reduced need for resuscitation. Nevertheless, the specificity of veterinary patients makes the application of regional anesthesia as a sole technique impractical; thus, to obtain patient compliance, general anesthesia is required. This study aimed to compare the influence of balanced (inhalant with epidural) and inhalant anesthesia on neonatal umbilical cord blood gas (UCBG) parameters and vitality (modified Apgar scoring). Thirty-one bitches undergoing elective CS were enrolled in this study. All females received a single dose of 0.2 mg/kg s.c. meloxicam and were randomly assigned into two groups: Gr I (isoflurane, n=16) and Gr IE (isoflurane plus epidural, n=15). Anesthesia was induced with propofol at 4-6 mg/kg to effect and maintained with isoflurane in oxygen; in the IE group, epidural anesthesia was additionally performed with lidocaine (3-4 mg/kg) injected into the lumbosacral space. CSs were performed using a standard mid-line approach. Directly after puppy extraction, the umbilical cord was double clamped before placenta detachment. The vessels were gently stretched between forceps to allow blood sampling, and at least 100 mcl of mixed umbilical cord blood was collected into a heparinized syringe for further analysis. The modified Apgar scoring system (AS) was used to objectively score neonatal health and vitality immediately after birth (before first aid or neonatal care was instituted), and at 5 and 20 min after birth. The neonates were scored as normal (AS 7-10), weak (AS 4-6), or critical (AS 0-3). During surgery, the IE group required a lower isoflurane concentration than group I (MAC 1.05±0.2 and 1.4±0.13, respectively, p<0.01).
All investigated UCBG parameters were not statistically different between groups. All pups had mild acidosis (pH 7.21±0.08 and 7.21±0.09 in Gr I and IE, respectively) with moderately elevated pCO2 (Gr I 57.18±11.48, Gr IE 58.74±15.07), HCO3- at the lower border (Gr I 22.58±3.24, Gr IE 22.83±3.6), lowered BE (Gr I -6.1±3.57, Gr IE -5.6±4.19) and mildly elevated lactate levels (Gr I 2.58±1.48, Gr IE 2.53±1.03). Glucose levels were above the reference limits in both groups of puppies (74.50±25.32 in Gr I, 79.50±29.73 in Gr IE). The initial Apgar score results were similar in the I and IE groups. However, the subsequent measurements of AS revealed significant differences between the groups: puppies from the IE group received better AS scores at 5 and 20 min than those from the I group (6.86±2.23 and 8.06±2.06 vs 5.11±2.40 and 7.83±2.05, respectively). The obtained results demonstrate that administration of epidural anesthesia reduced the isoflurane requirement in dams undergoing cesarean section and did not affect the neonatal umbilical blood gas results. Moreover, newborns from the epidural anesthesia group scored significantly higher in AS at 5 and 20 min, indicating better vitality and quicker improvement post-surgery.
Keywords: Apgar scoring, balanced anesthesia, cesarean section, umbilical blood gas
Procedia PDF Downloads 177
230 Distribution and Diversity of Pyrenocarpous Lichens in India with Special Reference to Forest Health
Authors: Gaurav Kumar Mishra, Sanjeeva Nayaka, Dalip Kumar Upreti
Abstract:
Nature exhibits a number of unique organisms that can be used as indicators of the environmental condition of a particular place. Lichens are unique in their ability to absorb not only organic and inorganic substances, including metals, but also radioactive nuclides present in the environment. In the present study, pyrenocarpous lichens were used as indicators of good forest health at a particular place. Pyrenocarpous lichens are simple, crust-forming lichens with black dot-like perithecia, and they offer few characters for taxonomic segregation compared with their foliose and fruticose brethren. The thallus colour and nature, and the presence or absence of a hypothallus, are the only thallus characters used to segregate pyrenocarpous taxa. The fruiting bodies of pyrenolichens, i.e. the ascocarps, are perithecia. The perithecia and the contents found within them possess many important criteria for the segregation of pyrenocarpous lichen taxa. Ascocarp morphology, ascocarp arrangement, the perithecial wall, ascocarp shape and colour, ostiole shape and position, ostiole colour, ascocarp anatomy including the type of paraphyses, asci shape and size, ascospore septation, ascospore wall and periphyses are the valuable characters used for segregation of different pyrenocarpous lichen taxa. India is represented by the occurrence of 350 species belonging to 44 genera and eleven families. Among the different genera, Pyrenula is dominant with 82 species, followed by Porina with 70 species. Recently, the systematics of the pyrenocarpous lichens has been revised by American and European lichenologists using phylogenetic methods. Still, the taxonomy of pyrenocarpous lichens is in flux, and the information generated by this study will play a vital role in settling the taxonomy of this peculiar group of lichens worldwide. The Indian Himalayan region exhibits a rich diversity of pyrenocarpous lichens.
The western Himalayan region shows a luxuriance of pyrenocarpous lichens due to its unique topography and climatic conditions, while the eastern Himalayan region has a rich diversity of pyrenocarpous lichens owing to its warmer and moister climate. The moist, warm climate of the eastern Himalayan region supports forests dominated by evergreen tree vegetation. Pyrenocarpous lichen communities are good indicators of young and regenerating forest types. The rich diversity of lichens clearly indicates that most of the forests within the eastern Himalayan region are in good health. The fast pace of urbanization and other developmental activities will definitely have adverse effects on the diversity and distribution of pyrenocarpous lichens in different forest types, and the present distribution pattern will act as baseline data for carrying out future biomonitoring studies in the area.
Keywords: lichen diversity, indicator species, environmental factors, pyrenocarpous
Procedia PDF Downloads 147
229 Antimicrobial Value of Olax subscorpioidea and Bridelia ferruginea on Micro-Organism Isolates of Dental Infection
Authors: I. C. Orabueze, A. A. Amudalat, S. A. Adesegun, A. A. Usman
Abstract:
Dental and associated oral diseases increasingly affect a considerable portion of the population and are considered some of the major causes of tooth loss, discomfort, mouth odor and loss of confidence. This study focused on an ethnobotanical survey of medicinal plants used in oral therapy and on evaluation of the antimicrobial activities of methanolic extracts of two plants selected from the survey for their efficacy against dental microorganisms. The ethnobotanical survey was carried out in six herbal markets in Lagos State, Nigeria, by oral interviewing and from information obtained from an old, manually compiled family herbal medication book. Methanolic extracts of Olax subscorpioidea (stem bark) and Bridelia ferruginea (stem bark) were assayed for antimicrobial activity against clinical oral isolates (Aspergillus fumigatus, Candida albicans, Streptococcus spp., Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa). In vitro microbial techniques (the agar well diffusion method and the minimum inhibitory concentration (MIC) assay) were employed. Chlorhexidine gluconate was used as the reference drug for comparison with the extract results, and preliminary phytochemical screening of the constituents of the plants was performed. The ethnobotanical survey identified 28 plants from diverse families. Different parts of plants (seed, fruit, leaf, root, bark) were mentioned, but 60% of mentions were of either the stem or the bark. O. subscorpioidea showed considerable antifungal activity, with zones of inhibition ranging from 2.650 to 2.000 cm against Aspergillus fumigatus, but no such encouraging inhibitory activity was observed against the other assayed organisms. B. ferruginea showed antibacterial sensitivity against Streptococcus spp., Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa, with zones of inhibition ranging from 3.400-2.500, 2.250-1.600, 2.700-1.950 and 2.225-1.525 cm, respectively.
The minimum inhibitory concentration of O. subscorpioidea against Aspergillus fumigatus was 51.2 mg ml-1, while that of B. ferruginea against Streptococcus spp. was 0.1 mg ml-1 and against Staphylococcus aureus, Lactobacillus acidophilus and Pseudomonas aeruginosa was 25.6 mg ml-1. Phytochemical analysis revealed the presence of alkaloids, saponins, cardiac glycosides, tannins, phenols and terpenoids in both plants, with steroids only in B. ferruginea. No toxicity was observed among mice given the two methanolic extracts (1000 mg kg-1) after 21 days. The barks of both plants exhibited antimicrobial properties against the assayed organisms that cause periodontal diseases, thus upholding their folkloric use in oral disorder management. Further research could view these extracts as combination therapy, checking for possible synergistic value in toothpaste and oral rinse formulations for reducing oral bacterial flora and fungal load.
Keywords: antimicrobial activities, Bridelia ferruginea, dental disinfection, methanolic extract, Olax subscorpioidea, ethnobotanical survey
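The MIC values reported here (51.2, 25.6 and 0.1 mg ml-1) all lie on one two-fold dilution series. The abstract does not state the dilution scheme, but MIC assays conventionally halve the stock concentration step by step; the sketch below assumes such a series starting at 51.2 mg ml-1, with the starting value and step count chosen only to reproduce the reported concentrations.

```python
# Hedged illustration: generate a two-fold dilution series as commonly used
# in MIC assays. The starting concentration and number of steps are assumptions
# chosen so the series reproduces the concentrations reported in the abstract.
def dilution_series(start_mg_per_ml: float, steps: int) -> list[float]:
    """Concentrations obtained by repeatedly halving the stock."""
    return [start_mg_per_ml / 2 ** i for i in range(steps)]

series = dilution_series(51.2, 10)
# The three reported MICs appear in the same series:
print([round(c, 1) for c in series[:2]], round(series[-1], 1))  # [51.2, 25.6] 0.1
```

The lowest reported MIC (0.1 mg ml-1, against Streptococcus spp.) is nine halvings below the strongest one, which is consistent with a single ten-well dilution row.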
Procedia PDF Downloads 244
228 A Correlational Study on Nursing Staff's Shift Systems, Workplace Fatigue, and Quality of Working Life
Authors: Jui Chen Wu, Ming Yi Hsu
Abstract:
Background and Purpose: Shift work by nursing staff is inevitable in hospitals in order to provide continuous medical care. However, shift work is considered a health hazard that may cause physical and psychological problems. Serious workplace fatigue from nursing shift work might impact family, social and work life and, moreover, cause a serious reduction in the quality of medical care, or even malpractice. This study aims to explore the relationships among nursing staff's shifts, workplace fatigue and quality of working life. Method: Structured questionnaires were used to explore the relationships among shift work, workplace fatigue and quality of working life in nursing staff. We recruited 590 nursing staff from different community teaching hospitals in Taiwan. Data were analysed by descriptive statistics, single-sample t-test, one-way analysis of variance, Pearson correlation coefficient and hierarchical regression, among other methods. Results: The overall workplace fatigue score was 50.59 points. In further analysis, the scores for personal burnout, work-related burnout, over-commitment and client-related burnout were 57.86, 53.83, 45.95 and 44.71, respectively. Workplace fatigue differed significantly with the nursing staff's basic attributes: age, licenses held, sleep quality, self-perceived health status, number of chronic-disease patients cared for, and number of patients cared for in the obstetric ward. The shift variables revealed no significant influence on workplace fatigue in the hierarchical regression analysis. Regarding the analysis of nursing staff's basic attributes and shifts in relation to the quality of working life, descriptive results show that the overall quality of working life of nursing staff was 3.23 points.
Comparing the average scores of the six aspects, the ranked average scores were 3.47 (SD=.43) for interrelationships, 3.40 (SD=.46) for self-actualisation, 3.30 (SD=.40) for self-efficacy, 3.15 (SD=.38) for vocational concept, 3.07 (SD=.37) for work aspects, and 3.02 (SD=.56) for organizational aspects. Quality of working life differed significantly with the nursing staff's basic attributes: marital status, education level, years of nursing work, occupation area, sleep quality, self-perceived health status and number of patients cared for in the medical ward. There were significant differences between shift mode and shift rate with respect to the quality of working life. The hierarchical regression analysis revealed that one of the shift variables, 'shift mode', does affect staff's quality of working life. Workplace fatigue was negatively correlated with the quality of working life, while over-commitment within workplace fatigue was positively related to the vocational-concept dimension of the quality of working life. According to the regression analysis of nursing staff's basic attributes, shift mode, workplace fatigue and quality of working life, workplace fatigue has a significant impact on nursing staff's quality of working life. Conclusion: According to our study, shift work is correlated with workplace fatigue in nursing staff. These results serve as an important reference for human resources management in hospitals in establishing more positive and healthy work arrangement policies.
Keywords: nursing staff, shift, workplace fatigue, quality of working life
Procedia PDF Downloads 272
227 Impact of Maternal Nationality on Caesarean Section Rate Variation in a High-Income Country
Authors: Saheed Shittu, Lolwa Alansari, Fahed Nattouf, Tawa Olukade, Naji Abdallah, Tamara Alshdafat, Sarra Amdouni
Abstract:
Cesarean sections (CS), a highly regarded surgical intervention for improving fetal-maternal outcomes and an integral part of emergency obstetric services, are not without complications. Although CS has many advantages, it poses significant risks to both mother and child and increases healthcare expenditures in the long run. The escalating global prevalence of CS, coupled with variations in rates among immigrant populations, prompted an inquiry into the correlation between CS rates and the nationalities of women undergoing deliveries at Al-Wakra Hospital (AWH), Qatar's second-largest public maternity hospital. This inquiry was motivated by the hospital's notable CS rate of 36%, which is high in comparison with the 34% recorded across the other Hamad Medical Corporation (HMC) maternity divisions. This is Qatar's first comprehensive investigation of cesarean section rates and nationalities. A retrospective cross-sectional study was conducted, and data for all births delivered in 2019 were retrieved from the hospital's electronic medical records. The CS rate and the crude and adjusted risks of cesarean delivery for mothers of each nationality were determined, and the common indications for CS were analysed by nationality. The association between nationality and cesarean rates was examined using binomial logistic regression analysis, with Qatari women as the standard reference group. The correlation between the CS rate in the country of nationality and the observed CS rate in Qatar was also examined using Pearson's correlation. This study included 4,816 births from 69 different nationalities. CS was performed in 1,767 women, equating to 36.5%. The nationalities with the highest CS rates were Egyptian (49.6%), Lebanese (45.5%), and Filipino and Indian (both 42.2%). Qatari women recorded a CS rate of 33.4%.
The major indications for elective CS were previous multiple CS (39.9%) and one prior CS where the patient declined the vaginal-birth-after-cesarean (VBAC) option (26.8%). A distinct pattern was noticed: elective CS was predominantly performed on Arab women, whereas emergency CS was common among women of Asian and Sub-Saharan African nationalities. Moreover, a significant correlation was found between the CS rates in Qatar and those in the women's countries of origin, and a high CS rate was linked to instances of previous CS. As a result of these insights, strategic interventions were implemented at the facility to mitigate unwarranted CS, producing a notable reduction in the CS rate from 36.5% in 2019 to 34% in 2022 and demonstrating the efficacy of the approach. The focus has now shifted to reducing primary CS rates and facilitating well-informed decisions regarding childbirth methods.
Keywords: maternal nationality, caesarean section rate variation, migrants, high-income country
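The study reports crude rates and adjusted risks of cesarean delivery with Qatari women as the reference group. As a minimal sketch of the crude step only (the adjusted risks would come from the binomial logistic regression on record-level data, which is not reproduced here), the counts below are hypothetical, back-calculated to match the reported percentages (33.4% for Qatari and 49.6% for Egyptian women):

```python
# Hypothetical illustration of crude CS rates and odds ratios relative to a
# reference group. The counts are invented to match the percentages reported
# in the abstract; the study itself used binomial logistic regression to adjust.
def odds_ratio(events: int, total: int, ref_events: int, ref_total: int) -> float:
    """Crude odds ratio of an event versus a reference group."""
    odds = events / (total - events)
    ref_odds = ref_events / (ref_total - ref_events)
    return odds / ref_odds

groups = {"Qatari": (167, 500), "Egyptian": (119, 240)}  # (CS births, all births)
ref_cs, ref_n = groups["Qatari"]
for nationality, (cs, n) in groups.items():
    print(nationality, f"rate={cs / n:.1%}", f"OR={odds_ratio(cs, n, ref_cs, ref_n):.2f}")
# prints: Qatari rate=33.4% OR=1.00
#         Egyptian rate=49.6% OR=1.96
```

With these invented counts, the Egyptian group's odds of CS are roughly twice the reference odds, which is the kind of contrast the regression then adjusts for maternal covariates.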
Procedia PDF Downloads 70
226 A Robust Stretchable Bio Micro-Electromechanical Systems Technology for High-Strain in vitro Cellular Studies
Authors: Tiffany Baetens, Sophie Halliez, Luc Buée, Emiliano Pallecchi, Vincent Thomy, Steve Arscott
Abstract:
We demonstrate here a viable stretchable bio-microelectromechanical systems (BioMEMS) technology for use in biological studies of the effect of high mechanical strains on living cells. An example of this is traumatic brain injury (TBI), where neurons are damaged by physical force to the brain during, e.g., accidents and sports. Robust, miniaturized integrated systems are needed by biologists to study the effect of TBI on neuron cells in vitro. The major challenges in this area are (i) to develop micro- and nanofabrication processes based on stretchable substrates and (ii) to create systems that are robust and performant at very high mechanical strain values, sometimes as high as 100%. At the time of writing, such processes and systems were a rapidly evolving subject of research and development. The BioMEMS we present here is composed of an elastomer substrate (low Young's modulus, ~1 MPa) onto which robust electrodes and insulators are patterned. The patterning of the thin films is achieved using standard photolithography techniques directly on the elastomer substrate, making the process generic and applicable to many material systems. The chosen elastomer is commercial 'Sylgard 184' polydimethylsiloxane (PDMS), which is spin-coated onto a silicon wafer. Multistep ultraviolet photolithography involving commercial photoresists is then used to pattern robust thin-film metallic electrodes (chromium/gold) and insulating layers (parylene) on top of the PDMS substrate. The thin-film metals are deposited using thermal evaporation and shaped using lift-off techniques. The BioMEMS has been characterized mechanically using an in-house strain-applicator tool. The system is composed of 12 electrodes, with one reference electrode orientated transversally to the uniaxial longitudinal straining of the system.
The electrical resistance of the electrodes is observed to remain very stable with applied strain, with a resistivity approaching that of evaporated gold, up to an interline strain of ~50%. The mechanical characterization revealed some interesting original properties of such stretchable BioMEMS; for example, a Poisson-effect-induced electrical 'self-healing' of cracking was identified. The biocompatibility of the commercial photoresist has been studied, and the results are conclusive. We will present the results of the BioMEMS, which has also been used to characterize living cells with a commercial multi-electrode array (MEA) characterization tool (Multi Channel Systems). The BioMEMS enables the cells to be strained up to 50% and then characterized electrically and optically.
Keywords: BioMEMS, elastomer, electrical impedance measurements of living cells, high mechanical strain, microfabrication, stretchable systems, thin films, traumatic brain injury
Procedia PDF Downloads 145
225 Female Mystics in Medieval Muslim Societies in the Period between the Ninth and Thirteenth Centuries
Authors: Arin Salamah Qudsi
Abstract:
Female piety and the roles that female mystics played in the Muslim landscapes of the period between the ninth and thirteenth centuries are topics that have attracted many scholarly endeavors. However, the personal aspects of both male and female Sufis have not been thoroughly investigated. It would be of great significance to examine the different roles of Sufi women as spouses, household supporters, and mothers on the basis of Sufi and non-Sufi sources. Sisters and mothers, rather than wives and daughters, are viewed in anthropological studies of different cultures as women who could enjoy a high social status and thus play influential roles. Sufi hagiographies, which are our main sources, have long been regarded in a negative light, and their value for our understanding of the early history of Sufism has been held in doubt. More recently, however, a new scholarly voice has begun to reclaim the historical value of hagiographies. We need to approach the narrative structures and styles of the anecdotal segments, which are the building blocks of the hagiographical body of writing. The image of a particular Sufi figure as portrayed by his near-contemporaries can provide a more useful means of sketching the components of his unique piety than his real life. However, in certain cases, whenever singular and unique appearances of particular stories occur, certain historical and individual conclusions can be sought. As for women in Sufi hagiographies, we know of sisters who acted as solid support for their renowned Sufi brothers. Some of those sisters preferred not to marry until a late age in order to 'serve' their brothers, while others supported their brothers while pursuing their own spiritual careers. Data of this type should be carefully considered, and its historical context should be thoroughly investigated.
The reference here is to women, mostly married women, who offered to maintain their brothers or male relatives despite social norms or generic prohibitions, which undoubtedly gave them strong authority over those men. As for mothers, we should differentiate between mothers who were Sufis themselves and those who were the mothers of Sufi figures. It seems most likely that in both cases, mothers were not always the effective trigger of spiritual awakening. Mothers of certain Sufi figures denied their sons free mobility, taking advantage of the highly esteemed principle of gratifying the wishes of one's mother and the seminal ideal of ḥaqq al-wālida (lit. the mother's right). Drawing on the anecdotes provided by a few sources leads to the suggestion that many Sufis actually strove to reduce their mothers' authority in order to establish independent careers. In light of women's authority over their brothers and sons in Sufi spheres, maternal uncles could enjoy a crucial position of influence over their nephews. The roles of Sufi mothers and of Sufi maternal uncles in the lives of early Sufi figures are topics that have not yet been dealt with in modern scholarship on classical Sufism.
Keywords: female Sufis, hagiographies, maternal uncles, mother's right
Procedia PDF Downloads 334
224 CRM Cloud Computing: An Efficient and Cost-Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support 'interactive user-facing applications' such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm; it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they don't have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle, Salesforce, etc. are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services could include email, office applications, finance, video, audio and data processing.
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track any data, such as orders, discounts, references, competitors and many more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, can gain competitive advantage.
Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
223 Gender Differences in Morbidly Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices
Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbidly obese (MO) (81 girls and 87 boys) and 71 normal-weight (NW) (40 girls and 31 boys). Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages (years)±SD for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys; the corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m2 and 27.2±3.9 kg/m2 in girls and boys, respectively; for the NW group, they were 15.5±1.0 kg/m2 in girls and 15.9±1.1 kg/m2 in boys. Groups were constituted based upon the BMI-for-age-and-sex percentile values recommended by WHO: children above the 99th percentile were grouped as MO, and children between the 15th and 85th percentiles were considered NW. The anthropometric measurements were recorded and evaluated along with new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. Body fat percent values were obtained by bio-electrical impedance analysis. Data were entered into a database for analysis using SPSS/PASW 18 Statistics for Windows. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, half-height-to-waist C and half-height-to-hip C ratios (where half-height denotes height divided by two) were observed in parallel with the development of obesity (p≤0.001).
The reference value for the half-height-to-hip ratio was detected as approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05); however, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared with their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
Keywords: anthropometry, childhood obesity, gender, morbid obesity
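As a minimal sketch of the circumference ratios discussed in this abstract: the formulas for Index-I and Index-II are not given in the text, so they are omitted; the "half-height" ratios reflect my reading of the abstract's height-divided-by-two circumference ratios; and all example measurements are hypothetical.

```python
# Hedged sketch of the anthropometric quantities named in the abstract.
# Index-I and Index-II formulas are not stated in the text, so only the named
# circumference ratios are computed. Example inputs are hypothetical.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2 (basis for the WHO percentile grouping)."""
    return weight_kg / height_m ** 2

def circumference_ratios(height_cm, waist_cm, hip_cm, head_cm, neck_cm):
    """Ratios reported to change in parallel with developing obesity."""
    return {
        "waist_to_hip": waist_cm / hip_cm,               # increases with obesity
        "head_to_neck": head_cm / neck_cm,               # decreases with obesity
        "half_height_to_waist": (height_cm / 2) / waist_cm,
        "half_height_to_hip": (height_cm / 2) / hip_cm,  # reference value ~1.0
    }

# A hypothetical child whose BMI matches the MO girls' group mean (29.1 kg/m2):
print(round(bmi(57.0, 1.40), 1))  # 29.1
r = circumference_ratios(height_cm=140, waist_cm=78, hip_cm=72, head_cm=53, neck_cm=30)
print(round(r["half_height_to_hip"], 2))
```

A half-height-to-hip value near 1.0 would sit at the reported reference point; values below it correspond to hip circumference exceeding half the standing height, as seen with developing obesity.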
Procedia PDF Downloads 325
222 Detection of Egg Proteins in Food Matrices (2011-2021)
Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli
Abstract:
Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of the allergic consumer. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels: among these, egg is included. Egg can be present as an ingredient or as contamination in raw and cooked products. The main allergenic egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d’Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning investigations. The laboratory is ISO 17025 accredited and, since 2019, has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg (R-Biopharm® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. 
Results: The results obtained through this study showed that egg proteins were found in 2% (n. 28) of food matrices, including meats and meat products (n. 16), fish and fish products (n. 4), bakery and pastry products (n. 4), pasta (n. 2), preparations for soups and broths (n. 1) and ready-to-eat (n. 1). In particular, egg proteins were detected in 5% of samples in 2011, in 4% in 2012, in 2% in 2013, 2016 and 2018, and in 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Egg allergy is one of the most common food allergies in the pediatric context. The percentage of positivity obtained from this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.
Keywords: allergens, food, egg proteins, immunoassay
Procedia PDF Downloads 136
221 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specification, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are separated according to the number of carbon atoms they contain. LSRN consists of five to six carbon-containing hydrocarbons, HSRN consists of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. 
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
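A pipeline of this kind can be illustrated with a short numpy-only sketch (not the authors' implementation: the window size, component count, and synthetic data below are assumptions) combining Savitzky-Golay smoothing with a NIPALS PLS1 calibration:

```python
import numpy as np

def savgol_smooth(y, window=7, poly=2):
    """Savitzky-Golay smoothing: fit a local polynomial in each window
    and evaluate it at the window centre (edge-padded)."""
    half = window // 2
    ypad = np.pad(y, half, mode="edge")
    x = np.arange(window) - half
    out = np.empty(len(y))
    for i in range(len(y)):
        coeffs = np.polyfit(x, ypad[i:i + window], poly)
        out[i] = np.polyval(coeffs, 0.0)
    return out

def pls1_fit(X, y, n_components):
    """NIPALS PLS1: returns regression coefficients and centring terms."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)
        t = Xk @ w
        p = Xk.T @ t / (t @ t)
        W.append(w); P.append(p); q.append(yk @ t / (t @ t))
        Xk = Xk - np.outer(t, p)   # deflate X and y before the next factor
        yk = yk - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.pinv(P.T @ W) @ q
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean
```

In practice a library implementation (e.g. scikit-learn's PLSRegression) would be preferred; the sketch only shows why latent-variable regression copes with the collinear wavenumber channels of FT-NIR spectra.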
Procedia PDF Downloads 129
220 The Environmental Concerns in Coal Mining, and Utilization in Pakistan
Authors: S. R. H. Baqri, T. Shahina, M. T. Hasan
Abstract:
Pakistan is facing an acute shortage of energy and is looking for indigenous resources in the energy mix to meet the shortfall. After the discovery of huge coal resources in the Thar Desert of Sindh province, focus has shifted to coal power generation. The government of Pakistan has planned 20,000 MW of coal-fired power generation by the year 2025. This target will be achieved by mining and power generation in the Thar coal field and by using imported coal in different parts of Pakistan. The total indigenous coal production of around 3.0 million tons is being utilized in brick kilns and the cement and sugar industries. Coal-based power generation is limited to three units of 50 MW near Hyderabad, supplied by the nearby Lakhra coal field. The purpose of this presentation is to identify and redress the issues of coal mining and utilization with reference to environmental hazards. The Thar coal resource is estimated at 175 billion tons out of a total resource estimate of 184 billion tons in Pakistan. The coal of Pakistan is of Tertiary age (Palaeocene/Eocene) and is classified as lignite to sub-bituminous. Coal characterization has established three main pollutants, sulphur, carbon dioxide and methane, besides some others associated with the coal and rock types. Sulphur occurs in organic as well as inorganic forms, associated with coals as free sulphur and as pyrite and gypsum, respectively. Carbon dioxide, methane and minerals are mostly associated with fractures, joints, local faults, seatearth and roof rocks. Abandoned and working coal mines give off a kerosene odour due to the escape of methane into the atmosphere. Frozen methane/methane ices in organic-matter-rich sediments have also been reported from the Makran coastal and offshore areas. Sulphur escapes into the atmosphere during the mining and utilization of coal in industry. Natural erosional processes due to rivers, streams, lakes and coastal waves erode overlying sediments, allowing pollutants to escape into air and water. 
Power plant emissions should be controlled through the application of appropriate clean coal technology and need to be regularly monitored. Systematic and scientific studies will therefore be required to estimate the quantity of methane, carbon dioxide and sulphur at various sites, such as abandoned and working coal mines and exploratory wells for coal, oil and gas. Pressure gauges on gas pipes connecting to the coal-bearing horizons will be installed at the surface to measure the quantity of gas. The quality and quantity of gases will be examined at defined time intervals. This will help to design and recommend methods and procedures to stop the escape of gases into the atmosphere. Sulphur can be partially removed by gravity and chemical methods after grinding and before industrial utilization of coal.
Keywords: atmosphere, coal production, energy, pollutants
Procedia PDF Downloads 435
219 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis
Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia
Abstract:
Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance strategy (MV strategy), the traditional (volatility-based) Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All the mentioned approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches, with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve, alternatively, a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists in extending the achievement of typical risk-based approaches’ goals to a combined risk measure. To explain the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as dispersion of returns around the mean, and that both preserve adherence to a symmetric framework and consideration of the entire returns distribution; they differ, however, in that the former captures the “normal” / “ordinary” dispersion of returns, while the latter is able to catch extreme dispersion. 
Therefore, the combined risk metric, which uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity, allows the asset manager to express, by varying the “relevance coefficient” associated with the individual metrics in the objective function, a wide set of plausible investment goals for the portfolio construction process, while serving investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the “equalization effect” concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions for the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation
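The combined objective can be made concrete with a small sketch (illustrative Python with simulated returns; the Dirichlet random search is a crude stand-in for a proper optimizer, not the authors' method):

```python
import numpy as np

def combined_risk(w, returns, lam):
    """lam * volatility + (1 - lam) * fourth root of the portfolio fourth
    moment, both computed from the portfolio return series so that the
    whole objective is homogeneous of degree one in the weights."""
    d = returns @ w - (returns @ w).mean()
    vol = np.sqrt(np.mean(d ** 2))
    kurt_proxy = np.mean(d ** 4) ** 0.25
    return lam * vol + (1 - lam) * kurt_proxy

def min_combined_risk(returns, lam, n_trials=5000, seed=1):
    """Crude long-only minimizer: random search over the simplex."""
    rng = np.random.default_rng(seed)
    n = returns.shape[1]
    best_w, best = None, np.inf
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(n))   # weights >= 0, summing to 1
        r = combined_risk(w, returns, lam)
        if r < best:
            best_w, best = w, r
    return best_w, best
```

With lam = 1 the objective reduces to (an approximation of) the MV strategy, and with lam = 0 to the MK strategy; intermediate values trace out the trade-off between ordinary and extreme dispersion described above.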
Procedia PDF Downloads 65
218 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and thus the occupant cannot interact with it. 
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different entries of consumption are analyzed, and for the more interesting cases the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution. The variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made during the description of thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
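The kind of schedule perturbation studied here can be sketched as follows (a hypothetical 24-hour profile, deviation probability, and load value, purely for illustration of how a stochastic variant departs from a deterministic schedule):

```python
import random

# deterministic reference profile: occupied 08:00-18:00 (24 hourly states)
BASE_OCCUPANCY = [0] * 8 + [1] * 10 + [0] * 6

def perturbed_schedule(base, p_deviation, seed=0):
    """Flip each hourly occupancy state with probability p_deviation,
    mimicking a stochastic alternative to a deterministic schedule."""
    rng = random.Random(seed)
    return [1 - h if rng.random() < p_deviation else h for h in base]

def internal_gain_kwh(schedule, occupied_load_kw=5.0):
    """Daily internal-gain proxy: occupied hours times a nominal load."""
    return sum(schedule) * occupied_load_kw
```

Running `internal_gain_kwh` over many perturbed schedules gives a distribution of daily internal gains, which is the simplest form of the sensitivity question the paper poses.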
Procedia PDF Downloads 141
217 Electroactive Fluorene-Based Polymer Films Obtained by Electropolymerization
Authors: Mariana-Dana Damaceanu
Abstract:
Electrochemical oxidation is one of the most convenient ways to obtain conjugated polymer films such as polypyrrole, polyaniline, polythiophene or polycarbazole. Research in the field has mainly been directed to the study of the electrical conduction properties of the materials obtained by electropolymerization, often with their use as electroconducting electrodes as the main motivation, and very little attention has been paid to the morphological and optical quality of the films electrodeposited on flat surfaces. Electropolymerization of a monomer solution has rarely been used in the past to manufacture polymer-based light-emitting diodes (PLED), most probably due to the difficulty of obtaining defect-free polymer films with good mechanical and optical properties, or conductive polymers with well-controlled molecular weights. Here we report our attempts at using electrochemical deposition as an appropriate method for preparing ultrathin films of fluorene-based polymers for PLED applications. The properties of these films were evaluated in terms of structural morphology, optical properties, and electrochemical conduction. Electropolymerization of 4,4'-(9-fluorenylidene)-dianiline was performed in dichloromethane solution, at a concentration of 10⁻² M, using 0.1 M tetrabutylammonium tetrafluoroborate as the electrolyte salt. The potential was scanned between 0 and 1.3 V on the one hand, and 0 and 2 V on the other, whereby polymer films with different structures and properties were obtained. Indium tin oxide-coated glass substrates of different sizes were used as the working electrode, a platinum wire as the counter electrode and a calomel electrode as the reference. For each potential range, 100 cycles were recorded at a scan rate of 100 mV/s. The film obtained in the potential range from 0 to 1.3 V, namely poly(FDA-NH), is visible to the naked eye, being light brown, transparent and fluorescent, and displays an amorphous morphology. 
In contrast, the poly(FDA) film electrogrown in the potential range of 0 - 2 V is yellowish-brown and opaque, presenting a self-assembled structure of aggregates of irregular shape and size. The polymer structure was identified by FTIR spectroscopy, which shows the presence of broad bands specific to a polymer, the band centered at approx. 3443 cm⁻¹ being ascribed to the secondary amine. The two polymer films display two absorption maxima: one at 434-436 nm, assigned to π-π* transitions of the polymers, and another at 832 and 880 nm, assigned to polaron transitions. The fluorescence spectra indicated the presence of emission bands in the blue domain, with two peaks at 422 and 488 nm for poly(FDA-NH), and four narrow peaks at 422, 447, 460 and 484 nm for poly(FDA), peaks originating from fluorene-containing segments of varying degrees of conjugation. Poly(FDA-NH) exhibited two oxidation peaks in the anodic region and a HOMO energy value of 5.41 eV, whereas poly(FDA) showed only one oxidation peak and a HOMO level localized at 5.29 eV. The electrochemical data are discussed in close correlation with the proposed chemical structure of the electrogrown films. Further research will be carried out to study their use and performance in light-emitting devices.
Keywords: electrogrown polymer films, fluorene, morphology, optical properties
Procedia PDF Downloads 345
216 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column
Authors: G. Rajapakse, S. Jayasinghe, A. Fleming
Abstract:
This paper aims to experimentally validate the control strategy used for the electrical power converters of a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC’s unidirectional air turbine-generator output results in large discrete power pulses. Therefore, the system requires power conditioning prior to grid integration. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using a finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the speed of the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The dc-dc bidirectional converter controller sets the dc-link voltage at its reference value. The software modeling of the OWC system and FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validation are being carried out at the AMC Electronics Laboratory. The designed FCS-MPC algorithms for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). 
The power module consists of a 3-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta Standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The validation of the FCS-MPC is done by comparing these experimental results to MATLAB/Simulink results in similar scenarios. The results show that under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter
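The essence of FCS-MPC, enumerating the finite set of admissible switch states, predicting one step ahead, and applying the state that minimizes a cost function, can be sketched as follows (a single-leg RL-load toy model with assumed parameters, not the implemented three-converter controller):

```python
V_DC = 400.0                  # dc-link voltage (V), assumed
R, L, TS = 1.0, 0.01, 1e-4    # load resistance (ohm), inductance (H), sample time (s)

def predict_current(i_now, v_applied):
    """Forward-Euler one-step prediction of the RL-load current:
    i[k+1] = i[k] + Ts/L * (v[k] - R * i[k])."""
    return i_now + (TS / L) * (v_applied - R * i_now)

def fcs_mpc_step(i_now, i_ref):
    """Evaluate every admissible switch state and return the one whose
    predicted current is closest to the reference (the FCS-MPC idea)."""
    best_state, best_cost = None, float("inf")
    for state in (0, 1):      # half-bridge leg: 0 -> 0 V, 1 -> V_DC
        cost = (i_ref - predict_current(i_now, state * V_DC)) ** 2
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state
```

In the real system the same enumerate-predict-minimize loop runs over the switch states of each converter (eight states for a three-phase bridge), with cost terms for current tracking, dc-link voltage, and turbine speed as appropriate.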
Procedia PDF Downloads 113
215 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching for relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking step using a model trained on the 500K queries of the MS MARCO dataset to extract the most relevant text passages and thus shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. In the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query “Where will the next Olympics be?” The gold answer for this query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 (held in 2020) were in Tokyo, this was absolutely correct at the time. But if the same question is asked in 2022, then the answer is “Paris, 2024”. 
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
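One way such a timestamp-aware metric could work is sketched below (the gold table with validity windows, the dates, and the substring-matching rule are illustrative assumptions, not the paper's exact metric):

```python
from datetime import date

# hypothetical gold answers, each valid within the window [start, end)
GOLD = {
    "where will the next olympics be": [
        (date(2013, 9, 7), date(2021, 8, 8), "tokyo"),
        (date(2021, 8, 8), date(2024, 8, 11), "paris"),
    ],
}

def time_aware_match(question, top_n_answers, asked_on):
    """Count a prediction correct if any of the top-n answers contains the
    gold answer that is valid on the date the question is asked."""
    for start, end, gold in GOLD.get(question.lower(), []):
        if start <= asked_on < end:
            return any(gold in answer.lower() for answer in top_n_answers)
    return False
```

Under this scheme, "Tokyo" counts as correct for a 2016 query and incorrect for a 2022 query, which is exactly the behavior static exact-match evaluation cannot express.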
Procedia PDF Downloads 101
214 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on the reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and by identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in the individual phases with Weibull distribution curve-fitting analysis. 
Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
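The two statistical building blocks named above, the Laplace trend test and Weibull parameter fitting, can be sketched as follows (a textbook median-rank regression is used here as an illustration; the study's exact estimation procedure is not specified):

```python
import math

def laplace_trend_statistic(failure_times, observation_end):
    """Laplace test statistic: U > 0 suggests a deteriorating (wear-out)
    trend, U < 0 an improving (infant-mortality) trend; |U| < 1.96 is
    consistent with a constant failure rate at the 5% level."""
    n = len(failure_times)
    return (sum(failure_times) / n - observation_end / 2) / (
        observation_end * math.sqrt(1.0 / (12 * n)))

def weibull_fit_median_ranks(failure_times):
    """Least-squares estimate of beta (shape) and eta (scale) from the
    linearized Weibull CDF, ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta),
    using Bernard's median-rank approximation for F."""
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)
    return beta, eta
```

A fitted beta below 1 indicates the infant-mortality phase, beta near 1 the constant-rate operational phase, and beta above 1 wear-out, which is how the shape parameter locates the sensor on the bathtub curve.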
Procedia PDF Downloads 65
213 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer, continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning-based automatic differentiation grading algorithm was developed and evaluated by survival analysis. First, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation and normal tissue, corresponding to tumor with clear gland structure, unclear gland structure, no gland structure and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ prediction of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project cohort was enrolled into this study. 
For the gland segmentation model, the area under the receiver operating characteristic curve (ROC) reached 0.981 and accuracy reached 0.932 on the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963 and 0.981, and accuracy reached 0.880, 0.923, 0.668 and 0.881, for the groups of low-, medium-, high-differentiation and normal tissue, respectively, on the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, which was almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer, continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the fields of therapy decision making and personalized treatment.
Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
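The WHO thresholds quoted above map directly to a grading function, and a continuous grade can be approximated as an area fraction (the paper does not specify its exact normalization, so the second function below is an assumption offered purely for illustration):

```python
def who_grade(gland_forming_pct):
    """WHO histological grade from the percentage of gland-forming tumor:
    >95% well, 50-95% moderately, 5-50% poorly, <5% undifferentiated."""
    if gland_forming_pct > 95:
        return "well-differentiated"
    if gland_forming_pct >= 50:
        return "moderately-differentiated"
    if gland_forming_pct >= 5:
        return "poorly-differentiated"
    return "undifferentiated"

def continuous_grade(gland_area, nongland_tumor_area):
    """One plausible continuous grade in [0, 1]: the fraction of predicted
    tumor area that is not gland-forming (higher = less differentiated).
    Hypothetical normalization, not the authors' published formula."""
    total = gland_area + nongland_tumor_area
    return nongland_tumor_area / total if total else 0.0
```

Under this reading, a cut-off of 0.51 on the continuous grade corresponds to roughly 49% gland-forming tumor, which is consistent with the reported agreement between the maxstat cut-off (51%) and the WHO boundary (50%).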
Procedia PDF Downloads 134
212 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies
Authors: Manel Hammami, Gabriele Grandi
Abstract:
In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems, due to their known advantages over conventional H-bridge pulse-width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), higher output voltages, and more. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC) and cascaded H-bridge (CHB) converters. In both the NPC and FC configurations, the number of components drastically increases with the number of levels, which leads to a complex control strategy, high volume, and cost. Increasing the number of levels in the cascaded H-bridge configuration, in contrast, is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric multilevel inverter topologies have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded with a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six), and just one capacitor (as in the FCAH). CAH is becoming popular due to its simple, modular and reliable structure, and it can be considered a retrofit which can be added in series to an existing H-bridge configuration in order to double the output voltage levels. 
In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverter have been considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the DC current and voltage ripple seen by the DC source (i.e., the PV system). The peak-to-peak current and voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions. Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter
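The kind of low-frequency DC-link ripple analysis described above can be illustrated with the classical single-phase result: at unity power factor the AC-side power pulsates at twice the fundamental frequency, the DC capacitor absorbs the mismatch, and the peak-to-peak voltage ripple is approximately P/(ω·C·Vdc). The sketch below checks that textbook expression numerically; the ratings are illustrative, not the paper's, and this is the generic H-bridge result rather than the paper's LDN-specific derivation.

```python
import math

# Numeric check of the classical single-phase DC-link ripple estimate
#   dV_pp ≈ P / (omega * C * Vdc)
# against direct integration of the capacitor current over one period.

P, Vdc, C, f = 2000.0, 400.0, 2e-3, 50.0   # W, V, F, Hz (illustrative)
omega = 2 * math.pi * f

analytic_ripple = P / (omega * C * Vdc)

# Capacitor absorbs the power mismatch: i_c(t) = P*cos(2*w*t)/Vdc.
# Integrate the charge and take the peak-to-peak swing.
N = 100_000
dt = (1.0 / f) / N
q, qmin, qmax = 0.0, 0.0, 0.0
for k in range(N):
    t = k * dt
    q += (P * math.cos(2 * omega * t) / Vdc) * dt
    qmin, qmax = min(qmin, q), max(qmax, q)
numeric_ripple = (qmax - qmin) / C

print(analytic_ripple, numeric_ripple)  # the two agree closely
```

The same charge-integration approach extends to the multilevel case once the inverter's switching function replaces the sinusoidal power assumption.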
Procedia PDF Downloads 207
211 Influence of Disintegration of Sida hermaphrodita Silage on Methane Fermentation Efficiency
Authors: Marcin Zielinski, Marcin Debowski, Paulina Rusanowska, Magda Dudek
Abstract:
As a result of sonication, the destruction of complex biomass structures leads to an increase in the biogas yield from the conditioned material. First, the amount of organic matter released into the solution due to disintegration was determined. This parameter was determined from changes in the carbon content in the liquid phase of the conditioned substrate. The amount of carbon in the liquid phase increased as the sonication time was prolonged to 16 min. A further increase in the duration of sonication did not cause a statistically significant increase in the amount of organic carbon in the liquid phase. The disintegrated material was then used in respirometric measurements to determine the impact of the conditioning process on methane fermentation effectiveness. A relationship between the amount of energy introduced into the lignocellulosic substrate and the amount of biogas produced was demonstrated. A statistically significant increase in the amount of biogas was observed up to a sonication time of 16 min. A further increase in the energy applied in the conditioning process did not significantly increase the production of biogas from the treated substrate. At that point, the biogas production from the conditioned substrate was 17% higher than from the reference biomass. The ultrasonic disintegration method did not significantly affect the observed biogas composition. In all series, the methane content in the biogas produced from the conditioned substrate was similar to that obtained with the raw substrate sample (51.1%). Another method of substrate conditioning was hydrothermal depolymerization. This method consists in applying increased temperature and pressure to the substrate. These phenomena destroy the structure of the processed material and release organic compounds into the solution, which should increase the amount of biogas produced from the treated biomass. 
The hydrothermal depolymerization was conducted using an innovative microwave heating method. Control measurements were performed using conventional heating. The obtained results indicate a relationship between depolymerization temperature and the amount of biogas. The biogas production coefficients increased significantly as the depolymerization temperature increased to 150°C. Raising the depolymerization temperature further to 180°C did not significantly increase the amount of biogas produced in the respirometric tests. As a result of hydrothermal depolymerization by microwave heating at 150°C for 20 min, the biogas production from the Sida silage was 780 L/kg VS, accounting for nearly a 50% increase compared to the 370 L/kg VS obtained from the same silage without depolymerization. The study showed that microwave heating can effectively depolymerize the substrate. Significant differences occurred especially in the temperature range of 130-150°C. The pre-treatment of Sida hermaphrodita silage (the biogas substrate) did not significantly affect the quality of the biogas produced; the methane concentration was about 51.5% on average. The study was carried out in the framework of a project under the BIOSTRATEG program funded by the National Centre for Research and Development, No. 1/270745/2/NCBR/2015, 'Dietary, power, and economic potential of Sida hermaphrodita cultivation on fallow land'. Keywords: disintegration, biogas, methane fermentation, Virginia fanpetals, biomass
Procedia PDF Downloads 309
210 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures
Authors: Haytam Kasem
Abstract:
The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, a new field of adhesion science has emerged during the last decade, essentially inspired by animals and insects which, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed before bio-inspired patterned surfaces see wide usage as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against the wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. 
The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the deviation of asperity heights (σ) and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurements. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load. Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model
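The workflow above, combining several roughness descriptors into one parameter and regressing friction against it, can be sketched as follows. The combination formula, the surface data, and the linear model are all assumptions for illustration only; the abstract does not disclose the actual integrated parameter or measured values.

```python
# Hypothetical sketch: fold R, eta, sigma and SDQ into one integrated roughness
# parameter and fit friction force against it by least squares. The product
# form below and all numbers are invented for illustration.

surfaces = [
    # (R: mean asperity radius [um], eta: asperity density [1/mm^2],
    #  sigma: deviation of asperity heights [um], sdq: mean asperity angle [deg],
    #  measured friction force [N])
    (12.0, 900.0, 0.4, 3.0, 1.80),
    (10.0, 700.0, 0.9, 6.0, 1.35),
    (8.0,  500.0, 1.6, 9.0, 0.95),
    (6.0,  300.0, 2.5, 12.0, 0.55),
]

def integrated_parameter(R, eta, sigma, sdq):
    # assumed form: rough surfaces (large sigma, steep slopes) score higher
    return sigma * sdq / (R * eta) * 1e4

xs = [integrated_parameter(R, eta, sigma, sdq) for R, eta, sigma, sdq, _ in surfaces]
ys = [f for *_, f in surfaces]

# least-squares line F = a + b * psi (pure-Python fit)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
print(a, b)  # here b < 0: friction drops as the integrated roughness grows
```

A single fitted slope is of course only a stand-in for the validated analytical model the abstract describes; the point is the structure of the analysis, not the formula.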
Procedia PDF Downloads 239
209 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency
Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader
Abstract:
Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked together through β-1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolysis products (xylose, xylobiose and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of the microbial pool of extremely diversified ecosystems is considerably vital. Microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate a fast hydrolytic rate, can produce high yields of product, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge the positive aspects of both thermozymes and immobilization. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L-1) and calcium chloride (0.4 M) were amalgamated. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme for xylan hydrolysis remained the same; however, the enzyme-substrate catalytic reaction time rose from 5.0 to 30.0 minutes compared to the free counterpart. 
The diffusion limit of high-molecular-weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min-1, whereas the Km value increased from 0.5074 to 0.5722 mg ml-1 with reference to the free enzyme. The immobilized endo-β-1,4-xylanase was more stable at high temperatures than the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and can be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high-molecular-weight xylan. However, its immobilization properties could be improved further by immobilizing it on different supports for industrial purposes. Keywords: immobilization, reusability, thermozymes, xylanase
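The Vmax and Km values quoted above translate directly into Michaelis-Menten rates, v = Vmax·S/(Km + S). A minimal sketch comparing the free and immobilized enzyme at one substrate concentration (the concentration itself is an assumed, illustrative value):

```python
# Michaelis-Menten comparison of free vs immobilized endo-beta-1,4-xylanase,
# using the Vmax (U/min) and Km (mg/ml) values reported in the abstract.

def rate(vmax, km, s):
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

free = {"vmax": 4773.0, "km": 0.5074}        # reported for the free enzyme
immobilized = {"vmax": 203.7, "km": 0.5722}  # reported after entrapment

s = 1.0  # illustrative xylan concentration, mg/ml
v_free = rate(free["vmax"], free["km"], s)
v_imm = rate(immobilized["vmax"], immobilized["km"], s)
print(v_free, v_imm)  # diffusion limitation makes the immobilized rate far lower
```

The larger Km of the immobilized form also means it needs more substrate to reach half its (already reduced) maximal rate, consistent with the diffusion limitation the abstract invokes.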
Procedia PDF Downloads 374
208 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures the traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. 
Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analysed. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the ‘big data’. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it. Keywords: automotive industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
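At the heart of any MSA study are the repeatability (within-operator, equipment variation) and reproducibility (between-operator, appraiser variation) components. A simplified sketch of that decomposition follows; it is not the full AIAG MSA-4 gauge R&R procedure, and the measurement grid is invented for illustration.

```python
# Simplified repeatability/reproducibility sketch (not the full MSA-4 method):
# measurements[operator][part] holds repeated trials of the same part.

measurements = {
    "op1": {"partA": [10.1, 10.2, 10.1], "partB": [12.0, 12.1, 12.0]},
    "op2": {"partA": [10.4, 10.5, 10.4], "partB": [12.3, 12.4, 12.3]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Repeatability: pooled within-cell variance (same operator, same part)
cells = [trials for parts in measurements.values() for trials in parts.values()]
within = mean([mean([(x - mean(c)) ** 2 for x in c]) for c in cells])

# Reproducibility: variance of the operator means (between-operator shift)
op_means = [mean([x for trials in parts.values() for x in trials])
            for parts in measurements.values()]
between = mean([(m - mean(op_means)) ** 2 for m in op_means])

print(within ** 0.5, between ** 0.5)  # std-dev view of the two components
```

A production tool like the one described would compute these components with the MSA-4 ANOVA method and report %GRR against the process tolerance; the sketch only shows where the two variance sources come from.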
Procedia PDF Downloads 214
207 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market
Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye
Abstract:
The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to that process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, whereas internal factors, acting as enablers, have been studied more extensively. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it can be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are impacted by the organizational processes, systems and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking and its profile building increase the supportive attention of the headquarters and are relevant to the success of the capability-building process. Therefore, our second proposition is that: P2: Subsidiary role and headquarters support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: The present study adopts the multiple case study approach, because case study research is appropriate when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established. 
A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could induce headquarters to consider possible ways of encouraging such capability development and of leveraging these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries. Keywords: case studies, dynamic capability, emerging market, subsidiary
Procedia PDF Downloads 122
206 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers
Authors: Navah Z. Ratzon, Rachel Shichrur
Abstract:
Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and on the occupational therapy practice framework model guiding practice in Israel, adjusted to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive and on-road driving data), with reference to the difficulties that the driver raised, and on providing coping strategies. The intervention was customized for each driver and included three sessions of two hours each. The effectiveness of the intervention was tested using objective measures, In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and a subjective measure (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant change in the rate of IVDR perilous events (t(17)=2.14, p=0.046) before and after the intervention. 
There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17)=2.11, p=0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the ‘human factors/person’ field was significant (performance: t=-2.30, p=0.04; satisfaction with performance: t=-3.18, p=0.009). The change in the ‘driving occupation/tasks’ field was not significant but showed a tendency toward significance (t=-1.94, p=0.07). No significant differences were found in driving-environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers’ driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and to improving the well-being of these drivers. This study also provides familiarity with advanced IVDR technologies and enriches the knowledge of occupational therapists regarding the use of a wide variety of driving assessment tools and best-practice decision making. Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention
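The before/after comparisons reported above are paired t-tests: each driver serves as his or her own control, so the statistic is computed on the per-driver differences. A minimal sketch of that computation, with invented event rates rather than the study's IVDR data:

```python
import math

# Paired (before/after) t-test computed directly on the differences.
# The perilous-event rates below are illustrative, not the study's data.

before = [5.2, 4.8, 6.1, 5.5, 4.9, 5.8]  # events per 100 km, pre-intervention
after  = [4.1, 4.5, 5.0, 4.6, 4.4, 4.9]  # post-intervention

d = [b - a for b, a in zip(before, after)]      # per-driver change
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))  # sample sd
t_stat = mean_d / (sd_d / math.sqrt(n))
print(t_stat)  # compared against the t distribution with n-1 = 5 df
```

With 18 drivers, as in the study, the degrees of freedom become 17, which is why the reported statistics appear as t(17).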
Procedia PDF Downloads 346
205 A Literature Review Evaluating the Use of Online Problem-Based Learning and Case-Based Learning Within Dental Education
Authors: Thomas Turner
Abstract:
Due to the Covid-19 pandemic, alternative ways of delivering dental education were required. As a result, many institutions moved teaching online. The impact of this is poorly understood: are online problem-based learning (PBL) and case-based learning (CBL) effective, and are they suitable in the post-pandemic era? PBL and CBL are both types of interactive, group-based learning which are growing in popularity within many dental schools. PBL was first introduced in the 1960s and can be defined as learning which occurs through collaborative work to resolve a problem, whereas CBL encourages learning from clinical cases, encourages the application of knowledge and helps prepare learners for clinical practice. The aim of this review was to evaluate the use of online PBL and CBL. A literature search was conducted using the CINAHL, Embase, PubMed and Web of Science databases; literature was also identified from reference lists. Only studies from dental education were included. Seven suitable studies were identified. One of the studies found a high learner and facilitator satisfaction rate with online CBL. Interestingly, one study found that learners preferred CBL over PBL in an online format. A study also found that, within the context of distance learning, learners preferred a hybrid curriculum including PBL over a traditional approach. A further study pointed to the limitations of PBL in an online format, such as reduced interaction, potentially hindering the development of communication skills, and the increased time and technology support required. An audience response system was also developed for use within CBL and had a high satisfaction rate. Interestingly, one study found that the achievement of learning outcomes was correlated with the number of student and staff inputs in an online format, whereas another study found that the quantity of learner interactions was important to group performance while the quantity of facilitator interactions was not. 
This review identified generally favourable evidence for the benefits of online PBL and CBL. However, there is limited high-quality evidence evaluating these teaching methods within dental education, and there appears to be limited evidence comparing online and face-to-face versions of these sessions. The importance of the quantity of learner interactions is evident; however, the importance of the quantity of facilitator interactions appears questionable. An element of this may be down to the quality of interactions, rather than just quantity. Limitations of online learning regarding technological issues and the time required for a session are also highlighted; however, as learners and facilitators become familiar with online formats, these may become less of an issue. It is also important that learners are encouraged to interact and communicate during these sessions, to allow for the development of communication skills. Interestingly, CBL appeared to be preferred to PBL in an online format. This may reflect the simpler nature of CBL; however, further research is required to explore this finding. Online CBL and PBL appear promising, but further research is required before online formats of these sessions are widely adopted in the post-pandemic era. Keywords: case-based learning, online, problem-based learning, remote, virtual
Procedia PDF Downloads 77
204 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by a discount function that decreases the utility value according to how far the actual reception of the outcome is from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Empirical evidence, however, called for the formulation of alternative, hyperbolic models that better represent the actual actions of the investor. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive system. In this paper we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. 
The curve can be decomposed into two components: the first component is responsible for the decrease in the discounted value of the outcome as time increases and is related to the individual's impatience; the second component relates to the change in direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, expressing his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it allows the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality. Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
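The contrast between the exponential and hyperbolic discount functions discussed above can be made concrete by comparing their implied per-period discount rates: exponential discounting keeps this rate constant (time-consistent), while hyperbolic discounting makes it fall with delay (decreasing impatience). A minimal sketch with illustrative parameter values:

```python
# Exponential vs hyperbolic discounting: the per-period discount rate is
# constant for the exponential form and decreasing for the hyperbolic one.
# Parameter values r and k are illustrative.

def exponential(t, r=0.10):
    return (1 + r) ** (-t)

def hyperbolic(t, k=0.10):
    return 1.0 / (1 + k * t)

def implied_rate(f, t):
    """Fraction of value lost between period t and t+1."""
    return f(t) / f(t + 1) - 1

exp_rates = [implied_rate(exponential, t) for t in range(5)]
hyp_rates = [implied_rate(hyperbolic, t) for t in range(5)]
print(exp_rates)  # constant: the exponential discounter is time-consistent
print(hyp_rates)  # decreasing: impatience is strongest for near-term delays
```

The decreasing sequence of hyperbolic rates is exactly the "anomaly" that the decomposition into an impatience component and an uncertainty-aversion component is meant to explain.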
Procedia PDF Downloads 129
203 Role of ASHA in Utilizing Maternal Health Care Services India, Evidences from National Rural Health Mission (NRHM)
Authors: Dolly Kumari, H. Lhungdim
Abstract:
Maternal health is one of the crucial health indicators for any country; the fifth Millennium Development Goal also emphasises the improvement of maternal health. Soon after Independence, the government of India recognized the importance of maternal and child health care services and took steps to strengthen them in the first and second five-year plans. In the past decade, another health indicator, life expectancy at birth, has shown remarkable improvement. Maternal mortality, however, is still high in India, and in some states it is much higher than the national average. The government of India invested substantial funds and initiated the National Rural Health Mission (NRHM) in 2005 to improve maternal health in the country by providing affordable and accessible health care services. The Accredited Social Health Activist (ASHA) is one of the key components of the NRHM. ASHAs are women aged 25-45 years, generally selected from the village itself and accountable for monitoring maternal health care in that village. ASHAs are trained to work as an interface between the community and the public health system. This study tries to assess the role of ASHAs in the utilization of maternal health care services and to examine the level of awareness about the benefits given under the Janani Suraksha Yojana (JSY) scheme and the utilization of those benefits by eligible women. The study uses concurrent evaluation data from the NRHM, initiated by the government of India in 2005, and is based on 78,205 currently married women from 70 districts of India. Descriptive statistics, the chi-square test and binary logistic regression have been used for the analysis. The odds of institutional delivery increase by 2.03 times (p<0.001), and if the ASHA arranged or helped to arrange a transport facility, the odds of institutional delivery are 1.67 times higher (p<0.01) than if she did not. 
Further, if the ASHA facilitated the pregnant woman in getting a JSY card, the odds of receiving full ANC increase by 1.36 times (p<0.05) relative to the reference. If the ASHA discussed institutional delivery and approached the woman to get registered, the odds of receiving a TT injection are 1.88 and 1.64 times higher (p<0.01), respectively, than if she did not. The odds of benefiting from the JSY scheme are 1.25 times higher (p<0.001) among women who married after 18 years of age than among those who married before 18; they are also 1.28 times (p<0.001) and 1.32 times (p<0.001) higher among women with 1-8 years of schooling and with 9 or more years of schooling, respectively, than among women who never attended school. Working women have 1.13 times higher odds (p<0.001) of benefiting from the JSY scheme than non-working women. Surprisingly, women belonging to the wealthiest quintile are 0.53 times as likely (p<0.001) to be aware of the JSY scheme. The results show that the work done by ASHAs has a great influence on maternal health care utilization in India, but they also show that a substantial number of women in need are still far from utilizing these services. Place of delivery is significantly influenced by the referral and transport facilities arranged by the ASHA. Keywords: institutional delivery, JSY beneficiaries, referral facility, public health
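The "odds increase by X times" figures above come from binary logistic regression: the exponentiated coefficient of a binary predictor equals the odds ratio one would compute from the corresponding 2x2 table. A minimal sketch of that equivalence, with invented counts chosen only to reproduce an odds ratio of 1.67:

```python
import math

# Odds ratio from a 2x2 table and its logit-coefficient interpretation.
# Counts are illustrative, not the NRHM evaluation data.

# rows: ASHA arranged transport (yes/no); columns: institutional delivery (yes/no)
a, b = 167, 100   # arranged: delivered institutionally / did not
c, d = 100, 100   # not arranged: delivered institutionally / did not

odds_ratio = (a / b) / (c / d)

# equivalent logit view: beta is the log-odds difference, so exp(beta) = OR
beta = math.log(a / b) - math.log(c / d)
print(odds_ratio, math.exp(beta))  # identical by construction
```

The full analysis would of course adjust for covariates such as age at marriage, schooling and work status, in which case each exp(coefficient) is an adjusted odds ratio rather than a raw 2x2 one.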
Procedia PDF Downloads 330