Search results for: sensitive concept
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5421

1131 A Pink Pill Daily: On the Lust Enhancing Pill for Women and the Medicalization of Sexual Desire

Authors: Maaike Maria Augustina Hommes

Abstract:

This paper reviews the emergence of the recently approved lust enhancing pill for women (sold under the brand name Addyi) and its status as ‘medicine’ from a cultural studies perspective, in order to understand how the usage of the pill can be seen as a medicalization of sexual desire. It asks where this medicalization can be located so as to understand current notions of female sexuality. Via a close reading of a woman’s narration of her usage of the pill, which appeared in Shape Magazine, this paper critically reviews the pill’s relation to the concept of ‘cure’ and assesses the way this Pink Pill functions as a cure for the DSM-IV-based disorder called Hypoactive Sexual Desire Disorder (HSDD). It finds that the diagnosis of HSDD meant a huge relief: the woman was no longer ‘bad at life and bad at marriage’ but ‘just had this health issue’. In order to understand the different structures that conjoin in this expression of relief, the paper reviews the emergence of the sexual desire disorder within psychology and the way the loss of desire becomes localized in the brain. This localization is related to two ways of looking at the human body: the medical gaze, as described by Michel Foucault, and the neuromolecular gaze, as introduced by Nikolas Rose and Joelle M. Abi-Rached. Both of these penetrating gazes bring about a certain reductionism in which the human body is viewed either as an objectified ‘sick body’ or as a set of chemical reactions. To call these modes of looking reductionist is to assume that something is lost, or forgotten, in the act of reducing. Both what is gained in the formulation of the disorder and what is lost in its reduction within medical knowledge form the central inquiry of this paper. As such, this paper brings forward the way in which medicine and cultural narrative are deeply intertwined.
It is this coming together of different forces of subject formation that is addressed via an interdisciplinary and object-centered focus on the pink pill.

Keywords: disorder and cure, female sexual desire, medical gaze, neuromolecular gaze

Procedia PDF Downloads 245
1130 Detection of Ice Formation Processes Using Multiple High-Order Ultrasonic Guided Wave Modes

Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand

Abstract:

Icing causes significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration systems, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types; icing mechanisms, distributions, and patterns remain active research topics. The adhesion strength between ice and surfaces differs across icing environments, which makes the task of anti-icing very challenging. Techniques for different icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). Notably, most methods are oriented toward a particular sector, and adapting them to, or recommending them for, other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure’s design. The vast majority of these methods also require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting; they can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some quite promising information on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW).
These methods have the advantages of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. For this research, a universal set-up for acoustic measurement of ice formation under real conditions (temperature range from +24 °C to −23 °C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. Wave modes suitable for the detection of the ice formation phenomenon on a copper surface were selected, and the interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes, and it was demonstrated that the proposed ultrasonic technique can be successfully used to detect ice layer formation on a metal surface.
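The abstract does not give the decision rule used to flag ice formation. As a rough sketch only, one common approach with pitch-catch UGW measurements is to compare the received wave amplitude against an ice-free baseline and flag ice once the amplitude falls below a chosen fraction of it (the function name, threshold, and data below are hypothetical, not the authors' method):

```python
def detect_ice(baseline_amp, measurements, drop_ratio=0.5):
    """Flag ice formation when the received guided-wave amplitude
    falls below a fraction of the ice-free baseline amplitude."""
    return [amp / baseline_amp < drop_ratio for amp in measurements]

# A growing ice layer progressively attenuates the received mode:
flags = detect_ice(1.0, [0.95, 0.70, 0.45, 0.20])
```

In practice the mode's time of flight is often tracked alongside amplitude, since both change as the ice layer couples to the waveguide.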

Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing

Procedia PDF Downloads 39
1129 Prophylactic Replacement of Voice Prosthesis: A Study to Predict Prosthesis Lifetime

Authors: Anne Heirman, Vincent van der Noort, Rob van Son, Marije Petersen, Lisette van der Molen, Gyorgy Halmos, Richard Dirven, Michiel van den Brekel

Abstract:

Objective: Voice prosthesis leakage significantly impacts laryngectomized patients' quality of life, causing insecurity, frequent unplanned hospital visits, and costs. In this study, the concept of prophylactic voice prosthesis replacement was explored to prevent leakages. Study Design: A retrospective cohort study. Setting: Tertiary hospital. Methods: Device lifetimes and voice prosthesis replacements of a retrospective cohort, including all patients who underwent laryngectomy between 2000 and 2012 in the Netherlands Cancer Institute, were used to calculate the number of voice prostheses needed per patient per year when preventing 70% of leakages by prophylactic replacement. Various strategies for the timing of prophylactic replacement were considered: adaptive strategies based on the individual patient’s replacement history, and fixed strategies based on the results of patients with similar voice prosthesis or treatment characteristics. Results: Patients used a median of 3.4 voice prostheses per year (range 0.1-48.1). We found high inter- and intra-patient variability in device lifetime. With prophylactic replacement, this would become a median of 9.4 voice prostheses per year, i.e., replacement every 38 days, implying more than six additional voice prostheses per patient per year. The individual adaptive model showed that preventing 70% of leakages was impossible for most patients; only a median of 25% could be prevented. Monte Carlo simulations showed that prophylactic replacement is not feasible due to the high coefficient of variation (standard deviation/mean) in device lifetime. Conclusion: Based on our simulations, prophylactic replacement of voice prostheses is not feasible due to high inter- and intra-patient variation in device lifetime.
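The feasibility argument can be sketched numerically: to pre-empt 70% of leakages, the replacement interval must sit near the 30th percentile of the observed lifetime distribution, and a high coefficient of variation pushes that percentile, and hence the replacement rate, to impractical values. The following is an illustrative calculation with made-up lifetimes, not the authors' simulation code:

```python
import statistics

def coefficient_of_variation(lifetimes_days):
    """CoV = standard deviation / mean of observed device lifetimes."""
    return statistics.stdev(lifetimes_days) / statistics.mean(lifetimes_days)

def prophylactic_replacements_per_year(lifetimes_days, prevent_fraction=0.7):
    """Replace every T days, where T is the lifetime percentile below which
    only (1 - prevent_fraction) of failures occur; return devices per year."""
    ordered = sorted(lifetimes_days)
    idx = max(int(len(ordered) * (1 - prevent_fraction)) - 1, 0)
    interval_days = ordered[idx]
    return 365.0 / interval_days
```

With a wide lifetime spread, the 30th-percentile interval is short, so the devices-per-year figure balloons well past the observed median usage, which is the infeasibility the study reports.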

Keywords: voice prosthesis, voice rehabilitation, total laryngectomy, prosthetic leakage, device lifetime

Procedia PDF Downloads 106
1128 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such activities to an online cohort, and issues such as occupational health and safety in the students’ learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment that 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) was safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environments were collated across the class through shared documents, so that the group could carry out statistical analysis and have a full laboratory experience from their own homes.
For the virtual laboratory component, students viewed a laboratory safety induction and were advised on the characteristics of a good home laboratory space prior to carrying out their experiments. Following this activity, students observed laboratory demonstrations of the experimental series they would carry out in their own learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be utilized for longer than a typical physical laboratory class, allowing more time for understanding techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 147
1127 The Implication of Disaster Risk Identification to Cultural Heritage-The Scenarios of Flood Risk in Taiwan

Authors: Jieh-Jiuh Wang

Abstract:

Disasters happen frequently due to global climate change. Cultural heritage conservation should therefore be considered from the perspective of the surrounding environment and large-scale disasters. Most current thinking about the disaster prevention of cultural heritage in Taiwan is single-point thinking that emphasizes firefighting, decay prevention, and construction reinforcement while ignoring the environment as a whole. Traditional conservation cannot defend against the increasingly severe and frequent natural disasters caused by climate change, and more and more cultural heritage sites are confronting a high risk of disasters. This study adopts the perspective of risk identification and takes flood as the main disaster category. It analyzes the number and categories of cultural heritage sites that might suffer from disasters, using a geographic information system that integrates the latest flooding potential data from the National Fire Agency and the Water Resources Agency with basic data on cultural heritage. It examines the actual flood risk that cultural heritage sites face and serves as a basis for future risk measures and disaster-reduction preparation. The study finds a positive relationship between the disaster-affected situation of national cultural heritage and rainfall intensity. The order of impact level by floods is: historical buildings; historical sites designated by municipalities and counties; and national historical sites and relics. Traditional settlements and cultural landscapes, however, are not impacted; this might be related to taboo spaces in the traditional culture of site selection (concepts of disaster avoidance). As for regional distribution, cultural heritage in central and northern Taiwan suffers more severe flood impacts, while heritage in northern and eastern Taiwan suffers greater flooding depths.
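The overlay analysis described above reduces, at its core, to matching each heritage site against the flood-potential layer and tallying exposed sites per heritage category. A minimal sketch of that tally follows; the data layout, grid cells, and depth values are hypothetical stand-ins for the study's GIS layers:

```python
def heritage_flood_exposure(sites, flood_depth_m):
    """Count heritage sites per category whose grid cell has a
    non-zero simulated flooding depth."""
    exposed = {}
    for _name, category, cell in sites:
        if flood_depth_m.get(cell, 0.0) > 0.0:
            exposed[category] = exposed.get(category, 0) + 1
    return exposed

sites = [
    ("Site A", "historical building", (3, 7)),
    ("Site B", "historical building", (5, 1)),
    ("Site C", "traditional settlement", (9, 9)),
]
depths = {(3, 7): 1.2, (5, 1): 0.4}  # metres, from a flood-potential layer
```

Repeating the tally for flood layers at several rainfall intensities gives the rainfall-versus-impact relationship the study reports.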

Keywords: cultural heritage, flood, preventive conservation, risk management

Procedia PDF Downloads 314
1126 Solar-Plasma Reactors for a Zero-Emission Economy

Authors: Dassou Nagassou

Abstract:

The recent increase in the frequency and severity of climatic impacts throughout the world has put particular emphasis on the urgency of addressing anthropogenic greenhouse gas emissions. These emissions, mainly composed of carbon dioxide, are responsible for the global warming of planet Earth. Despite efforts to transition towards a zero-emission economy, manufacturing industries, electricity generation power plants, and the transportation sector continue to encounter challenges which hinder their progress towards full decarbonization. The growing energy demand from both developed and under-developed economies exacerbates the situation, and as a result, more carbon dioxide is discharged into the atmosphere. This situation imposes many constraints on the industries involved, i.e., manufacturing, transportation, and electricity generation, which must navigate stringent environmental regulations in order to remain profitable. Existing solutions such as energy-efficiency measures, green materials (life cycle analysis), and many more have fallen short of addressing the problem due to their incompatibility with existing infrastructures, low efficiencies, and prohibitive costs. The proposed technology exploits the synergistic interaction between solar radiation and plasma to boost the direct decomposition of carbon dioxide molecules while producing alternative fuels which can be used to sustain on-site high-temperature processes via 100% solar energy harvesting in the form of photons and electricity. The advantages of this technology and its ability to be easily integrated into existing systems make it appealing for industry, which can now afford to fast-track the path towards full decarbonization thanks to the solar-plasma reactor.
Despite promising experimental results which proved the viability of this concept, solar-plasma reactors require further investigation to understand the synergistic interactions between plasma and solar radiation for a potential technology scale-up.

Keywords: solar, non-equilibrium, plasma, reactor, greenhouse-gases, solar-fuels

Procedia PDF Downloads 38
1125 Study on Seismic Performance of Reinforced Soil Walls in Order to Offer Modified Pseudo Static Method

Authors: Majid Yazdandoust

Abstract:

This study proposes a displacement-based design method, using finite difference numerical modeling, for soil retaining walls reinforced with steel strips. Dynamic loading characteristics such as duration, frequency, and peak ground acceleration, the geometrical characteristics of the reinforced soil structure, and the site type are considered in order to correct the pseudo-static method and, finally, to introduce the pseudo-static coefficient as a function of the seismic performance level and the peak ground acceleration. For this purpose, the influence of dynamic loading characteristics, reinforcement length, height of the reinforced system, and site type on the seismic behavior of steel-strip reinforced soil retaining walls is investigated. Numerical results illustrate that the seismic response of this type of wall is highly dependent on cumulative absolute velocity, maximum acceleration, height, and reinforcement length, such that the reinforcement length can be identified as the main factor governing the shape of failure. Consideration of the loading parameters, the mechanically stabilized earth wall parameters, and the site type showed that the method used in this study leads to more efficient designs than the methods generally suggested in codes, which are usually based on the limit-equilibrium concept. The outputs show the over-estimation of equilibrium design methods in comparison with the displacement-based method proposed here.
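The paper's fitted expression for the pseudo-static coefficient is not reproduced in the abstract. As an illustrative sketch only, the proposed functional form, k_h depending on the target performance level and the peak ground acceleration, can be written as a performance-level multiplier applied to PGA/g; the multiplier values below are entirely hypothetical placeholders, not the study's fitted values:

```python
# Hypothetical multipliers by target performance level (NOT the paper's values)
ALPHA = {"operational": 0.8, "life_safety": 0.5, "collapse_prevention": 0.35}

def pseudo_static_coefficient(pga_g, performance_level):
    """k_h = alpha(performance level) * PGA/g: the functional form the
    study proposes, with placeholder alpha values."""
    return ALPHA[performance_level] * pga_g
```

Stricter performance targets (smaller tolerated displacements) would map to larger multipliers, which is what makes the coefficient performance-dependent rather than a single code value.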

Keywords: pseudo static coefficient, seismic performance design, numerical modeling, steel strip reinforcement, retaining walls, cumulative absolute velocity, failure shape

Procedia PDF Downloads 462
1124 Personal Data Protection: A Legal Framework for Health Law in Turkey

Authors: Veli Durmus, Mert Uydaci

Abstract:

Every patient who needs medical treatment must share health-related personal data with healthcare providers. Personal health data therefore plays an important role in making health decisions and identifying health threats during every encounter between a patient and caregivers. In other words, health data can be defined as private and sensitive information that is protected by various health laws and regulations. In many cases, the data are an outcome of the confidential relationship between patients and their healthcare providers. Globally, almost all nations have their own laws, regulations, or rules to protect personal data. There is a variety of instruments that allow authorities to use health data or to set barriers to data sharing across international borders. For instance, Directive 95/46/EC of the European Union (EU) (also known as the EU Data Protection Directive) establishes harmonized rules within European borders. In addition, the General Data Protection Regulation (GDPR) will set further common principles in 2018. Because of Turkey's close policy relationship with the EU, this study provides not only information on these regulations and directives but also on the role they play in the legislative process in Turkey. Even if the decision is controversial, the Board has recently stated that private and public healthcare institutions are responsible for the patient call systems used by doctors to call people waiting outside a consultation room, in order to prevent unlawful processing of, and unlawful access to, personal data during treatment. In Turkey, the vast majority of private and public health organizations provide a service that uses personal data (i.e., the patient’s name and ID number) to call the patient. According to the Board’s decision, hospitals and other healthcare institutions are obliged to take all necessary administrative precautions and provide technical support to protect patient privacy.
However, this requirement is not met effectively and efficiently in most health services. For this reason, it is important to draw a legal framework for personal health data by stating the main purpose of this regulation and how to deal with complicated issues concerning personal health data in Turkey. The research is descriptive, covering data protection law for healthcare settings in Turkey. Primary as well as secondary data have been used for the study. The primary data include the information collected under current national and international regulations and laws. Secondary data include publications, books, journals, and empirical legal studies. Consequently, privacy and data protection regimes in health law show that there are obligations, principles, and procedures which shall be binding upon natural or legal persons who process health-related personal data. A comparative approach shows significant differences among EU member states due to different legal competencies, policies, and cultural factors. This study provides theoretical and practitioner implications by highlighting the need to illustrate the relationship between privacy and confidentiality in personal data protection in health law. Furthermore, this paper helps to define the legal framework for health law case studies on data protection and privacy.

Keywords: data protection, personal data, privacy, healthcare, health law

Procedia PDF Downloads 189
1123 Engine with Dual Helical Crankshaft System Operating at an Overdrive Gear Ratio

Authors: Anierudh Vishwanathan

Abstract:

This paper suggests a new design of the crankshaft system that allows a low-revving engine to be used for applications that normally require a high-revving engine operating at the same power. It does so by converting the extra, unnecessary torque obtained from a low-revving engine into angular velocity of the crankshaft. This improves the fuel economy of the vehicle, because low-revving engines run more effectively on lean air-fuel mixtures and suffer less wear and tear owing to reduced rubbing of the piston rings against the cylinder walls. If the crankshaft with the proposed design is used in a low-revving engine, it will deliver the same torque and speed as a high-revving engine operating at the same power, but with better fuel economy. The new engine thus gives the benefits of both a low-revving and a high-revving engine. The proposed crankshaft design is achieved by changing the design of the crankweb so that it functions both as a counterweight and as a helical gear that can transfer power to a secondary gear shaft incorporated into the crankshaft system. The crankshaft and the secondary gear shaft operate at an overdrive ratio; the crankshaft is now a two-shaft system instead of a single-shaft system. The newly designed crankshaft is mounted on bearings instead of being connected directly to the flywheel of the engine. It transmits power to the secondary shaft, which rotates the flywheel, and the rotary motion is then transmitted to the transmission system as usual. In this design, the concept of power transmission is incorporated into the crankshaft system itself. In this paper, the crankshaft and the secondary shaft have been designed in such a way that, at any instant of time, only half the crankwebs are meshed with the secondary shaft.
For example, during one revolution of the crankshaft, if the first, second, seventh, and eighth crankwebs mesh with the secondary shaft for the first half revolution, then the third, fourth, fifth, and sixth crankwebs mesh with it for the next half revolution. This paper also analyses the proposed crankshaft design for safety against fatigue failure: finite element analysis of the crankshaft has been performed and the resultant stresses calculated.
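The alternating meshing schedule in the example above can be sketched directly: webs 1, 2, 7, and 8 mesh during the first half revolution, and webs 3 to 6 during the second (a minimal sketch; the angle is measured in degrees of crankshaft rotation):

```python
def meshed_crankwebs(crank_angle_deg):
    """Return the crankwebs meshed with the secondary shaft at a given
    crank angle: 1, 2, 7, 8 for the first half revolution, 3-6 for the second."""
    in_first_half = crank_angle_deg % 360 < 180
    return [1, 2, 7, 8] if in_first_half else [3, 4, 5, 6]
```

At every angle exactly four of the eight webs are engaged, satisfying the half-meshed condition stated in the paper.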

Keywords: low revving, high revving, secondary shaft, partial meshing

Procedia PDF Downloads 249
1122 Occupational Exposure to Electromagnetic Fields Can Increase the Release of Mercury from Dental Amalgam Fillings

Authors: Ghazal Mortazavi, S. M. J. Mortazavi

Abstract:

Electricians, power line engineers, power station workers, welders, aluminum reduction workers, MRI operators, and railway workers are occupationally exposed to different levels of electromagnetic fields. Mercury is among the most toxic metals, and dental amalgam fillings cause significant exposure to elemental mercury vapour in the general population. Today, substantial evidence indicates that mercury, even at low doses, may lead to toxicity. Increased release of mercury from dental amalgam fillings after exposure to MRI or to microwave radiation emitted by mobile phones has previously been shown by our team, and our recent studies on the effects of stronger magnetic fields entirely confirmed these findings. From another point of view, we have also shown that papers reporting no increased release of mercury after MRI may have methodological flaws. Over the past several years, our lab has focused on the health effects of exposure of laboratory animals and humans to different sources of electromagnetic fields, such as mobile phones and their base stations, mobile phone jammers, laptop computers, radars, dental cavitrons, and MRI. As a strong association between exposure to electromagnetic fields and mercury level has been found in our studies, our findings lead us to the conclusion that occupational exposure to electromagnetic fields in workers with dental amalgam fillings can lead to elevated levels of mercury. Studies reporting that exposure to mercury can be a risk factor for Alzheimer’s disease (AD) due to the accumulation of amyloid beta protein (Aβ) in the brain, and those reporting that long-term occupational exposure to high levels of electromagnetic fields can increase the risk of Alzheimer's disease and dementia in male workers, support our concept and confirm the significant role of occupational exposure to electromagnetic fields in increasing mercury levels in workers with amalgam fillings.

Keywords: occupational exposure, electromagnetic fields, workers, mercury release, dental amalgam, restorative dentistry

Procedia PDF Downloads 395
1121 Hierarchical Manganese and Nickel Selenide based Ultra-efficient Electrode Material for All-Solid-State Asymmetric Supercapacitors with Extended Energy Efficacy

Authors: Siddhant Srivastav, Soumyaranjan Mishra, Sumanta Kumar Meher

Abstract:

Researchers are attempting to develop highly efficient electrochemical energy storage technologies as a result of the phenomenal advancement of portable electronic devices. Because of their improved electrical conductivity and narrower band gap, transition metal selenide-based nanostructures have piqued the interest of many researchers in this field. Based on this concept, we present a simple anion-exchange hydrothermal synthesis method for a manganese and nickel based selenide (Mn/NiSe2) nanostructure for use in all-solid-state asymmetric supercapacitors. According to comprehensive physicochemical characterizations, the material has low crystallinity, a distinct porous microstructure, and significant bonding contact between the metal and the selenium. Electrochemical investigations of the Mn/NiSe2 electrode material revealed supercapacitive charge-discharge properties, excellent electro-kinetic reversibility, and minimal charge transfer resistance (Rct). Furthermore, the all-solid-state asymmetric supercapacitor device assembled using Mn/NiSe2 as the positive electrode, nitrogen-doped reduced graphene oxide (N-rGO) as the negative electrode, and PVA-KOH gel as the electrolyte/separator exhibits good redox behaviour, excellent charge-discharge properties with negligible voltage (IR) drop, and low impedance characteristics. The solid-state asymmetric supercapacitor device (Mn/NiSe2||N-rGO) demonstrated the power density of ultracapacitors together with the energy density of rechargeable batteries. In conclusion, Mn/NiSe2 is proposed as an outstanding potential electrode material for the next generation of all-solid-state asymmetric supercapacitors.

Keywords: anion exchange, asymmetric supercapacitor, supercapacitive charge-discharge, voltage drop

Procedia PDF Downloads 77
1120 Modelling Affordable Waste Management Solutions for India

Authors: Pradip Baishya, D. K. Mahanta

Abstract:

Rapid and unplanned urbanisation in most cities of India has progressively worsened the problem of managing municipal waste over the past few years. With insufficient infrastructure and funds, municipalities in most cities are struggling to cope with the pace of waste generation, and open dumping is widely practised as a cheaper option. Scientific disposal of waste on such a large scale, involving segregation, recycling, landfill, and incineration, requires sophisticated and expensive plants. In an effort to find affordable and simple solutions to this burning issue of waste disposal, a semi-mechanized plant has been designed around the concept of a zero-waste community. The fabrication of the waste management unit was carried out by local skilled workers using locally available materials. A residential colony in the city of Guwahati was chosen, which is seen as typical of most Indian cities in terms of size and the key issues surrounding waste management. Scientific management and disposal of waste on site is carried out on the principle of reduce, reuse, and recycle, from segregation to composting. It is a local community participatory model involving all stakeholders in the process, namely rag pickers, residents, the municipality, and local industry. Studies were conducted to test the plant as a revenue-earning, self-sustaining model over the long term. The current working efficiency of the plant for segregation was found to be 1 kg per minute. Identifying bottlenecks in the success of the model, and collecting data on the efficiency of the plant and the economics of its fabrication, were part of the study. Similar satellite waste management plants could potentially supplement the waste management systems of municipalities in similarly sized cities in India or South East Asia facing similar waste disposal issues.

Keywords: affordable, rag pickers, recycle, reduce, reuse, segregation, zero waste

Procedia PDF Downloads 287
1119 Curcumin and Its Analogues: Potent Natural Antibacterial Compounds against Staphylococcus aureus

Authors: Prince Kumar, Shamseer Kulangara Kandi, Diwan S. Rawat, Kasturi Mukhopadhyay

Abstract:

Staphylococcus aureus is the most pathogenic of all staphylococci, a major cause of nosocomial infections, and known for acquiring resistance towards various commonly used antibiotics. Due to the widespread use of synthetic drugs, clinicians are now facing a serious threat in healthcare. The increasing resistance in staphylococci has created a need for alternatives to these synthetic drugs. One of the alternatives is a natural plant-based medicine for both disease prevention as well as the treatment of chronic diseases. Among such natural compounds, curcumin is one of the most studied molecules and has been an integral part of traditional medicines and Ayurveda from ancient times. It is a natural polyphenolic compound with diverse pharmacological effects, including anti-inflammatory, antioxidant, anti-cancerous and antibacterial activities. In spite of its efficacy and potential, curcumin has not been approved as a therapeutic agent yet, because of its low solubility, low bioavailability, and rapid metabolism in vivo. The presence of central β-diketone moiety in curcumin is responsible for its rapid metabolism. To overcome this, in the present study, curcuminoids were designed by modifying the central β-diketone moiety of curcumin into mono carbonyl moiety and their antibacterial potency against S. aureus ATCC 29213 was determined. Further, the mode of action and hemolytic activity of the most potent curcuminoids were studied. Minimum inhibitory concentration (MIC) and in vitro killing kinetics were used to study the antibacterial activity of the designed curcuminoids. For hemolytic assay, mouse Red blood cells were incubated with curcuminoids and hemoglobin release was measured spectrophotometrically. 
The mode of action of the curcuminoids was analysed by membrane depolarization assay using the membrane potential sensitive dye 3,3’-dipropylthiacarbocyanine iodide (DiSC3(5)) through spectrofluorimetry and by membrane permeabilization assay using calcein-AM through flow cytometry. Antibacterial screening of the designed library (61 curcuminoids) revealed excellent in vitro potency of six compounds against S. aureus (MIC 8 to 32 µg/ml). Moreover, these six compounds were found to be non-hemolytic up to 225 µg/ml, which is much higher than their corresponding MIC values. The in vitro killing kinetics data showed five of these lead compounds to be bactericidal, causing >3 log reduction in the viable cell count within 4 hrs at 5 × MIC, while the sixth compound was found to be bacteriostatic. The depolarization assay revealed that all six curcuminoids caused depolarization in their corresponding MIC range. Further, the membrane permeabilization assay showed that all six curcuminoids caused permeabilization at 5 × MIC within 2 hrs. The membrane depolarization and permeabilization caused by the curcuminoids were found to correlate with their corresponding killing efficacy. Both these assays point out that membrane perturbation might be a primary mode of action for these curcuminoids. Overall, the present study yields six water-soluble, non-hemolytic, membrane-active curcuminoids and provides an impetus for further research on the therapeutic use of these lead curcuminoids against S. aureus.
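The bactericidal criterion used above (a >3 log reduction in viable count within the observation window) is simple to compute; a minimal sketch with hypothetical CFU counts, not the study's data:

```python
import math

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml):
    """Log10 reduction in viable cell count after treatment."""
    return math.log10(initial_cfu_per_ml / final_cfu_per_ml)

def classify(initial, final, threshold=3.0):
    """A compound is conventionally called bactericidal at >3 log kill."""
    return "bactericidal" if log_reduction(initial, final) > threshold else "bacteriostatic"

# Hypothetical counts: 1e6 CFU/ml falls to 1e2 CFU/ml within 4 h at 5 x MIC.
print(classify(1e6, 1e2))  # bactericidal (a 4 log reduction)
```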

Keywords: antibacterial, curcumin, minimum inhibitory concentration, Staphylococcus aureus

Procedia PDF Downloads 147
1118 Measurement and Monitoring of Graduate Attributes via iCGPA Implementation and ACADEMIA Programming: UNIMAS Case Study

Authors: Shanti Faridah Salleh, Azzahrah Anuar, Hamimah Ujir, Rohana Sapawi, Wan Hashim Wan Ibrahim, Noraziah Abdul Wahab, Majina Sulaiman, Raudhah Ahmadi, Al-Khalid Othman, Johari Abdullah

Abstract:

Integrated Cumulative Grade Point Average or iCGPA is an evaluation and reporting system that represents a comprehensive development of students’ achievement in their academic programs. Universiti Malaysia Sarawak, UNIMAS, started its implementation of iCGPA in 2016. iCGPA is driven by the Outcome-Based Education (OBE) system that has long been integrated into higher education in Malaysia. iCGPA is not only a tool to enhance the OBE concept through constructive alignment but also an integrated mechanism to assist various stakeholders in making decisions or planning for program improvement. The outcome of this integrated system is the reporting of students’ academic performance in terms of the cognitive (knowledge), psychomotor (skills), and affective (attitude) abilities which the students acquire throughout the duration of their study. The iCGPA reporting illustrates the attainment of students’ attributes in the eight domains of learning outcomes listed in the Malaysian Qualifications Framework (MQF). This paper discusses the implementation of iCGPA in UNIMAS and the policy and strategy used to direct the whole university to implement it. The steps and challenges in integrating the existing Outcome-Based Education system and utilising iCGPA as a tool to quantify the students’ achievement are also highlighted. Finally, ACADEMIA, a dedicated centralised program developed to ensure that the implementation of iCGPA is a success, is presented. This paper discusses the structure and analysis of the ACADEMIA program and concludes with an analysis of the improvements made to the implementation of constructive alignment in all 40 programs involved in the iCGPA implementation.
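As an illustration of the kind of aggregation a reporting system like ACADEMIA might perform, the sketch below rolls assessment marks mapped to learning-outcome domains into per-domain attainment percentages. The domain names, assessments, and marks are invented for illustration and are not UNIMAS's actual scheme:

```python
def domain_attainment(marks):
    """Aggregate (score, max_score) pairs per learning-outcome domain
    into a percentage attainment for that domain."""
    result = {}
    for domain, pairs in marks.items():
        earned = sum(score for score, _ in pairs)
        total = sum(max_score for _, max_score in pairs)
        result[domain] = round(100.0 * earned / total, 1)
    return result

# Hypothetical marks mapped to three MQF-style domains (illustrative only):
marks = {
    "knowledge":   [(18, 20), (40, 50)],  # e.g. quiz + exam questions
    "psychomotor": [(8, 10)],             # e.g. lab task
    "affective":   [(4, 5), (9, 10)],     # e.g. teamwork rubric
}
print(domain_attainment(marks))
# {'knowledge': 82.9, 'psychomotor': 80.0, 'affective': 86.7}
```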

Keywords: constructive alignment, holistic graduates, mapping of assessment, programme outcome

Procedia PDF Downloads 185
1117 The Conceptualization of Patient-Centered Care in Latin America: A Scoping Review

Authors: Anne Klimesch, Alejandra Martinez, Martin Härter, Isabelle Scholl, Paulina Bravo

Abstract:

Patient-centered care (PCC) is a key principle of high-quality healthcare. In Latin America, research on and promotion of PCC have taken place in the past. However, thorough implementation of PCC in practice is still missing. In Germany, an integrative model of patient-centeredness has been developed by synthesis of diverse concepts of PCC. The model could serve as a point of reference for further research on the implementation of PCC. However, it is predominantly based on research from Europe and North America. This scoping review, therefore, aims to accumulate research on PCC in Latin America in the past 15 years and analyse how PCC has been conceptualized. The resulting overview of PCC in Latin America will be a foundation for a subsequent study aiming at the adaptation of the integrative model of patient-centeredness to the Latin American health care context. Scientific databases (MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus, Web of Science, SCIELO, Redalyc.) will be searched, and reference and citation tracking will be performed. Studies will be included if they were carried out in Latin America, investigated PCC in any clinical and community setting (public and private), and were published in English, Spanish, French, or Portuguese since 2006. Furthermore, any theoretical framework or conceptual model to guide how PCC is conceptualized in Latin America will be included. Two reviewers will be responsible for the identification of articles, screening of records, and full-text assessment. The results of the scoping review will be used in the development of a mixed-methods study with the aim to understand the needs for PCC, as well as barriers and facilitators in Latin America. Based on the outcomes, the integrative model of PCC will be translated to Spanish and adapted to the Latin American context. The integrative model will enable the dissemination of the concept of PCC in Latin America and will provide a common ground for further research on the topic. 
The project will thereby make an important contribution to an evidence-based implementation of PCC in Latin America.

Keywords: conceptual framework, integrative model of PCC, Latin America, patient-centered care

Procedia PDF Downloads 169
1116 Minimum Wages and Its Impact on Agriculture and Non Agricultural Sectors with Special Reference to Recent Labour Reforms in India

Authors: Bikash Kumar Malick

Abstract:

Labour reform is a much celebrated theme for policy makers, yet at the same time it is a widely misunderstood concept viewed with skepticism, even by the educated masses in India. One of the widely discussed topics that needs in-depth examination is India’s labour laws. Such an examination may help identify the exact requirements of labour reform by making the labour laws simpler and more concise in form and implementation. It is also necessary to guide states in India in making laws on the subject, as the Indian Constitution itself is federal in form and unitary in spirit. Recently, the Code on Wages Bill has been introduced in the Indian Parliament, while three other codes are waiting in the same line; these codes highlight the simplified features of labour laws intended to enable labour reform in a succinct manner. However, they still create confusion in the minds of people. This article attempts to dispel that confusion and to correlate the labour reforms of the centre and the states, which together generate employment and make growth sustainable in India, while providing clear public understanding. The time is also ripe to minimize apprehension about the forthcoming labour laws, simplified into different codes in India. This article highlights the need for labour reform and its possible impact. It also examines higher rates of minimum wages and their links with coverage of the agricultural and non-agricultural sectors (including mines) over time. It further takes into consideration minimum wages in the central and state spheres, which are linked to the Consumer Price Index to account for the living standard of workers, and examines the cause and effect between minimum wage and output in both the agricultural and non-agricultural sectors using regression analysis. Increases in the minimum wage have actually strengthened sustainable output.
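The regression analysis between minimum wage and sectoral output could, in its simplest form, be an ordinary least squares fit of output on wage. A sketch on made-up index series (illustrative numbers, not the paper's data):

```python
def ols(x, y):
    """Ordinary least squares for y = a + b*x (single regressor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical series: CPI-linked minimum wage index vs an agricultural
# output index; these values are invented for illustration only.
wage   = [100, 105, 112, 120, 131]
output = [200, 210, 224, 240, 262]
a, b = ols(wage, output)
print(round(b, 2))  # slope: output-index points gained per wage-index point
```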

Keywords: codes of wages, Indian Constitution, minimum wage, labour laws, labour reforms

Procedia PDF Downloads 173
1115 Sequence Component-Based Adaptive Protection for Microgrids Connected Power Systems

Authors: Isabelle Snyder

Abstract:

Microgrid protection presents challenges to conventional protection techniques due to the low induced fault current. Protection relays present in microgrid applications require a combination of settings groups to adjust based on the architecture of the microgrid in islanded and grid-connected mode. In a radial system where the microgrid is at the other end of the feeder, directional elements can be used to identify the direction of the fault current and switch settings groups accordingly (grid-connected or microgrid-connected). However, with multiple microgrid connections, this concept becomes more challenging, and the direction of the current alone is not sufficient to identify the source of the fault current contribution. ORNL has previously developed adaptive relaying schemes through other DOE-funded research projects that will be evaluated and used as a baseline for this research. The four protection techniques in this study are the following: (1) Adaptive Current-only Protection System (ACPS), (2) Intentional Unbalanced Control for Protection Control (IUCPC), (3) Adaptive Protection System with Communication Controller (APSCC), and (4) Adaptive Model-Driven Protective Relay (AMDPR). The first two methods focus on identifying the islanded mode without communication, by monitoring the current sequence component generated by the system (ACPS) or induced with inverter control during islanded mode (IUCPC), to identify the islanding condition without communication at the relay to adjust the settings. These two methods are used as a backup to the APSCC, which relies on a communication network to communicate the islanded configuration to the system components. The fourth method relies on a short circuit model inside the relay that is used in conjunction with communication to adjust the system configuration, compute the fault current, and adjust the settings accordingly.
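The current sequence components monitored by a scheme like ACPS are obtained from the three phase currents via the Fortescue transform; a small sketch with assumed phasor values (a balanced set, so only the positive sequence survives):

```python
import cmath
import math

A = cmath.exp(1j * 2 * math.pi / 3)  # Fortescue operator a = 1 at 120 degrees

def sequence_components(ia, ib, ic):
    """Fortescue transform: zero, positive, and negative sequence currents."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + A * ib + A**2 * ic) / 3
    i2 = (ia + A**2 * ib + A * ic) / 3
    return i0, i1, i2

# Assumed balanced phasors, 100 A magnitude, 120 degrees apart:
ia = cmath.rect(100, 0)
ib = cmath.rect(100, -2 * math.pi / 3)
ic = cmath.rect(100, 2 * math.pi / 3)
i0, i1, i2 = sequence_components(ia, ib, ic)
print(abs(i0), abs(i1), abs(i2))  # approximately 0, 100, 0
```

An unbalance (as induced deliberately in IUCPC) would instead show up as a nonzero negative-sequence magnitude.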

Keywords: adaptive relaying, microgrid protection, sequence components, islanding detection, communication controlled protection, integrated short circuit model

Procedia PDF Downloads 62
1114 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs which are used by mothers can affect the composition of milk in a way that is not only unsuitable, but also toxic for infants. A mother consuming permeable drugs during that sensitive period could cause serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with their M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value following the Malone classification: 1: drugs with M/P > 1, which are considered high risk; 2: drugs with M/P < 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Later on, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict the penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed in order to obtain a model to predict the penetration ratio of compounds. Only structural descriptors with p-value < 0.1 remained in the final model. 
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands. Lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. It was shown in the logistic regression models that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the fats of milk. The pH of milk is acidic, and because of that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may show lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to higher speed in the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models can contribute to that as well.
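A single-descriptor version of such a binary logistic regression can be sketched as follows. The data are synthetic, and the split around 5 hydrogen bond acceptors only echoes the cut-off reported above for illustration; the study itself used SPSS on real descriptor data:

```python
import math

def sigmoid(z):
    # Clamp extreme arguments to avoid overflow in exp.
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.05, epochs=2000):
    """Single-feature binary logistic regression fitted by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Synthetic illustration: few H-bond acceptors -> high risk (M/P > 1, class 1),
# many acceptors (> 5) -> low risk (class 0).
hba  = [1, 2, 3, 4, 7, 8, 9, 10]
risk = [1, 1, 1, 1, 0, 0, 0, 0]
w, b = fit_logistic(hba, risk)

def predict(x):
    return int(sigmoid(w * x + b) >= 0.5)

print(predict(2), predict(9))  # 1 0
```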

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 44
1113 Hermeneutics: Comparative Study of Shri Guru Granth Sahib's Schools of Interpretation

Authors: Amandeep Kaur

Abstract:

All religions enlighten truth which provides spiritual tranquility. But the language of these holy books is not easy to understand, because they have divine language. That is why hermeneutical study is necessary to understand these scriptures. There is a separate theoretical framework for studying each discipline of language, literature, religion, etc. Similarly, the discipline of interpretation has its own theoretical framework, known as hermeneutics. It is a science of interpretation that puts forward the best ways and methods of interpretation. In the modern world, hermeneutics is considered a theoretical-cum-philosophical discipline. It is the broad study of understanding texts. Hermeneutics is especially related to the study of religious scriptures like the Bible, the Qur'an, the Vedas, and the Shri Guru Granth Sahib, among many others. It is mainly a Western concept with a long tradition, because it used the Bible as the foremost holy scripture for definition and interpretation. The discipline of Indian hermeneutics was led by the Mimamsa school. The reference to the word hermeneutics in the works of ancient Greek philosophers indicates the antiquity of this word. Shri Guru Granth Sahib's schools of interpretation, like Udasi, Nirmala, Sevapanthi, and Gyani, came into existence to interpret the discourse of Shri Guru Granth Sahib. These are sects of Sikhism and have contributed greatly to interpreting and preaching Guru Granth Sahib's revelation. This research paper presents a comparative study of these sects' methods, tools, and styles of interpreting the meaning of this holy book. Interpretation is basically a text-based process, so all these schools have chosen Guru Granth Sahib for textual study. Some of the schools have interpreted the whole of Guru Granth Sahib, while some have interpreted only prominent banis, i.e., Japuji Sahib, Anand Sahib, Assa-di-war, etc. 
This study will also throw light on the sects' historical backgrounds and contributions. The paper concludes that all the schools have interpreted gurbani according to their own philosophical and theological points of view. These schools show many similarities and differences in their ways of interpretation, which will be discussed briefly.

Keywords: Gyani, hermeneutics, Mimamsa, Nirmala, Sevapanthi, Udasi

Procedia PDF Downloads 157
1112 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 33
1111 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of the total employment. Existing studies show that the current business environment is characterized as highly turbulent and strongly influenced by modern information and communication technologies, thus forcing SMEs to experience more severe challenges in maintaining their existence and expanding their business. To support SMEs at improving their competitiveness, researchers recently turned their focus on applying data mining techniques to build risk and growth prediction models. However, data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutes, thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large scale collection and analysis of potentially valuable data from various online platforms, including companies’ websites. While WM methods have been frequently studied to anticipate growth of sales volume for e-commerce platforms, their application for assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears a great potential in revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. 
In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. The data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms such as the Support Vector Machine, Random Forest and Artificial Neural Network are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies, in order to assess the contribution of growth indicators retrieved from the Web for increasing the predictive power of the model.
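A toy version of the WM feature-extraction step might look like the following; the features and the keyword list are assumptions for illustration, not the study's actual feature set:

```python
from html.parser import HTMLParser

class SiteFeatureExtractor(HTMLParser):
    """Toy web-mining pass: count links, images, and growth-related
    keywords in a page. Features chosen purely for illustration."""
    KEYWORDS = ("export", "hiring", "new product", "expansion")

    def __init__(self):
        super().__init__()
        self.links = 0
        self.images = 0
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1
        elif tag == "img":
            self.images += 1

    def handle_data(self, data):
        self.text.append(data.lower())

    def features(self):
        text = " ".join(self.text)
        return {
            "n_links": self.links,
            "n_images": self.images,
            "n_keywords": sum(text.count(k) for k in self.KEYWORDS),
        }

page = ("<html><body><p>We are hiring for our expansion into export "
        "markets.</p><a href='/jobs'>Jobs</a></body></html>")
ex = SiteFeatureExtractor()
ex.feed(page)
print(ex.features())  # {'n_links': 1, 'n_images': 0, 'n_keywords': 3}
```

Feature vectors like this would then be fed, alongside the insurer's growth labels, to the classifiers compared in the study.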

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 241
1110 Identification of Body Fluid at the Crime Scene by DNA Methylation Markers for Use in Forensic Science

Authors: Shirin Jalili, Hadi Shirzad, Mahasti Modarresi, Samaneh Nabavi, Somayeh Khanjani

Abstract:

Identifying the source tissue of biological material found at crime scenes can be very informative in a number of cases. Despite their usefulness, current visual, catalytic, enzymatic, and immunologic tests for presumptive and confirmatory tissue identification are applicable only to a subset of samples, may suffer limitations such as low specificity and lack of sensitivity, and are substantially impacted by environmental insults. In addition, their results are operator-dependent. Recently, the possibility of discriminating body fluids using mRNA expression differences in tissues has been described, but the lack of long-term stability of that molecule and the need to normalize samples for each individual are limiting factors. The use of DNA should solve these issues because of its long-term stability and specificity to each body fluid. Cells in the human body have a unique epigenome, which includes differences in DNA methylation in the promoters of genes. DNA methylation, which occurs at the 5′-position of the cytosine in CpG dinucleotides, has great potential for forensic identification of body fluids, because tissue-specific patterns of DNA methylation have been demonstrated, and DNA is less prone to degradation than proteins or RNA. Previous studies have reported several body fluid-specific DNA methylation markers. The presence or absence of a methyl group on the 5′ carbon of the cytosine pyrimidine ring in CpG dinucleotide regions called ‘CpG islands’ dictates whether the gene is expressed or silenced in the particular body fluid. Methylation patterns at tissue-specific differentially methylated regions (tDMRs) have been described as stable and specific, making them excellent markers for tissue identification. The results demonstrate that methylation-based tissue identification is more than a proof of concept. The methodology holds promise as another viable forensic DNA analysis tool for the characterization of biological materials.
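A minimal sketch of how tDMR methylation levels could discriminate body fluids is to match a sample's methylation fractions against reference profiles and pick the closest; the marker names and values below are invented for illustration and are not the markers reported in the literature:

```python
# Hypothetical reference methylation fractions (0-1) at three tDMR markers.
# Marker names and values are invented for illustration only.
REFERENCE = {
    "blood":  {"cg_A": 0.9, "cg_B": 0.1, "cg_C": 0.5},
    "saliva": {"cg_A": 0.2, "cg_B": 0.8, "cg_C": 0.4},
    "semen":  {"cg_A": 0.1, "cg_B": 0.2, "cg_C": 0.9},
}

def identify_fluid(sample):
    """Return the body fluid whose reference profile has the smallest
    summed absolute difference from the sample's methylation values."""
    def distance(ref):
        return sum(abs(sample[m] - ref[m]) for m in ref)
    return min(REFERENCE, key=lambda fluid: distance(REFERENCE[fluid]))

print(identify_fluid({"cg_A": 0.85, "cg_B": 0.15, "cg_C": 0.55}))  # blood
```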

Keywords: DNA methylation, forensic science, epigenome, tDMRs

Procedia PDF Downloads 404
1109 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that involves semantic technology to retrieve information and display it meaningfully. Semantic search engines are not widely used over the web, as the majority are still in beta stage or under construction. Current applications in semantic search face many problems; the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google's index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page's content generated by Google. The generated sentences are sent to the Google engine one by one, and all results are ranked again together with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We have conducted experiments with 6 different queries. We observed that the rankings produced by QESM frequently differed from Google's originally generated pages. With our experimental queries, QESM frequently achieves better accuracy than Google. In some worst cases, it behaves like Google.
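The expansion-and-re-ranking core of QESM can be sketched as follows, with a tiny hard-coded synonym table standing in for the lexical database and bag-of-words cosine as the similarity measure (the paper's actual measures may differ):

```python
import itertools
import math
from collections import Counter

# Stand-in for a lexical database: each term maps to itself plus synonyms.
SYNONYMS = {"cheap": ["cheap", "inexpensive"], "car": ["car", "automobile"]}

def expand(query):
    """All semantically analogous queries from synonym combinations."""
    options = [SYNONYMS.get(t, [t]) for t in query.split()]
    return [" ".join(combo) for combo in itertools.product(*options)]

def cosine(a, b):
    """Bag-of-words cosine similarity between two sentences."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rerank(query, snippets):
    """Score each snippet by its best similarity to any expanded query."""
    queries = expand(query)
    return sorted(snippets,
                  key=lambda s: max(cosine(q, s) for q in queries),
                  reverse=True)

pages = ["buy an inexpensive automobile today", "expensive sports watches"]
print(rerank("cheap car", pages)[0])  # the automobile snippet ranks first
```

In the full system, the snippets would be the page descriptions returned by Google for each expanded query.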

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 403
1108 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites

Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga

Abstract:

Improvement of the thermal performance of buildings is a high concern for the construction industry. With the increase in environmental issues, new types of construction materials are being developed. These include bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer is still facing some issues. With a high porosity, the mass transfer is more important in those materials than in mineral insulation ones. Therefore, they can be more sensitive to moisture disorders such as mold growth, condensation risks or decrease of the wall energy efficiency. For this reason, the initial moisture content on the construction site is a piece of crucial knowledge. Measuring moisture content in a laboratory is a mastered task. Diverse methods exist but the easiest and the reference one is gravimetric. A material is weighed dry and wet, and its moisture content is mathematically deduced. Non-destructive methods (NDT) are promising tools to determine in an easy and fast way the moisture content in a laboratory or on construction sites. However, the quality and reliability of the measures are influenced by several factors. Classical NDT portable devices usable on-site measure the capacity or the resistivity of materials. Water’s electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials, and some of them can be adapted for construction materials with calibration curves. Anyway, these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of biobased insulation materials. 
We determine which of the capacitive and resistive methods is the more accurate and which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fibers of different densities (53 and 158 kg/m3), and a mix of linen, cotton, and hemp. Since it seems important to assess the behavior of a mineral material as well, glass wool is also measured. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content. These levels are set using a climatic chamber, by fixing the relative humidity level at a constant temperature. The mass-based moisture contents measured are considered reference values, and the results given by the moisture meters are compared to them. A complete analysis of the measurement uncertainty is also done. These results are used to analyze the reliability of the moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable, and which one is the most accurate. It will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, on both new-build and renovation projects.
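The gravimetric reference method reduces to a single formula, the dry-basis moisture content; a quick sketch with illustrative masses:

```python
def moisture_content_dry_basis(mass_wet_g, mass_dry_g):
    """Gravimetric moisture content as a percentage of dry mass:
    u = (m_wet - m_dry) / m_dry * 100."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_dry_g

# Illustrative sample: a specimen weighing 115 g wet and 100 g oven-dry.
print(moisture_content_dry_basis(115.0, 100.0))  # 15.0 (% dry basis)
```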

Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing

Procedia PDF Downloads 82
1107 Transforming Educational Leadership With Innovative Administrative Strategies

Authors: Kofi Nkonkonya Mpuangnan, Samantha Govender, Hlengiwe Romualda Mhlongo

Abstract:

Educational leaders are skilled architects crafting a vibrant environment where growth, creativity, and adaptability can flourish within schools. Their journey is one of transformation, urging them to explore administrative strategies that align seamlessly with evolving educational models and cater to the specific needs of students, educators, and stakeholders. Through this committed effort to innovate, they seek to enhance the effectiveness and influence of educational systems, paving the way for a more inclusive and forward-thinking educational environment. In this context, the authors explored the concept of transforming educational leadership with administrative strategies in alignment with the following research objectives: to find the strategies that can be adopted by transformational leaders to promote effective administrative practices in an educational setting, and to explore the roles of educational leaders in promoting collaboration in education. To answer these questions, a systematic literature review underpinned by the transformational leadership model was adopted. Concepts were therefore integrated from a variety of outlets, including academic journals, conference proceedings, and reports found within the SCOPUS, WoS, and IBSS databases. The search was aided by specific themes like innovative administrative practices, the roles of educational leaders, and interdisciplinary approaches to administrative practices. The search process adhered to a five-step framework, which included the inclusion and exclusion of studies. It was found that transformational leadership, agile methodologies, employee wellbeing, and seminars and workshops could foster a culture of innovation and creativity among teachers and staff and transform administrative practices in educational settings. 
It was recommended that professional development programs be organized periodically for educational leaders to help them revitalize their knowledge and skills in educational administration.

Keywords: educational leadership, innovative strategies, administrative practices, professional development, stakeholder engagement, student outcomes

Procedia PDF Downloads 52
1106 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been demonstrated in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during both saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary-condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. 
To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the online Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the edge of the urban water stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
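The van Genuchten-Mualem parametrization discussed above relates pressure head to water content and hydraulic conductivity through the shape parameters α and n and the saturated conductivity. A minimal Python sketch of the standard closed forms follows; the function names and sample parameter values are illustrative, not taken from the study:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) from the van Genuchten retention curve.

    h: pressure head (h <= 0 in the unsaturated zone);
    alpha, n: shape parameters; theta_r, theta_s: residual/saturated contents.
    """
    m = 1.0 - 1.0 / n
    Se = (1.0 + abs(alpha * h) ** n) ** (-m)  # effective saturation in [0, 1]
    return theta_r + (theta_s - theta_r) * Se

def mualem_K(h, K_s, alpha, n, l=0.5):
    """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + abs(alpha * h) ** n) ** (-m)
    return K_s * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

Applying automatic differentiation to code of this kind with respect to α, n and K_s yields exactly the sensitivities of the computed heads that the abstract discusses.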

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 132
1105 Privacy Paradox and the Internet of Medical Things

Authors: Isabell Koinig, Sandra Diehl

Abstract:

In recent years, the health-care context has not been left unaffected by technological developments. The Internet of Medical Things (IoMT) has not only led to a collaboration between disease management and advanced care coordination but also to more personalized health care and patient empowerment. With more than 40% of all health technology expected to be IoMT-related by 2020, questions regarding privacy have become more prevalent, even more so during COVID-19, when apps allowing for intensive tracking of people’s whereabouts and their personal contacts caused privacy advocates to protest. There is a widespread tendency that even though users may express concerns and fears about their privacy, they behave in a manner that appears to contradict their statements by disclosing personal data. In the literature, this phenomenon is discussed as the privacy paradox. While there are some studies investigating the privacy paradox in general, there is only scarce research related to the privacy paradox in the health sector and, to the authors’ knowledge, no empirical study investigating young people’s attitudes toward data security when using wearables and health apps. The empirical study presented in this paper tries to reduce this research gap by focusing on the area of digital and mobile health. It sets out to investigate the degree of importance individuals attribute to protecting their privacy, as well as their individual privacy protection strategies. Moreover, it tests the degree to which individuals between the ages of 20 and 30 are willing to grant commercial parties access to their private data in order to use digital health services and apps. To answer this research question, results from 6 focus groups with 40 participants will be presented. The focus was put on this age segment because it has grown up in a digitally immersed environment. 
Moreover, it is particularly the young generation that is not only interested in health and fitness but also already uses health-supporting apps or gadgets. Approximately one-third of the study participants were students. Subjects were recruited in August and September 2019 by two trained researchers via email and were offered an incentive for their participation. Overall, results indicate that the young generation is well informed about the growing data collection and is quite critical of it; moreover, they possess knowledge of the potential side effects associated with this data collection. Most respondents indicated that they handle their data cautiously and consider privacy highly relevant, utilizing a number of protective strategies to ensure the confidentiality of their information. Their willingness to share information in exchange for services was only moderately pronounced, particularly in the health context, since health data were seen as valuable and sensitive. The majority of respondents indicated that they would rather miss out on digital and mobile health offerings than give up their privacy. While this behavior might be an unintended consequence, it is an important piece of information for app developers and medical providers, who must find a way to build a user base for their products against the background of rising user privacy concerns.

Keywords: digital health, privacy, privacy paradox, IoMT

Procedia PDF Downloads 113
1104 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in the fields of economics and business management to visualize and work with data following a normal distribution. Since technology is now interconnected with education, it is important to teach statistics topics to undergraduate students in the context of tools such as Python, R-Studio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. To this end, two groups of undergraduate students (from the Business Management program) were compared in this research study: one group underwent Excel-based instruction, while the other relied only on traditional teaching methods. We analyzed experimental data and BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports learners in comprehending statistical concepts more effectively compared with the group taught with the traditional method. In addition, students receiving Excel-based instruction showed greater ability in visualizing and interpreting data concentrated on a normal distribution.
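Since the abstract names Python alongside R-Studio and Excel as teaching contexts, the normal-distribution properties in question can also be sketched programmatically. A minimal, self-contained Python illustration of the bell curve and the empirical 68-95-99.7 rule follows (function names are illustrative, not from the study):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal (bell-shaped) distribution N(mu, sigma^2)."""
    coef = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coef * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def within_k_sigma(k, steps=100000, mu=0.0, sigma=1.0):
    """Share of observations within k standard deviations of the mean,
    obtained by midpoint-rule integration of the pdf over [mu - k*sigma, mu + k*sigma]."""
    a, b = mu - k * sigma, mu + k * sigma
    dx = (b - a) / steps
    return sum(normal_pdf(a + (i + 0.5) * dx, mu, sigma) for i in range(steps)) * dx
```

Evaluating within_k_sigma(1) and within_k_sigma(2) recovers the familiar ~68% and ~95% shares that students typically verify graphically in Excel.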

Keywords: statistics, Excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 33
1103 Consensus, Federalism and Inter-State Water Disputes in India

Authors: Amrisha Pandey

Abstract:

The Indian Constitution distributes the powers to govern and legislate between the centre and the state governments based on the lists of subject matters provided in the Seventh Schedule. Under that schedule, the states are authorized to regulate the water resources within their territory, whereas the centre/union government is authorized to regulate inter-state water disputes. The powers entrusted to the union government mainly deal with the sharing of river water that flows through the territory of two or more states. For that purpose, Article 262 of the Constitution of India empowers Parliament to resolve any such inter-state river water dispute. Accordingly, Parliament has enacted the Inter-State River Water Disputes Act, which allows the central/union government to constitute tribunals for the adjudication of such disputes and expressly bars the jurisdiction of the judiciary in the matter concerned. This arrangement was intended to resolve disputes by political or diplomatic means, without deliberately interfering with the sovereign power of the states to govern the water resource. The situation in the present context is complicated and sensitive due to changing climatic conditions, increasing demand for the limited resource, and an advanced understanding of the freshwater cycle that is missing from the existing legal regime. The obsolete legal and political tools, the existing legislative mechanism, and the institutional units do not seem able to accommodate the rising challenge of regulating the resource, resulting in the growing politicization of inter-state water disputes. Against this background, this paper will investigate inter-state river water disputes in India and will critically analyze the ability of the existing constitutional and institutional units involved in the task. 
Moreover, the competence of the tribunal as the adjudicating body in the present context will be analyzed using the long-running Cauvery Water Dispute as a case study. To conduct this task, a doctrinal research methodology is adopted. The disputes will also be investigated through the lens of sovereignty, which is accorded to the states as federal units of governance under the theory of separation of powers and the grant of internal sovereignty. The issue of sovereignty is discussed in this paper in two ways: 1) as the responsibility of the state to govern the resource; and 2) as the obligation of the state to govern the resource, arising from its sovereign power. Furthermore, a duality of sovereign power runs through this analysis: the overall sovereign authority of the nation-state, and the internal sovereignty of the states as its federal units of governance. As a result, this investigation will propose institutional, legislative and judicial reforms. Additionally, it will suggest certain amendments to the existing constitutional provisions in order to avoid contradictions in their scope and meaning in the light of advanced hydrological understanding.

Keywords: constitution of India, federalism, inter-state river water dispute tribunal of India, sovereignty

Procedia PDF Downloads 128
1102 Kirchhoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique

Authors: Abhilash Sahu, Amit Priyadarshi

Abstract:

In this paper, we will discuss the existence of weak solutions of a Kirchhoff-type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is the intrinsic boundary of the Sierpinski gasket. M: R → R is a positive function and h: S × R → R is a suitable function that forms part of our main equation. ∆p denotes the p-Laplacian, where p > 1. First, we define a weak solution for our problem and then show the existence of at least two solutions for the above problem under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain; only recently has the notion of differential operators such as the Laplacian and the p-Laplacian on fractal domains been defined. We recall this result first and then address the above problem. In the literature, Laplacian and p-Laplacian equations have been studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, Laplacian equations have been studied more than p-Laplacian ones, probably because in that case the corresponding function space is reflexive and many minimax theorems that work for regular domains are applicable there, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems in elastic fractal media, and fluid flow through fractal regions. We have studied the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Fréchet or Gâteaux differentiable, so a critical point becomes a solution to the problem. 
Moreover, the function space they consider is reflexive, and hence a weakly convergent subsequence can be extracted from any bounded sequence. In our case, however, the Euler functional is not differentiable, nor is the function space known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
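In the notation of the abstract (M, h, ∆p, S, S₀), a Kirchhoff-type p-Laplacian boundary value problem of the kind described typically takes the following generic form; this statement is supplied for illustration only and may differ in detail from the authors' exact equation:

```latex
\begin{cases}
-M\!\left(\displaystyle\int_{S} |\nabla u|^{p}\, d\mu\right) \Delta_{p} u(x) = h(x, u(x)), & x \in S \setminus S_{0},\\[4pt]
u = 0 & \text{on } S_{0},
\end{cases}
```

where μ is the natural self-similar measure on the gasket; a weak solution is then defined by pairing the equation with test functions in the associated p-energy domain.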

Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution

Procedia PDF Downloads 213