Search results for: critical spatial practice
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10959

1419 Turkey at the End of the Second Decade of the 21st Century: A Secular or Religious Country?

Authors: Francesco Pisano

Abstract:

Islam has been an important topic in Turkey’s institutional identity. Since the dawn of the Turkish Republic at the end of the First World War, the new Turkish leadership has had to deal with the religious heritage of the Sultanate. Mustafa Kemal Atatürk, Turkey’s first President, led the country through a process of internal change, substantially modifying not only its democratic stance but also the way politics addressed the Muslim faith. Islam was banned from the public sector of society and drastically marginalized to the private sphere of citizens’ lives. Headscarves were banned from institutional buildings, together with any other religious practice, while the country proceeded down a path of secularism and Westernization. The depth of this shift is illustrated by the fact that even a newly elected Prime Minister, Recep Tayyip Erdoğan, was initially barred from taking office because of allegations that he had read a religious text while campaigning. Over the years, thanks to this initial internal shift, Western partners often regarded Turkey as one of the few countries to have struck a workable balance between a democratic stance and an inherently Islamic character. In the early 2000s, this led many academics to believe that Ankara could eventually become the next European capital. Since then, the internal and external landscape of Turkey has changed drastically. Today, religion has once again become an important point of reference for Turkish politics, considering also the failure of the European negotiations and the country's increasingly unstable external environment. This paper addresses this issue by examining the important role religion has played in Turkish society and the way it has been politicized since the early years of the Republic.
It will evolve from a more theoretical debate on secularism and the political Westernization of Turkey under Atatürk’s rule to a more practical analysis of today’s situation, passing through the failure of Ankara’s accession to the EU and the current tense political relations with its traditional NATO allies. The final objective of this research is therefore not to offer a definitive opinion on Turkey’s current international stance; that judgment is left entirely to the reader. Rather, it supplements the existing literature with a comprehensive and more structured analysis of the role Islam has played in Turkish politics from the early 1920s up until the domestic political revolution of the early 2000s, after the first electoral win of the Justice and Development Party (AKP).

Keywords: democracy, Islam, Mustafa Kemal Atatürk, Recep Tayyip Erdoğan, Turkey

Procedia PDF Downloads 186
1418 Efficacy of Ergonomics Ankle Support on Squatting Pushing Skills during the Second Stage of Labor

Authors: Yu-Ching Lin, Meei-Ling Gau, Ghi-Hwei Kao, Hung-Chang Lee

Abstract:

Objective: To compare the pushing experiences and birth outcomes of three pushing positions during the second stage of labor: semi-recumbent, squatting, and squatting with the aid of ergonomically designed ankle supports. Methods: A randomized controlled trial was conducted at a regional teaching hospital in northern Taiwan. Data were collected from 168 primiparous women between their 38th and 42nd gestational weeks. None of the participants received epidural analgesia during labor, and all were free of pregnancy- and labor-related complications. Intervention: During labor, after full cervical dilation and once the fetal head had descended to at least the +1 station and turned to the occiput anterior position, the experimental group was asked to push in the squatting position while wearing the ergonomically designed ankle supports; comparison group A was asked to push in the squatting position without these supports; and comparison group B was asked to push in a standard semi-recumbent position. Measures: The participants completed a demographic and obstetrics datasheet, the Short Form McGill Pain Questionnaire (MPQ-SF), and the Labor Pushing Experience scale within 4 hours postpartum. Results: The time from the start of pushing to crowning averaged 25.52 minutes less for the experimental group (squatting with ankle supports) than for comparison group B (semi-recumbent) (F = 6.02, p < .05), and the time from the start of pushing to infant birth averaged 25.21 minutes less (F = 6.14, p < .05). The experimental group also had a lower average VAS pain score (5.05 ± 3.22) than comparison group B, and its average McGill pain score was lower than in both comparison groups (F = 18.12, p < .001). Participants who delivered from a squatting position with ankle supports likewise reported better labor pushing experiences than their peers in the comparison groups. Conclusion: In comparison to both unsupported squatting and semi-recumbent pushing, squatting with the aid of ergonomically designed ankle supports reduced pushing times, reduced labor pain, and improved the pushing experience. Clinical application and suggestion: The squatting-with-ankle-supports intervention introduced in the present study may significantly reduce tiredness and difficulty in maintaining balance, as well as increase pushing efficiency, thereby reducing the caring needs of women during the second stage of labor. This intervention may be introduced in midwifery education programs and in clinical practice as a method to improve the care of women during the second stage of labor.

Keywords: second stage of labor, pushing, squatting with ankle supports, squatting

Procedia PDF Downloads 261
1417 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade

Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma

Abstract:

Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes, followed by quenching, and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with M23C6 carbides (M = Fe, Cr) as the main precipitates, alongside carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. Ductility is largely determined by the tempered martensite matrix, while the spatial and size distributions of the precipitates and martensite crystals govern resistance to mechanical degradation and thus play a key role in the high-temperature properties of the steel. Unfortunately, the high-temperature response of EUROFER97 is currently insufficient for long-term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates during prolonged high-temperature exposure. The objective of this study is to induce grain refinement through appropriate modifications of the processing route in order to increase the high-temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat-treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM).
Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with smallest precipitates obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.

Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments

Procedia PDF Downloads 280
1416 Particle Size Characteristics of Aerosol Jets Produced by A Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to support smoking cessation due to their ability to deliver nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements through tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Notably, low actuation power is beneficial in aerosol-generating devices since it reduces the emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low relative to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, the particle size behavior of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a latest fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). In addition, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors’ best knowledge, applications of PDA to e-cigarette aerosol measurement are rarely reported.
In the present study, preliminary particle-count results for undiluted aerosols measured by the time-of-flight method showed that increasing the heating power from 3.5 W to 6.5 W enhanced the asymmetry of the PSD, which deviated from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes induced in the aerosols by the higher heating power. A novel mathematical expression combining exponential, Gaussian and polynomial (EGP) distributions was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
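The count median aerodynamic diameter and geometric standard deviation quoted above can be illustrated with a minimal sketch: for a count-based sample that is approximately log-normal, the count median diameter equals the geometric mean of the diameters, and the geometric standard deviation is the exponential of the standard deviation of the log-diameters. The diameter values below are invented for illustration, not the study's data.

```python
import math

def lognormal_stats(diameters):
    """Geometric mean (the count median diameter for a log-normal
    count distribution) and geometric standard deviation of a list
    of particle diameters (um)."""
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))

# Illustrative diameters (um), loosely in the range reported above
sample = [0.5, 0.6, 0.7, 0.7, 0.8, 0.9]
cmd, gsd = lognormal_stats(sample)
```

A perfectly monodisperse aerosol would give a geometric standard deviation of exactly 1; values around 1.3 to 1.4, as reported in the abstract, indicate a moderately polydisperse aerosol.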

Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 26
1415 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach

Authors: H. M. Ferreira, H. Cockings, D. F. Gordon

Abstract:

Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. It is therefore imperative to evaluate how defects (e.g. inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen-induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen-induced failure. The evaluation combines several techniques: hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018), together with AFGROW modelling and microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments.
A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings with equivalent hydrogen concentrations and effective diffusivity coefficients, not only in the base material but also in the heat-affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, an FCG modelling software package that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results of this research will make it possible to determine whether the crack growth rate of API 5L X65 accelerates under the influence of a hydrogen-rich environment, an important aspect that needs to be reflected in standards and codes of practice on pipeline integrity evaluation and maintenance.
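FCG models of the kind AFGROW implements integrate a crack-growth law over load cycles. A minimal sketch using the classic Paris law is given below; the coefficients C and m, the geometry factor Y, and the crack sizes are illustrative placeholders, not measured API 5L X65 values.

```python
import math

def cycles_to_grow(a0, af, dsigma, C=1e-11, m=3.0, Y=1.0, da=1e-5):
    """Numerically integrate Paris' law da/dN = C * (dK)^m, where the
    stress intensity factor range is dK = Y * dsigma * sqrt(pi * a).
    Units: crack length a in m, stress range dsigma in MPa, dK in
    MPa*sqrt(m). C, m, Y are illustrative placeholders."""
    a, N = a0, 0.0
    while a < af:
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)  # cycles needed to grow the crack by da
        a += da
    return N

# Grow a 1 mm crack to 10 mm under a 100 MPa stress range
N = cycles_to_grow(a0=1e-3, af=1e-2, dsigma=100.0)
```

Because the exponent m is typically around 3 for steels, halving the applied stress range increases the predicted life by roughly a factor of eight; hydrogen-assisted cracking is often modelled as an increase in the effective C or m.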

Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel

Procedia PDF Downloads 85
1414 Alternative Islamic Finance Channels and Instruments: An Evaluation of the Potential and Considerations in Light of Sharia Principles

Authors: Tanvir A. Uddin, Blake Goud

Abstract:

Emerging trends in FinTech-enabled alternative finance, which includes channels and instruments emerging outside the traditional financial system, herald unprecedented opportunities to improve financial intermediation and increase access to finance. With widespread criticism of the mainstream Islamic banking and finance sector as either mimicking the conventional system, failing to achieve inclusive growth, or both, industry stakeholders are turning to technology to show that finance can be done differently. This paper will outline the critical elements for the successful deployment of technology to maximize benefit and minimize the potential for harm from the introduction of Islamic FinTech, and will propose recommendations for Islamic financial institutions, FinTech companies, regulators and other stakeholders who are integrating, or considering introducing, FinTech solutions. The paper will present an overview of the literature, present relevant case studies and summarize the lessons from interviews conducted with Islamic FinTech founders from around the world. With growing central bank concerns about leveraged loans and ballooning private credit markets globally (estimated at $1.4 trillion), current and future Islamic FinTech operators are at risk of contributing to the problems they aim to solve by operating in a 'shadow banking' system. The paper will show that by systematising a robust theory of change linked to positive outcomes, utilising objective impact frameworks (e.g., the Impact Measurement Project) and instilling a risk management culture that is proactive about potential social harm (e.g., irresponsible lending), FinTech can enable the Islamic finance industry to support positive social impact and minimize harm in support of the maqasid.
The adoption of FinTech within the Islamic finance context is still at a nascent stage and the recommendations we provide based on the limited experience to date will help address some of the major cross-cutting issues related to FinTech. Further research will be needed to elucidate in more detail issues relating to individual sectors and countries within the broader global Islamic finance industry.

Keywords: alternative finance, FinTech, Islamic finance, maqasid, theory of change

Procedia PDF Downloads 118
1413 Sensitivity and Commitment: A View on Parenthood in a Context of Placement Trajectory

Authors: A. De Serres-Lafontaine, S. Porlier, K. Poitras

Abstract:

Introduction: Placement is without doubt a challenging experience both for foster children and for biological parents who witness their child being removed from their care. Yet few studies have examined parenting in this context through critical parental skills such as parental sensitivity and commitment. Sensitivity is described as the capacity of parents to respond accurately to their child’s needs in a warm, predictable and consistent way, whereas commitment is the ability of the parent to become involved physically and emotionally in an enduring relationship with the child. Research confirms the important role of parental sensitivity and commitment in child development following placement in foster care. Nevertheless, these studies were mainly conducted with foster parents, and few have examined these components of parenthood among biological parents. Method: This study proceeds in two phases. In the first phase, 17 parents took part in a 90-minute interview, which collected information on the sociodemographic situation, contacts and placement trajectory; parental sensitivity was observed during a supervised parent-child contact. The second phase occurred one to two years later and involved a 90-minute at-home interview in which we updated the information from the first interview and assessed the level of parental commitment. In this ongoing part of the study, five parents have already participated; the remainder will be interviewed in the coming months, from October through December 2018. Results: Descriptive analysis from the first phase suggests the examination of two groups: 11 children have been reunified, whereas six are still in foster care. Qualitative analysis allows themes of sensitivity and commitment to be compared according to whether or not a reunification project occurs.
Preliminary analysis of the thematic content shows key components of parental commitment in parents' accounts of how they nurture a relationship with their child. Furthermore, preliminary analysis suggests that parental sensitivity is not associated with family reunification (r = 0.11, p = 0.74). Further analyses will be conducted with the data from the second phase of the study to examine the potential association between commitment and reunification. Discussion: Parental sensitivity and commitment are fundamental to the well-being of the child in a placement trajectory. They need to be better understood as two distinct, complex concepts and as two parenting skills that may echo one another when engaged in a specific context. Above all, a more accurate understanding of parenting in a placement trajectory supports adequate intervention practices for birth parents and could change the way parental adequacy is assessed when reunification is considered.

Keywords: child welfare, foster care, intervention practices, parenthood

Procedia PDF Downloads 163
1412 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners facing increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with Black Swan’s patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
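Spectral vegetation index maps of the kind described above are typically built from per-pixel band arithmetic on the multispectral imagery. The sketch below shows the widely used normalized difference vegetation index (NDVI) as a generic illustration; it is not FireWatch's proprietary algorithm, and the reflectance values are invented.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index per pixel:
    (NIR - red) / (NIR + red), bounded in [-1, 1]; higher values
    indicate denser green vegetation. Inputs are reflectances (0-1)."""
    return [(n - r) / (n + r) if (n + r) else 0.0
            for n, r in zip(nir, red)]

# Illustrative reflectances: vegetation, bare soil, water-like pixel
nir = [0.50, 0.30, 0.05]
red = [0.08, 0.25, 0.10]
values = ndvi(nir, red)
```

Thresholding such an index map (e.g. flagging pixels above some vegetation threshold within a buffer around each structure) is one common way to turn band arithmetic into a defensible-space screening layer.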

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 93
1411 Analyzing the Causes of Amblyopia among Patients in Tertiary Care Center: Retrospective Study in King Faisal Specialist Hospital and Research Center

Authors: Hebah M. Musalem, Jeylan El-Mansoury, Lin M. Tuleimat, Selwa Alhazza, Abdul-Aziz A. Al Zoba

Abstract:

Background: Amblyopia is a condition that affects the visual system, triggering a decrease in visual acuity without a known underlying pathology. It is due to abnormal visual development in infancy or childhood. Most importantly, vision loss is preventable or reversible with the right intervention in most cases. Strabismus, sensory defects, and anisometropia are all well-known causes of amblyopia; however, the ocular misalignment of strabismus is considered the most common form of amblyopia worldwide. The risk of developing amblyopia increases in children who are premature, developmentally delayed, or affected by brain lesions involving the visual pathway. The prevalence of amblyopia varies between 2 and 5% worldwide according to the literature. Objective: To determine the different causes of amblyopia in pediatric patients seen in the ophthalmology clinic of a tertiary care center, King Faisal Specialist Hospital and Research Center (KFSH&RC). Methods: This is a hospital-based retrospective study based on a random review of patients’ files in the Ophthalmology Department of KFSH&RC in Riyadh, Kingdom of Saudi Arabia. Inclusion criteria: amblyopic pediatric patients between 6 months and 18 years of age who attended the clinic from 2015 to 2016. Exclusion criteria: patients above 18 years of age and any patient too uncooperative for an accurate visual acuity measurement or a proper refraction. Detailed ocular and medical histories were recorded. The examination protocol included a full ocular exam, full cycloplegic refraction, visual acuity measurement, and ocular motility and strabismus evaluation. All data were organized in tables and graphs and analyzed by a statistician. Results: Our preliminary results will be presented by the corresponding author. Conclusions: In this study, we focused on utilizing various examination techniques, which enhanced our results and highlighted a clear correlation between amblyopia and its causes.
This paper recommends critical testing protocols to be followed for amblyopic patients, especially in tertiary care centers.

Keywords: amblyopia, amblyopia causes, amblyopia diagnostic criterion, amblyopia prevalence, Saudi Arabia

Procedia PDF Downloads 139
1410 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environmental changes calls for robust adaptive beamforming techniques. A linearly constrained minimum-variance design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). It is therefore worth developing robust techniques to deal with the problems caused by local scattering environments. As for the implementation of adaptive beamforming, the required computational complexity is enormous when the beamformer is equipped with a massive antenna array. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer that is robust against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created.
Projection matrices related to this matrix are then generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can easily be found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
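As background to the steering-vector machinery above, the sketch below shows how a presumed steering vector sets the quiescent response of a uniform linear array: the array gain is unity toward the presumed direction and attenuated elsewhere, which is why a DOA mismatch degrades the SOI. This is a minimal illustration, not the paper's proposed estimator; the geometry (8 elements, half-wavelength spacing) and the angles are illustrative assumptions.

```python
import cmath
import math

def steering_vector(theta_deg, n_elems, d_over_lambda=0.5):
    """Steering vector of a uniform linear array: element n sees a
    phase of -2*pi*(d/lambda)*n*sin(theta) for a plane wave from
    angle theta off broadside."""
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d_over_lambda * n * math.sin(th))
            for n in range(n_elems)]

def quiescent_response(look_deg, src_deg, n_elems=8):
    """Array gain toward src_deg when the quiescent weights w = a/N
    are matched to the presumed look direction look_deg."""
    a = steering_vector(look_deg, n_elems)
    s = steering_vector(src_deg, n_elems)
    return abs(sum(w.conjugate() * x for w, x in zip(a, s))) / n_elems

on_target = quiescent_response(20.0, 20.0)   # matched: unity gain
off_target = quiescent_response(20.0, 60.0)  # mismatched: attenuated
```

A full GSC splits the weight vector into this quiescent part plus an adaptive part constrained by a blocking matrix orthogonal to the presumed steering vector; when the presumed and actual directions differ, the blocking matrix no longer nulls the SOI, which is the mismatch problem the paper addresses.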

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 94
1409 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks, which aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying the adversarial attacks and defenses on DNNs for image classification. There are two types of adversarial attacks studied which are fast gradient sign method (FGSM) attack and projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move over the decision boundary, causing the DNN to misclassify the image. FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradients to cross the decision boundary. PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the target attack. This adversarial attack is designed to make the machine classify an image to a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from the adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. 
If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. Overall, PGD attacks and defenses are significantly more effective than their FGSM counterparts.
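The two attack rules described above have a closed form for models whose input gradient is easy to write down. The following sketch is not the paper's MNIST setup: it applies FGSM and PGD, under the binary cross-entropy loss, to a toy logistic "network" in NumPy, with all weights and data points invented for illustration.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: move x by eps along the sign of the loss gradient
    w.r.t. the input (binary cross-entropy over a logistic model)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                    # d(BCE)/dx for a linear logit
    return x + eps * np.sign(grad_x)

def pgd_attack(x, y, w, b, eps, alpha=0.1, steps=20):
    """PGD: repeated small FGSM steps, each projected back into the
    eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm_attack(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Toy point correctly classified as class 1 (logit 0.4 > 0)
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.2, -0.2]), 1.0
x_fgsm = fgsm_attack(x, y, w, b, eps=0.5)  # logit becomes -0.6: misclassified
x_pgd = pgd_attack(x, y, w, b, eps=0.5)    # same ball, reached by small steps
```

For y = 1 the input gradient is (p − 1)·w, so both attacks push the point against the weight vector until the logit changes sign; adversarial (FGSM or PGD) training would regenerate such points at every iteration and include them in the training batch.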

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 176
1408 Comparison of Early Post-operative Outcomes of Cardiac Surgery Patients Who Have Had Blood Transfusion Based on a Fixed Cut-off Point versus Percentage Change in Baseline Hematocrit Levels

Authors: Khosro Barkhordari, Fateme Sadr, Mina Pashang

Abstract:

Background: Blood transfusion is one of the major issues in cardiac surgery patients. Transfusing patients based on fixed cut-off points of hemoglobin is the current protocol in most institutions; a hemoglobin level of 7-10 g/dL has been suggested as the transfusion threshold for cardiac surgery patients. We aimed to evaluate whether blood transfusion based on the percentage change in hematocrit yields different outcomes. Methods: In this retrospective cohort study, we investigated the early postoperative outcomes of cardiac surgery patients who received blood transfusions at Tehran Heart Center Hospital, Iran (Department of Anesthesiology and Critical Care, Tehran Heart Center, Tehran University of Medical Sciences, Tehran, Iran; Department of Research, Tehran Heart Center, Tehran, Iran). We reviewed and analyzed the baseline characteristics and clinical data of 700 patients who met our inclusion and exclusion criteria. Two groups of transfused patients were compared: those with a 30-50 percent decrease in baseline hematocrit versus those with a 10-29 percent decrease. Quantitative variables were compared using the Student t-test or the Mann-Whitney U test, as appropriate, while categorical variables were compared using the χ2 or the Fisher exact test, as required. The early postoperative outcomes compared between the two groups were 30-day mortality, length of ICU stay, length of hospital stay, intubation time, infection rate, acute kidney injury, and respiratory complications. The main goal was to determine whether transfusing blood based on the change in hematocrit from a baseline level yields better early postoperative outcomes than a fixed cut-off point. Results: This is an ongoing study; the results will be completed within two weeks, after analysis of the data. Conclusion: Early analysis has shown no difference in early postoperative outcomes between the two groups, but the final analysis will be completed in two weeks. This question has not been studied enough and may require randomized controlled trials.
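The grouping described above can be sketched as a small helper; the function name and the exact boundary handling are assumptions for illustration, not the authors' protocol.

```python
def hct_group(baseline_hct, current_hct):
    """Assign a transfused patient to a comparison group by the
    percentage decrease from baseline hematocrit."""
    drop_pct = 100.0 * (baseline_hct - current_hct) / baseline_hct
    if 30 <= drop_pct <= 50:
        return "30-50% decrease"
    if 10 <= drop_pct < 30:
        return "10-29% decrease"
    return None  # outside the study's two comparison groups

# e.g. baseline 40%, current 26% -> a 35% drop -> the first group
```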

Keywords: post-operative, cardiac surgery, outcomes, blood transfusion

Procedia PDF Downloads 70
1407 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing

Authors: Maria Ahmad, Michael Page, Danë Goodsman

Abstract:

‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London’s Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and paradoxically benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including a mismatch of participant and facilitator agendas, performance pressure, and time. 
Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.

Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course

Procedia PDF Downloads 195
1406 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education

Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen

Abstract:

This article aims to create awareness of the opportunities and issues the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI's influence is inevitable, we took a mixed-methods approach combining participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human-language text from prompts and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education. 
We have observed the materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning, but they have led us to redevelop our institutional policy around plagiarism and academic integrity.

Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct

Procedia PDF Downloads 73
1405 Shaping Students’ Futures: Evaluating Professors’ Effectiveness as Academic Advisors in Postsecondary Institutions

Authors: Mohamad Musa, Khaldoun Aldiabat, Chelsea McLellan

Abstract:

In higher education, academic advising and counseling are pivotal for guiding students towards successful academic and professional trajectories. Within this landscape, professors play a critical role as academic advisors, offering guidance and support to students navigating their educational journey. This study endeavors to delve into the effectiveness of professors in this capacity through a comprehensive quantitative survey. Amidst the research objectives lies a profound exploration of students' perceptions regarding professors' effectiveness as academic advisors. Additionally, the study aims to elucidate the nuanced strengths and limitations inherent in professors' advisory roles. Through meticulous examination, the research seeks to uncover the profound impact of professors' engagement on student academic accomplishments and contentment. Moreover, it will scrutinize the requisite qualifications, training, and support mechanisms necessary for professors to excel in advisory roles. Utilizing a quantitative survey methodology, this research will gather invaluable insights into students' perspectives on professors' advisory competencies. Rigorous statistical analysis of survey responses will illuminate the efficacy of professors as academic advisors. The survey instrument will intricately measure diverse dimensions such as students' satisfaction levels with advisory sessions, the perceived efficacy of advice rendered by professors, and the holistic influence of professors' involvement on academic triumphs. Anticipated outcomes encompass a meticulous quantitative evaluation of professors' efficacy in academic advisory roles. Moreover, the research endeavors to delineate areas of proficiency and areas necessitating refinement within professors' advisory practices. 
Through these efforts, the study aims to provide valuable insights that can inform strategies for enhancing professors' advisory practices and optimizing the support systems available to students in higher education institutions. The study seeks to go beyond surface-level evaluations by delving into the intricate relationship between professors' involvement in academic advising and student academic outcomes. By unraveling this complex interplay, the research endeavors to shed light on the mechanisms through which professors' guidance impacts students' academic success, satisfaction, and overall educational experience.

Keywords: academic advising, professors, effectiveness, quantitative survey, student outcomes

Procedia PDF Downloads 27
1404 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker—the inventory—should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. 
The resulting mechanism is both truthful and Pareto optimal. Thus, randomization over the possible priority orderings of the projects yields a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.
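As a sketch of the serial-dictatorship idea underlying the mechanism (not the paper's polynomial-time material allocation algorithm), agents in a fixed priority order each take their most-preferred remaining item; since an agent's report affects only its own pick, truthful reporting is a dominant strategy. The project names and preference lists below are invented for illustration.

```python
def serial_dictatorship(priority, preferences, items):
    """Each agent, in priority order, takes its most-preferred item that
    is still available. Truthful: an agent's reported ranking influences
    only its own pick, never the set of items offered to it."""
    available = set(items)
    allocation = {}
    for agent in priority:
        for item in preferences[agent]:      # ranked best-first
            if item in available:
                allocation[agent] = item
                available.remove(item)
                break
    return allocation

# Three projects competing for three material lots
prefs = {"P1": ["lotA", "lotB", "lotC"],
         "P2": ["lotA", "lotC", "lotB"],
         "P3": ["lotB", "lotA", "lotC"]}
alloc = serial_dictatorship(["P1", "P2", "P3"], prefs, ["lotA", "lotB", "lotC"])
# P1 takes lotA; P2's favorite is gone, so it takes lotC; P3 takes lotB
```

Randomizing the `priority` list over all orderings gives the universally truthful randomized variant discussed in the abstract.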

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 315
1403 The Potential of Role Models in Enhancing Smokers' Readiness to Change (Decision to Quit Smoking): A Case Study of Saudi National Anti-Smoking Campaign

Authors: Ghada M. AlSwayied, Anas N. AlHumaid

Abstract:

Smoking has been linked to thousands of deaths worldwide. Around three million adults continue to use tobacco each day in Saudi Arabia, a sign that smoking is prevalent among the Saudi population and is clearly a public health threat. Although awareness campaigns against smoking run continuously, smoking remains a common practice, especially among young adults across the world. It was therefore essential to ask what motivates smokers to think about quitting. Can a graphic, emotional ad focusing on health consequences really make a difference? A case study was conducted on the Annual Anti-Smoking National Campaign run by the Saudi Ministry of Health in May 2017, assessing the campaign's effects on the number of calls, the number of clinic visits, and online access to health messages during and after the campaign period from May to August, compared with the previous campaign in 2016. An educational video was selected as the primary tool to deliver the anti-smoking health message. The Minister of Health, who acts as a role model for young adults, delivered a direct message to smokers while avoiding the use of smoking cues. Citing the serious consequences of smoking, the Minister of Health announced the cancellation of the media campaign and the redirection of its budget to smoking cessation clinics. Positive responses and interactions with the campaign were remarkable, achieving high rates of recall and recognition. During the campaign, the number of calls to book a visit reached 45,880, the total online views ran to 1,253,879, and clinic visits rose by a cumulative 213 percent. Notably, a total of 15,192 patients visited the clinics over three months, compared with merely 4,850 patients during the previous year's campaign period. Furthermore, around half of the patients who visited the clinics were aged 26 to 40. There was great progress in enhancing public awareness of 'where to go' for help in making a quit attempt. With regard to the stages-of-change theory, it was predicted that the direct-message technique would increase the proportion of patients in the contemplation and preparation stages. No process evaluation was obtained to assess the implementation of the campaign's activities.
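The 213 percent figure quoted above follows directly from the reported visit counts:

```python
# Clinic visits over the three-month campaign period: 2016 vs 2017
visits_2016, visits_2017 = 4850, 15192
increase_pct = 100.0 * (visits_2017 - visits_2016) / visits_2016
# about 213 percent, matching the cumulative increase stated in the text
```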

Keywords: smoking, health promotion, role model, educational material, intervention, community health

Procedia PDF Downloads 129
1402 Application of Recycled Paper Mill Sludge on the Growth of Khaya Senegalensis and Its Effect on Soil Properties, Nutrients and Heavy Metals

Authors: A. Rosazlin Abdullah, I. Che Fauziah, K. Wan Rasidah, A. B. Rosenani

Abstract:

The paper industry plays an essential role in the global economy. A study was conducted on paper mill sludge applied to Khaya senegalensis over a 1-year planting period at the University Agriculture Park, Puchong, Selangor, Malaysia, to determine the growth of Khaya senegalensis, soil properties, and nutrient concentrations, and the effects on the status of heavy metals. Paper mill sludge (PMS) and composted recycled paper mill sludge (RPMS) were applied with different rates of nitrogen (0, 150, 300 and 600 kg ha-1) at a 1:1 ratio of recycled paper mill sludge (RPMS) to empty fruit bunch (EFB). The growth parameters were measured twice a month for 1 year, and plant nutrient and heavy metal uptake were determined. Paper mill sludge has the potential to be a supplementary N fertilizer as well as a soil amendment. The application of RPMS with N significantly contributed to improvements in plant growth parameters such as plant height (4.24 m), basal diameter (10.30 cm), and total plant biomass, and improved soil physical and chemical properties. The pH, EC, available P, and total C in soil varied among the treatments during the planting period. The treatments with raw and RPMS compost had higher pH values than those receiving inorganic fertilizer and the control. Nevertheless, no salinity problem was recorded during the planting period, and available P in soil treated with raw and RPMS compost was higher than in the control plots, reflecting the mineralization of organic P from the decomposition of pulp sludge. The weight of the free and occluded light fractions of carbon was significantly higher in the soils treated with raw and RPMS compost. The application of raw and composted RPMS gave significantly higher concentrations of heavy metals, but the total concentrations of heavy metals in the soils remained below critical values. Hence, paper mill sludge can be used successfully as a soil amendment in acidic soil without any serious threat. The use of paper mill sludge to improve soil fertility through land application represents a unique opportunity to recycle sludge back to the land and alleviate a potential waste management problem.

Keywords: growth, heavy metals, nutrients uptake, production, waste management

Procedia PDF Downloads 352
1401 Nutrition Transition in Bangladesh: Multisectoral Responsiveness of Health Systems and Innovative Measures to Mobilize Resources Are Required for Preventing This Epidemic in Making

Authors: Shusmita Khan, Shams El Arifeen, Kanta Jamil

Abstract:

Background: The nutrition transition in Bangladesh has progressed across various relevant socio-demographic contexts. For a developing country like Bangladesh, it is commonly believed that overnutrition is less prevalent than undernutrition. However, recent evidence suggests that a rapid shift is taking place in which overweight is overtaking underweight. With this rapid increase, it will be challenging for Bangladesh to achieve the global agenda on halting overweight and obesity. Methods: A secondary analysis of six successive national demographic and health surveys was performed to derive trends in undernutrition and overnutrition among women of reproductive age. In addition, relevant national policy papers were reviewed to determine the country's readiness for a whole-of-systems approach to tackling this epidemic. Results: Over the last decade, the proportion of women with low body mass index (BMI < 18.5), an indicator of undernutrition, has decreased markedly from 34% to 19%. However, the proportion of overweight women (BMI ≥ 25) increased alarmingly from 9% to 24% over the same period. If the WHO cutoff for public health action (BMI ≥ 23) is used, the proportion of overweight women increased from 17% in 2004 to 39% in 2014. The increasing rate of obesity among women is a major challenge for obstetric practice, affecting both women and fetuses. In the long term, overweight women are also at risk of future obesity, diabetes, hyperlipidemia, hypertension, and heart disease. These diseases have a serious impact on health care systems. The costs associated with overweight and obesity are both direct and indirect: direct costs include preventive, diagnostic, and treatment services related to obesity, while indirect costs relate to morbidity and mortality, including lost productivity. The Bangladesh Health Facility Survey shows that the country is not prepared to provide nutrition-related health services covering prevention, screening, management, and treatment. Therefore, if this nutrition transition is not addressed properly, Bangladesh will not be able to achieve the targets of the WHO's NCD global monitoring framework. Conclusion: Addressing this nutrition transition requires contending with 'malnutrition in all its forms' and responding with integrated approaches. Whole-of-systems action is required at all levels, from improving multi-sectoral coordination to scaling up nutrition-specific and nutrition-sensitive mainstreamed interventions, keeping the health system in mind.
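The BMI cut-offs used in the analysis can be collected into a small helper; the thresholds (BMI < 18.5 underweight, ≥ 23 WHO public-health action point, ≥ 25 overweight) come from the abstract, while the function itself is only an illustrative sketch.

```python
def bmi_category(bmi):
    """Classify a BMI value using the cut-offs cited in the abstract."""
    if bmi < 18.5:
        return "underweight"
    if bmi >= 25:
        return "overweight"
    if bmi >= 23:
        return "above WHO public health action cutoff"
    return "normal"

# e.g. bmi_category(24) falls in the WHO public-health action band
```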

Keywords: nutrition transition, Bangladesh, health system, undernutrition, overnutrition, obesity

Procedia PDF Downloads 269
1400 A Theoretical Framework of Patient Autonomy in a High-Tech Care Context

Authors: Catharina Lindberg, Cecilia Fagerstrom, Ania Willman

Abstract:

Patients in high-tech care environments are usually dependent on both formal/informal caregivers and technology, highlighting their vulnerability and challenging their autonomy. Autonomy presumes that a person has education, experience, self-discipline and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could, therefore, be considered paradoxical, as in most cases these persons have impaired physical and/or metacognitive capacity. Therefore, to understand the prerequisites for patients to experience autonomy in high-tech care environments and to support them, there is a need to enhance knowledge and understanding of the concept of patient autonomy in this care context. The development of concepts and theories in a practice discipline such as nursing helps to improve both nursing care and nursing education. Theoretical development is important when clarifying a discipline, hence, a theoretical framework could be of use to nurses in high-tech care environments to support and defend the patient’s autonomy. A meta-synthesis was performed with the intention to be interpretative and not aggregative in nature. An amalgamation was made of the results from three previous studies, carried out by members of the same research group, focusing on the phenomenon of patient autonomy from a patient perspective within a caring context. Three basic approaches to theory development: derivation, synthesis, and analysis provided an operational structure that permitted the researchers to move back and forth between these approaches during their work in developing a theoretical framework. The results from the synthesis delineated that patient autonomy in a high-tech care context is: To be in control though trust, co-determination, and transition in everyday life. The theoretical framework contains several components creating the prerequisites for patient autonomy. 
Assumptions and propositional statements that guide theory development were also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients to retain or obtain autonomy in high-tech care environments were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. This study suggests an extended knowledge base founded on theoretical reasoning about patient autonomy, providing an understanding of the strategies used by patients to achieve autonomy in the role of patient in high-tech care environments. With knowledge of the patient perspective on autonomy, the nurse/carer can avoid adopting a paternalistic or maternalistic approach. Instead, the patient can be considered a partner in care, allowing care to be provided that supports him/her in remaining or becoming an autonomous person in the role of patient.

Keywords: autonomy, caring, concept development, high-tech care, theory development

Procedia PDF Downloads 191
1399 Evaluation of Australian Open Banking Regulation: Balancing Customer Data Privacy and Innovation

Authors: Suman Podder

Abstract:

As Australian ‘Open Banking’ allows customers to share their financial data with accredited Third-Party Providers (‘TPPs’), it is necessary to evaluate whether the regulators have achieved the balance between protecting customer data privacy and promoting data-related innovation. Recognising the need to increase customers’ influence on their own data, and the benefits of data-related innovation, the Australian Government introduced ‘Consumer Data Right’ (‘CDR’) to the banking sector through Open Banking regulation. Under Open Banking, TPPs can access customers’ banking data that allows the TPPs to tailor their products and services to meet customer needs at a more competitive price. This facilitated access and use of customer data will promote innovation by providing opportunities for new products and business models to emerge and grow. However, the success of Open Banking depends on the willingness of the customers to share their data, so the regulators have augmented the protection of data by introducing new privacy safeguards to instill confidence and trust in the system. The dilemma in policymaking is that, on the one hand, lenient data privacy laws will help the flow of information, but at the risk of individuals’ loss of privacy, on the other hand, stringent laws that adequately protect privacy may dissuade innovation. Using theoretical and doctrinal methods, this paper examines whether the privacy safeguards under Open Banking will add to the compliance burden of the participating financial institutions, resulting in the undesirable effect of stifling other policy objectives such as innovation. The contribution of this research is three-fold. In the emerging field of customer data sharing, this research is one of the few academic studies on the objectives and impact of Open Banking in the Australian context. 
Additionally, Open Banking is still in the early stages of implementation, so this research traces the evolution of Open Banking through policy debates regarding the desirability of customer data-sharing. Finally, the research focuses not only on the customers’ data privacy and juxtaposes it with another important objective of promoting innovation, but it also highlights the critical issues facing the data-sharing regime. This paper argues that while it is challenging to develop a regulatory framework for protecting data privacy without impeding innovation and jeopardising yet unknown opportunities, data privacy and innovation promote different aspects of customer welfare. This paper concludes that if a regulation is appropriately designed and implemented, the benefits of data-sharing will outweigh the cost of compliance with the CDR.

Keywords: consumer data right, innovation, open banking, privacy safeguards

Procedia PDF Downloads 128
1398 A Critical Examination of the Iranian National Legal Regulation of the Ecosystem of Lake Urmia

Authors: Siavash Ostovar

Abstract:

The Iranian national Law on the Ramsar Convention (officially known as the Convention of International Wetlands and Aquatic Birds' Habitat Wetlands) was approved by the Senate and became law in 1974, after ratification by the National Council. There are other national laws aimed at the preservation of the environment in the country. However, Lake Urmia, which was declared a wetland of international importance by the Ramsar Convention in 1971 and designated a UNESCO Biosphere Reserve in 1976, is now at the brink of total disappearance, due mainly to climate change, water mismanagement, dam construction, and agricultural deficiencies. Lake Urmia is located in the north-western corner of Iran. It is the third largest salt water lake in the world and the largest lake in the Middle East. Locally, it is designated a National Park. It is, indeed, a unique lake both nationally and internationally. This study investigated how effective the national legal regulation of the ecosystem of Lake Urmia is in Iran. To do so, the Iranian national laws enforcing the Ramsar Convention in the country, comprising three nationally established laws, were investigated: (i) the five sets of laws for the programme of economic, social and cultural development of the Islamic Republic of Iran, (ii) the Iranian Penal Code, and (iii) the law of conservation, restoration and management of the country. Using black letter law methods, it was revealed that: (i) regarding the national five sets of laws, the benchmark for enforcing the legislation and policies is not set clearly. In other words, there is no clear guarantee that this legislation and these policies will be enforced in cases of deviation and violation; (ii) regarding the Penal Code, it fails to clearly define environmental crimes, determine appropriate penalties for them, implement those penalties appropriately, or establish precise monitoring and training programmes; (iii) regarding the law of conservation, restoration and management, implementation of this regulation is deferred until several categories of enactments and guidelines are prepared, announced, and approved. In fact, this study uses a national environmental catastrophe caused by the drying up of Lake Urmia as an occasion to direct attention to the weaknesses of the existing national rules and regulations. Finally, as we all depend on the natural world for our survival, this study recommends further research on every environmental issue, including Lake Urmia.

Keywords: conservation, environmental law, Lake Urmia, national laws, Ramsar Convention, water management, wetlands

Procedia PDF Downloads 189
1397 Energy Storage Modelling for Power System Reliability and Environmental Compliance

Authors: Rajesh Karki, Safal Bhattarai, Saket Adhikari

Abstract:

Reliable and economic operation of power systems is becoming extremely challenging with large-scale integration of renewable energy sources, due to the intermittency and uncertainty associated with renewable power generation. It is, therefore, important to make a quantitative risk assessment and explore potential resources to mitigate such risks. This paper presents probabilistic models for different energy storage systems (ESS), such as the flywheel energy storage system (FESS) and compressed air energy storage (CAES), incorporating the specific charge/discharge performance and failure characteristics needed for probabilistic risk assessment in power system operation and planning. The proposed methodology used in FESS modelling offers flexibility to accommodate different configurations of plant topology. As CAES is perceived to have high potential for grid-scale application, a hybrid approach is proposed that embeds a Monte Carlo simulation (MCS) method within an analytical technique to develop a suitable reliability model of the CAES. The proposed ESS models are applied to a test system to investigate the economic and reliability benefits of the energy storage technologies in system operation and planning, and to assess their contributions to facilitating wind integration under different operating scenarios and system configurations. A comparative study considering various storage system topologies is also presented. The impacts of the failure rates of critical ESS components on the expected state of charge (SOC) and on the performance of the different types of ESS during operation are illustrated with selected studies on the test system.
The conclusions drawn from the study results provide valuable information to help policymakers, system planners, and operators in arriving at effective and efficient policies, investment decisions, and operating strategies for planning and operation of power systems with large penetrations of renewable energy sources.
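The idea of estimating an expected state of charge by random sampling can be sketched in a few lines. The model below is purely illustrative, not the paper's CAES or FESS model: it draws random per-hour converter failures and random net charging power, then averages the end-of-day SOC over many trials. All parameter values (capacity, failure rate, charging range) are assumptions made for the sketch.

```python
import random

def simulate_ess_soc(hours=24, trials=2000, capacity_mwh=10.0,
                     charge_mw=2.0, fail_rate=0.01, seed=42):
    """Monte Carlo estimate of the expected state of charge (SOC) of a
    simplified ESS whose converter can fail with a per-hour probability.
    All parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    total_soc = 0.0
    for _ in range(trials):
        soc = 0.5 * capacity_mwh          # start each trial half full
        available = True
        for _ in range(hours):
            if available and rng.random() < fail_rate:
                available = False          # component failure: unit offline
            if available:
                # net hourly charge/discharge from variable wind surplus,
                # clipped to the physical SOC limits
                surplus = rng.uniform(-charge_mw, charge_mw)
                soc = min(capacity_mwh, max(0.0, soc + surplus))
        total_soc += soc
    return total_soc / trials

expected_soc = simulate_ess_soc()
print(round(expected_soc, 2))
```

A real study would embed such sampled failure behaviour inside an analytical adequacy index (e.g., loss-of-load expectation) rather than report the raw SOC average.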

Keywords: flywheel energy storage, compressed air energy storage, power system reliability, renewable energy, system planning, system operation

Procedia PDF Downloads 109
1396 The Role of Autophagy Modulation in Angiotensin-II Induced Hypertrophy

Authors: Kitti Szoke, Laszlo Szoke, Attila Czompa, Arpad Tosaki, Istvan Lekli

Abstract:

Autophagy plays an important role in cardiac hypertrophy, which is one of the most common causes of heart failure in the world. This self-degradative catabolic process is responsible for protein quality control, balancing energy sources at critical times, and eliminating damaged organelles. Autophagic activity can be triggered by starvation, oxidative stress, or pharmacological agents such as rapamycin, and such induced autophagy can promote cell survival during starvation or pathological stress. In this study, we investigated the effect of induced autophagy on angiotensin-II (AT-II)-induced hypertrophy in H9c2 cells, which served as our in vitro model. To induce hypertrophy, cells were treated with 10000 nM angiotensin-II, and to activate autophagy, 100 nM rapamycin treatment was used. The following groups were formed: 1: control; 2: 10000 nM AT-II; 3: 100 nM rapamycin; 4: 100 nM rapamycin pretreatment followed by 10000 nM AT-II. Cell viability was examined via the MTT cell proliferation assay. The cells were stained with rhodamine-conjugated phalloidin and DAPI to visualize F-actin filaments and cell nuclei, and alterations in cell size were then examined under a fluorescence microscope. Furthermore, the expression levels of autophagic and apoptotic proteins such as Beclin-1, p62, LC3B-II, and Cleaved Caspase-3 were evaluated by Western blot. The MTT assay results suggest that the pharmaceutical agents used, at the tested concentrations, had no toxic effect, although a slight decrease in viability was detected in group 3. In response to AT-II treatment, a significant increase in cell size was detected; the cells became hypertrophic. However, rapamycin pretreatment slightly reduced cell size compared to group 2. Western blot results showed that AT-II treatment induced autophagy, as increased expression of Beclin-1, p62, and LC3B-II was observed. However, due to incomplete autophagy, expression of apoptotic Cleaved Caspase-3 also increased. Rapamycin pretreatment up-regulated Beclin-1 and LC3B-II and down-regulated p62 and Cleaved Caspase-3, indicating that rapamycin-induced autophagy can restore normal autophagic flux. Taken together, our results suggest that rapamycin-activated autophagy reduces angiotensin-II-induced hypertrophy.

Keywords: angiotensin-II, autophagy, H9c2 cell line, hypertrophy, rapamycin

Procedia PDF Downloads 132
1395 Mothers’ Experiences of Continuing Their Pregnancy after Prenatally Receiving a Diagnosis of Down Syndrome

Authors: Sevinj Asgarova

Abstract:

Within the last few decades, major advances in the field of prenatal testing have transpired, yet little research has been undertaken regarding the experiences of mothers who chose to continue their pregnancies after prenatally receiving a diagnosis of Down Syndrome (DS). Using social constructionism and interpretive description, this retrospective study explores the topic from the point of view of the mothers involved and provides insight into how the experience could be improved. Using purposive sampling, 23 mothers were recruited from British Columbia (n=11) and Ontario (n=12) in Canada. Data retrieved through semi-structured in-depth interviews were analyzed using inductive, constant comparative analysis, the major analytical technique of interpretive description. Four primary phases emerged from the data analysis: 1) healthcare professional–mother communication, 2) initial emotional response, 3) subsequent decision-making, and 4) adjustment and reorganization of lifestyle in preparation for the birth of the child. This study validates the individualized and contextualized nature of mothers' decisions as influenced by multiple factors, with moral values and spiritual beliefs being significant. The mothers' ability to cope was affected by the information communicated to them about their unborn baby's diagnosis and the manner in which that information was delivered. Mothers used emotional coping strategies, dependent upon support from partners, family, and friends, as well as from other families who have children with DS. Additionally, they employed practical coping strategies, such as engaging in healthcare planning, seeking relevant information, and reimagining and reorganizing their lifestyle. Over time, many families gained a sense of control over their situation and readjusted in preparation for the birth of the child. Many mothers expressed the importance of maintaining positivity and hopefulness with respect to positive outcomes and opportunities for their children. The comprehensive information generated through this study will provide healthcare professionals with relevant information to help them understand the informational and emotional needs of these mothers. This should improve their practice and enhance their ability to intervene appropriately and effectively, enabling them to better support parents dealing with a diagnosis of DS for their child.

Keywords: continuing affected pregnancy, decision making, disability, down syndrome, eugenic social attitudes, inequalities, life change events, prenatal care, prenatal testing, qualitative research, social change, social justice

Procedia PDF Downloads 87
1394 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, i.e., the accumulation of gathered data at each sensor node; and (2) transferring the collected data to a processing center (e.g., a fusion server) within the SN. An underwater sensor network is therefore a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards implementing this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions, such as the localization of underwater moving objects, that can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design of complex systems can be improved through MDE technologies and a domain-specific modeling language with its associated tooling. The major improvement is an early validation step, via models and simulation, to consolidate the system design.
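The two-phase view of a sensor network described above can be made concrete with a toy sketch: each node accumulates readings locally (phase 1) and then transfers them to a fusion server (phase 2), which combines them into a single estimate. The class names and the simple averaging rule are illustrative assumptions, not part of ArchiMO or the MO meta-model.

```python
class SensorNode:
    """Phase 1: a node (e.g., a hydrophone) accumulates readings locally."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = []

    def measure(self, value):
        self.buffer.append(value)

    def flush(self):
        """Phase 2: hand the buffered data over for transfer."""
        data, self.buffer = self.buffer, []
        return self.node_id, data

class FusionServer:
    """Processing center that combines readings from all nodes."""
    def __init__(self):
        self.readings = {}

    def receive(self, node_id, data):
        self.readings.setdefault(node_id, []).extend(data)

    def fused_estimate(self):
        values = [v for data in self.readings.values() for v in data]
        return sum(values) / len(values)

hydrophones = [SensorNode(i) for i in range(3)]
for level, node in zip([10.0, 12.0, 11.0], hydrophones):
    node.measure(level)

server = FusionServer()
for node in hydrophones:
    server.receive(*node.flush())
print(server.fused_estimate())   # → 11.0
```

A real observatory would fuse richer data (timestamps, positions) to support functions such as localization, but the accumulate-then-transfer structure is the same.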

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 156
1393 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities for reaching it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attractiveness of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age above 55, and other factors are important for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations generating the most trips to the airport were identified. The top trip-generating district was District 2 of Tehran, with 23 trips, and the most popular mode of transportation from that location was online taxi, with 12 trips. Then, the variables significant in separating the travel modes used to access the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, passengers' willingness to reach the airport via public transportation was explored, in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using a binary model, it became clear that business travelers and people who had already driven to the airport were the least likely to change modes.
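The binary model referred to above is typically a logit specification. The sketch below shows the functional form with purely illustrative coefficients; only the signs follow the paper's finding that business travelers and people who previously drove are the least likely to switch to public transit. The variable names and every numeric value are assumptions for the sketch, not estimates from the study.

```python
import math

def p_switch_to_transit(access_time_min, is_business, drove_before,
                        b0=0.8, b_time=-0.05, b_biz=-1.2, b_drove=-1.0):
    """Binary logit: probability of switching to public transit for the
    airport access trip. Coefficient values are illustrative only."""
    u = (b0 + b_time * access_time_min
         + b_biz * is_business + b_drove * drove_before)
    return 1.0 / (1.0 + math.exp(-u))   # logistic link

# A non-business traveler vs. a business traveler who previously drove,
# both with a 20-minute transit access time:
p_leisure = p_switch_to_transit(20, 0, 0)
p_business = p_switch_to_transit(20, 1, 1)
print(p_business < p_leisure)   # → True
```

In practice the coefficients would be estimated by maximum likelihood from the questionnaire data, but the qualitative comparison above mirrors the reported result.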

Keywords: multimodal transportation, demand modeling, travel behavior, statistical models

Procedia PDF Downloads 149
1392 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal

Authors: Anindita Patra, Prasad K. Bhaskaran

Abstract:

The atmosphere and ocean form a dynamically linked system that governs the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism along the air-sea interface that drives the inter-annual variability of the Earth's climate system. Most importantly, the ocean and atmosphere act in tandem as a coupled system, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work attempts to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing surface ocean variables. More specifically, the influence of atmospheric circulation and its variability on the mean Sea Level Pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis was carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), at different vertical levels in the atmosphere (from the surface to the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study shows that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study finds that, in the lower atmosphere, the geopotential heights depict a typical east-west contrast, exhibiting a zonal dipole behavior over the study domain. In addition, the study clearly shows that variations over different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as evidenced by the trends in SLP and the associated surface wind speed and significant wave height over the study domain.
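At each grid point and vertical level, the trend analysis described above reduces to fitting a least-squares line to a 1992-2012 yearly series. A minimal sketch, using a synthetic series rather than NCEP-DOE Reanalysis 2 data:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope and intercept for a yearly series,
    the basic operation behind a simple trend analysis."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    sxx = sum((x - mean_x) ** 2 for x in years)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = list(range(1992, 2013))                     # study period 1992-2012
temps = [26.0 + 0.02 * (y - 1992) for y in years]   # synthetic warming signal
slope, _ = linear_trend(years, temps)
print(round(slope, 3))   # → 0.02
```

A full analysis would also test the slope for statistical significance (e.g., with a t-test or Mann-Kendall test) before interpreting it as a trend.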

Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves

Procedia PDF Downloads 213
1391 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with number of components = 21. Fifteen components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted in the R language on this dataset of 37 rows (subjects) and 15 features (mean signal per network). The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensori-motor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
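The pipeline (split, train RF and SVM, rank features, retrain on the top feature) can be sketched compactly, here in Python with scikit-learn rather than the paper's R code, on synthetic data of the same shape (37 subjects × 15 features). Note that RFE is run with a linear-kernel SVM because scikit-learn's RFE requires coefficient-based rankings, whereas the study's final classifier was an RBF-SVM; the data and all settings below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))        # 37 subjects, 15 network mean signals
y = (X[:, 0] > 0).astype(int)        # synthetic: feature 0 carries the labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          random_state=0)

# Rank features by RF's intrinsic Gini-based importance
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rf_best = int(np.argmax(rf.feature_importances_))

# Rank features by recursive feature elimination with a linear SVM
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_best = int(np.argmax(rfe.support_))

# Retrain on the single top-ranked feature, as in the paper's final step
rf_single = RandomForestClassifier(n_estimators=200, random_state=0)
rf_single.fit(X_tr[:, [rf_best]], y_tr)
acc = rf_single.score(X_te[:, [rf_best]], y_te)
```

With a test set of only 10 subjects, as here and in the study, each misclassification moves accuracy by 10 percentage points, which is why single-subject results (like the misclassified low-lesion-volume patient) are worth inspecting individually.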

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 226
1390 Narcissism and Kohut's Self-Psychology: Self Practices in Service of Self-Transcendence

Authors: Noelene Rose

Abstract:

The DSM has been plagued by conceptual issues since its inception, not least issues of discriminant validity and comorbidity. The attempt to remain a-theoretical amid the divide between the psychodynamicists and the behaviourists contributed to much of this, particularly with regard to the Personality Disorders. In the DSM-5, although the criteria have remained unchanged, major conceptual and structural directions have been flagged and proposed in Section III, and the biggest changes concern the Personality Disorders. While Narcissistic Personality Disorder (NPD) was initially tagged for removal, Section III instead proposes a move away from a categorical approach to a more dimensional one, with a measure of global functioning of personality. This global measure is an assessment of impairment in self-other relations; a measure of trait narcissism. Just as mainstream psychology has struggled with the diagnosis of narcissism, so too has it struggled with its treatment. Kohut's self psychology represents the most significant inroad in theory and treatment for the narcissistic disorders. Kohut moved away from a categorical system towards disorders of the self. According to this theory, disorders of the self are the result of childhood trauma (impaired attunement) resulting in a developmental arrest. Self-psychological, psychodynamic treatment of narcissism, however, is expensive in time and money, and beyond the awareness or access of most people. There is more than a suggestion that narcissism is on the increase, created in trauma and worsened by a fearful world climate. A dimensional model of narcissism, from mild to severe, requires cut-off points for diagnosis. But where do we draw the line? Mainstream psychology is inclined to set it high, at the point where there is some degree of impairment in functioning in daily life. Transpersonal psychology is inclined to set it low, on the view that we all have some degree of narcissism and that it is the point, and the path, of our life journey to transcend our focus on ourselves. Mainstream psychology stops its focus on trait narcissism at a healthy level of self-esteem, but it is at this point that transpersonal psychology can complement the discussion. From a transpersonal point of view, failure to begin the process of self-transcendence will also create emotional symptoms, of lost meaning or purpose, often later in our lives, and is also conceived of as a developmental arrest. The maps for this transcendence are hidden in plain sight: in the chakras of kundalini yoga, in the sacraments of the Catholic Church, in the Kabbalah tree of life of Judaism, and in Maslow's hierarchy of needs, to name a few. This paper outlines proposed research exploring the use of daily practices that can be incorporated into the therapy room; practices that utilise meditation, visualisation and imagination, that are informed by spiritual technology and guided by the psychodynamic theory of Self Psychology.

Keywords: narcissism, self-psychology, self-practice, self-transcendence

Procedia PDF Downloads 243