Search results for: finite difference simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10901


311 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions. Evaluating the environmental efficiency (EE) of power systems is therefore a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats decision-making units (DMUs) as independent, neglecting the interactions between them. Ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act in the opposite way. This study proposes a spatial network DEA (SNDEA) with a slack measure that captures the spatial spillover effects of inputs/outputs among DMUs. The approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables show significant spatial spillover effects. Compared with the classic network DEA, the SNDEA results show a clear difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system shows a visible surge from 2015 and then a sharp downturn from 2019, mirroring the trend of the power transmission department.
The surge reflects the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that in 2014, while power generation was 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power rises rapidly as RE power generation grows. These two aspects explain the declining EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method based on the DEA framework, which sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields and scales, such as industries and countries.
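The global Moran's I statistic used above to test for spatial dependence can be sketched in a few lines. The efficiency scores and the binary contiguity matrix below are hypothetical toy values for illustration, not the paper's data.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: n/S0 * (z' W z) / (z' z), with z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    w = np.asarray(w, dtype=float)
    return x.size * (z @ w @ z) / (w.sum() * (z @ z))

# Toy example: efficiency scores for four regions arranged on a line,
# with a binary contiguity (neighbour) matrix.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
scores = [0.9, 0.8, 0.4, 0.3]
print(round(morans_i(scores, w), 3))  # positive: similar scores cluster
```

A value near zero indicates no spatial pattern; a clearly positive value, as here, indicates that neighbouring regions have similar efficiency scores.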

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 108
310 An Indispensable Parameter in Lipid Ratios to Discriminate between Morbid Obesity and Metabolic Syndrome in Children: High Density Lipoprotein Cholesterol

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a low-grade inflammatory disease and may lead to health problems such as hypertension, dyslipidemia, and diabetes. It is also associated with important risk factors for cardiovascular diseases. This requires the detailed evaluation of obesity, particularly in children. The aim of this study is to shed light on the potential associations between lipid ratios and obesity indices and to introduce those with discriminating features among children with obesity and metabolic syndrome (MetS). A total of 408 children (aged between six and eighteen years) participated in the study. Informed consent forms were obtained from the participants and their parents, and Ethical Committee approval was obtained. Anthropometric measurements such as weight and height, as well as waist, hip, head, and neck circumferences and body fat mass, were taken. Systolic and diastolic blood pressure values were recorded. Body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index), and waist-to-hip and head-to-neck ratios were calculated. Total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) analyses were performed in blood samples drawn from 110 children with normal body weight, 164 morbid obese (MO) children, and 134 children with MetS. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used to classify the groups: normal body weight, MO, and MetS. The 15th-to-85th percentiles were used to define normal body weight children; children whose values were above the 99th percentile were described as MO. MetS criteria were defined. Data were evaluated statistically with SPSS Version 20. The degree of statistical significance was accepted as p≤0.05. Mean±standard deviation values of BMI for normal body weight children, MO children, and those with MetS were 15.7±1.1, 27.1±3.8, and 29.1±5.3 kg/m2, respectively.
Corresponding values for the D2 index were calculated as 3.4±0.9, 14.3±4.9, and 16.4±6.7. Both BMI and the D2 index were capable of discriminating the groups from one another (p≤0.01). As far as the other obesity indices were concerned, waist-to-hip and head-to-neck ratios did not exhibit any statistically significant difference between the MO and MetS groups (p≥0.05). The diagnostic obesity notation model assessment index-II was correlated with the triglycerides-to-HDL-C ratio in the normal body weight and MO groups (r=0.413, p≤0.01 and r=0.261, p≤0.05, respectively). Total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios showed statistically significant differences between normal body weight and MO as well as between MO and MetS (p≤0.05). The only group in which these two ratios were significantly correlated with the waist-to-hip ratio was the MetS group (r=0.332 and r=0.334, p≤0.01, respectively). The lack of correlation between the D2 index and the triglycerides-to-HDL-C ratio was another important finding in the MetS group. In this study, parameters and ratios previously associated with increased cardiovascular risk or cardiac death have been evaluated along with obesity indices in children with morbid obesity and MetS, and their profiles during childhood have been investigated. Aside from the nature of the correlation between the D2 index and the triglycerides-to-HDL-C ratio, the total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios, along with their correlations with the waist-to-hip ratio, showed that a combination of obesity-related parameters predicts better than a single parameter and appears to be helpful for discriminating MO children from the MetS group.

Keywords: children, lipid ratios, metabolic syndrome, obesity indices

Procedia PDF Downloads 158
309 Importance of Remote Sensing and Information Communication Technology to Improve Climate Resilience in Low Land of Ethiopia

Authors: Hasen Keder Edris, Ryuji Matsunaga, Toshi Yamanaka

Abstract:

The issue of climate change and its impact is a major contemporary global concern. Ethiopia is one of the countries experiencing adverse climate change impacts, including frequent extreme weather events that are exacerbating drought and water scarcity. For this reason, the government of Ethiopia developed a strategic document focusing on a climate-resilient green economy. One of the major components of the strategic framework is designed to improve community adaptation capacity and drought mitigation. For effective implementation of the strategy, identification of regions' relative vulnerability to drought is vital. There is a growing tendency to apply Geographic Information System (GIS) and remote sensing technologies for collecting information on the duration and severity of drought through direct measurement of the topography as well as indirect measurement of land cover. This study aims to show an application of remote sensing technology and GIS for developing a drought vulnerability index, taking the lowlands of Ethiopia as a case study. In addition, it assesses the integrated Information Communication Technology (ICT) potential of the Ethiopian lowlands and proposes an integrated solution. Satellite data are used to detect the onset of drought. The severity of drought in risk-prone pastoral livestock-keeping areas is analyzed through the normalized difference vegetation index (NDVI) and ten years of rainfall data. The change between the current and average SPOT NDVI, together with the vegetation condition index, is used to identify the onset of drought and potential risks. Secondary data are used to analyze the geographical coverage of mobile and internet usage in the region. For decades, the government of Ethiopia has introduced technologies and approaches to overcome climate change related problems. However, lack of access to information and inadequate technical support for the pastoral areas remain major challenges.
Under the conventional business-as-usual approach, lowland pastoralists continue to face a number of challenges. The results indicate that 80% of the region faces frequent drought occurrence, and of this, 60% of the pastoral area faces high drought risk. On the other hand, mobile phone and internet coverage in the target area is growing rapidly. One identified enabling technology for ICT solutions is the telecom center, which covers 98% of the region. It was possible to identify the frequently affected areas and potential drought risk using the NDVI remote-sensing data analyses. We also found that ICT can play an important role in mitigating the climate change challenge. Hence, there is a need to strengthen the implementation of climate change adaptation through integrated remote sensing, web-based information dissemination, and mobile alerts for extreme events.
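The NDVI and vegetation condition index calculations described above can be sketched as follows; the reflectance values and historical NDVI extremes are hypothetical single-pixel numbers, not the study's satellite data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def vci(ndvi_now, ndvi_min, ndvi_max):
    """Vegetation condition index (%): current NDVI relative to the
    historical minimum and maximum for the same pixel and period."""
    return 100.0 * (ndvi_now - ndvi_min) / (ndvi_max - ndvi_min)

# Hypothetical pixel reflectances and historical NDVI extremes
current = ndvi(nir=[0.45], red=[0.30])
print(current, vci(current, ndvi_min=0.1, ndvi_max=0.6))
```

A VCI well below 50% for a pixel flags vegetation stress relative to that pixel's own history, which is how drought onset is typically separated from permanently sparse land cover.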

Keywords: climate changes, ICT, pastoral, remote sensing

Procedia PDF Downloads 315
308 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning

Authors: Thomas James Bell III

Abstract:

Pedagogy is thought of as one's philosophy, theory, or teaching method. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds both challenges and promise for enhancing the learning environment if implemented to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle the development of critical thinking and self-expression skills. Liberationist pedagogy, by contrast, dismisses the notion that education is merely about students learning things; it is more about the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student. The teacher is no longer viewed as the sage on the stage but as a guide on the side. Students are instead viewed as creators of knowledge, not empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and in which areas improvement is needed. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that accommodates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rate results of 124 students in six sections of an Agile scrum course.
The students will be separated into two groups: the first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer students) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use an independent-samples t-test to compare the mean exam pass rate of one group with that of the other. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings are expected to extend the pedagogical debate beyond the view that pedagogy exists primarily in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. In light of the fourth industrial revolution, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
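The statistical design above (comparing mean pass rates with a one-tailed two-sample t-test) can be sketched with a pooled t statistic; the pass/fail vectors below are invented placeholder data, not the study's results.

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled two-sample t statistic for comparing two group means."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = a.size, b.size
    # pooled variance from the two sample variances (ddof=1)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical pass/fail outcomes (1 = passed the PSM I exam), not real data
instructor_led = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # mean 0.7
self_led = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])        # mean 0.8
t = two_sample_t(self_led, instructor_led)
# For the one-tailed alternative (self-led > instructor-led), compare t with
# the upper critical value of Student's t at na + nb - 2 degrees of freedom.
print(round(t, 3))
```

With the one-tailed alternative, only a sufficiently large positive t rejects the null that the population difference is zero.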

Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education

Procedia PDF Downloads 142
307 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data for 2013 to 2016, collected from the actual sales price registration system of the Department of Land Administration (DLA), are used in this study. The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The result also shows that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature indicates, by applying GWR, that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention might vary dramatically by location.
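The core of GWR, fitting a separate weighted least-squares regression at every location with kernel weights that decay with distance, can be sketched as below. The coordinates, flood dummy, and spatially varying coefficient are simulated toy data, and the Gaussian kernel with a fixed bandwidth is one common choice among several.

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Geographically weighted regression: a local weighted least-squares
    fit at every location, with Gaussian kernel weights over distance."""
    Xd = np.column_stack([np.ones(len(X)), X])   # add intercept column
    betas = []
    for c in coords:
        d = np.linalg.norm(coords - c, axis=1)   # distances to this location
        w = np.exp(-0.5 * (d / bandwidth) ** 2)  # Gaussian kernel weights
        XtW = Xd.T * w                           # row-scale X' by the weights
        betas.append(np.linalg.solve(XtW @ Xd, XtW @ y))
    return np.array(betas)

# Hypothetical data: price vs. a flood-risk dummy whose (negative) effect
# strengthens from west to east, so the local coefficient varies in space.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
flood = rng.integers(0, 2, size=50).astype(float)
y = 10.0 + (-1.0 - 0.2 * coords[:, 0]) * flood + rng.normal(0, 0.1, size=50)
b = gwr_coefficients(flood.reshape(-1, 1), y, coords, bandwidth=2.0)
print(b[:, 1].min(), b[:, 1].max())  # the local flood coefficients vary
```

Mapping the second column of `b` shows where the flood-risk discount is strongest, which is exactly the kind of spatial sensitivity the study reports.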

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 290
306 The Effectiveness of Insider Mediation for Sustainable Peace: A Case Study in Mindanao, the Philippines

Authors: Miyoko Taniguchi

Abstract:

Conflict and violence have prevailed over the last four decades in conflict-affected areas of Muslim Mindanao, despite the signing of several peace agreements between the Philippine government and Islamic separatist insurgents (the Moro National Liberation Front (MNLF) and the Moro Islamic Liberation Front (MILF)) and peacebuilding activities on the ground. In the meantime, the peace talks have been facilitated and mediated by international actors such as the Organization of Islamic Cooperation (OIC) and its member countries, such as Indonesia and Malaysia, as well as Japan. In 2014, the Government of the Philippines (GPH) and the MILF finally reached a Comprehensive Agreement on the Bangsamoro (CAB) under the Aquino III administration, though a Bangsamoro Basic Law (BBL) based on the CAB was not enacted by the Catholic-majority Philippine Congress. After a long process of deliberations in Congress, Republic Act 11054, known as the Bangsamoro Organic Law (BOL), was enacted in 2018 under the Duterte administration. From the beginning, President Duterte adopted an 'inclusive approach' that involves the MILF, all factions of the MNLF, non-Islamized indigenous peoples, and other influential clan leaders, aligning all peace processes under a single Bangsamoro peace process. In a notable difference from past administrations, there is an explicit recognition of all agreements and legislation based on the rights of each stakeholder. This created a new identity as 'Bangsamoro', the residents of Muslim Mindanao, enhancing political legitimacy. In addition, the important role of 'insider mediators' should be noted: a platform for the Bangsamoro from diverse sectors attempting to work within their respective organizations in Moro society. Given the above background, this paper aims at probing the effectiveness of insider mediation as an alternative approach to mediation in the peace process.
For these objectives, this research uses qualitative methods such as process-tracing and semi-structured interviews with diverse groups of stakeholders from the state to the regional level, including government officials involved in the peace process under the Presidential Office, rebels (MILF and MNLF), and civil society organizations involved in lobbying for and facilitating the peace process, especially in the legislative process. The key finding is that the Insider Mediators Group, formed in 2016, took on a significant role in facilitating a wider consensus among stakeholders on major Moro issues, such as the passage of the BBL during the last administration, to call for unity among the Bangsamoro. Most of its members are well-educated professionals affiliated with the MILF, the MNLF, and influential clans. One of the group's biggest achievements has been the lobbying and provision of legal advice to legislators who were not necessarily knowledgeable about the peace process during the deliberation of the bicameral conference on the BBL, which eventually led to its passage. It can be concluded that, in the long run, strengthening vertical and horizontal relations between Moro society and the state, and among the Moro peoples, can be viewed as a means to sustainable peace.

Keywords: insider mediation, Mindanao, peace process, Moro Islamic liberation front

Procedia PDF Downloads 119
305 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes, and steel alloying owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are spent molybdenum catalysts used in the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency, and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo (870 ppm), Co (341 ppm), Al (508 ppm), and Fe (42 ppm) was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current process at A/O = 1:1, with negligible extraction of Co and Al. Iron, however, was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM, and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite Syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3, and EDX analysis shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesized MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants, and catalysis.
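The stage count from a McCabe-Thiele construction can be cross-checked, if one assumes a linear extraction isotherm (our simplifying assumption, not the paper's), with the Kremser equation. The sketch below back-calculates a distribution ratio from the reported 85% single-stage extraction and asks how many ideal counter-current stages the reported 95.4% recovery would require.

```python
import numpy as np

# Back-calculate a distribution ratio D from the reported 85% extraction
# in a single contact at A/O = 1:1 (linear-isotherm assumption, ours).
D = 0.85 / 0.15
E = D * 1.0  # extraction factor E = D * (O/A)

def stages_needed(R, E):
    """Ideal counter-current stages for fractional recovery R by the
    Kremser equation (linear isotherm, solute-free entering solvent)."""
    return np.log((E - R) / (1 - R)) / np.log(E) - 1

# One ideal stage reproduces the 85% figure; the reported 95.4% recovery
# rounds up to two ideal stages, consistent with the McCabe-Thiele result.
print(stages_needed(0.85, E), int(np.ceil(stages_needed(0.954, E))))
```

The real isotherm is not linear, so this is only an order-of-magnitude check on the graphical staging, not a replacement for it.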

Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery

Procedia PDF Downloads 268
304 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, park, and recreational areas. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. The first goal was therefore to complete and update the modelling of all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness.
In order to calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested because it is an approximate representation of the catchment shape. Surface roughness coefficients were also calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all within acceptable ranges.
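The goodness-of-fit measures quoted above (correlation coefficient, peak error, volume error) can be computed from paired observed and simulated flow series as sketched below; the flow values are invented for illustration, not the Edmonton monitoring data.

```python
import numpy as np

def calibration_stats(obs, sim):
    """Goodness-of-fit measures used to judge a hydraulic model calibration."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                        # correlation coefficient
    peak_err = (sim.max() - obs.max()) / obs.max() * 100   # peak error, %
    vol_err = (sim.sum() - obs.sum()) / obs.sum() * 100    # volume error, %
    return r, peak_err, vol_err

# Hypothetical observed vs. simulated pipe flows (m3/s) at one station
obs = [0.2, 0.5, 1.1, 0.8, 0.4]
sim = [0.25, 0.45, 1.05, 0.85, 0.45]
r, pe, ve = calibration_stats(obs, sim)
print(f"r={r:.3f}, peak error={pe:.1f}%, volume error={ve:.1f}%")
```

Correlation alone can hide a systematic bias, which is why volume and peak errors are reported alongside it.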

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 297
303 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is a puncturing method that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but the evidence of its effectiveness is still inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN, with and without ultrasound guidance, for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched manually in November 2020: CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020). Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk-of-bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, and eight of them were high-quality papers according to the PEDro score. There were variations in the techniques of DN, including the direction and depth of insertion, number of needles, duration of stay, needle manipulation, and number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -2.511 to -0.588) and high heterogeneity (P=0.002, I²=96.3%).
In subgroup analysis, DN demonstrated significant effects on pain reduction in PFPS (p < 0.001) that could not be found in subjects with KOA (P=0.302). At 3 months post-intervention, DN also induced significant pain reduction in both subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI = -1.699 to -0.133) and high heterogeneity (P=0.022, I²=95.63%). In addition, DN induced significant short-term improvement in function when the analysis was conducted on both the KOA and PFPS groups, with an overall SMD of 6.069 (95% CI = 3.544 to 8.595) and high heterogeneity (P<0.001, I²=98.56%). In subgroup analysis, only PFPS showed a positive result (SMD=6.089, P<0.001), while the short-term effect in KOA was statistically insignificant (P=0.198). Similarly, at 3 months post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 2.428 to 9.252) and high heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction at short-term and 3-month post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
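The pooling of per-study SMDs with an I² heterogeneity estimate, as reported above, is commonly done with a DerSimonian-Laird random-effects model; a minimal sketch follows. The per-study effects and variances below are invented, not the eight included trials.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model; returns pooled SMD and I^2 heterogeneity (%)."""
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1 / v                                   # inverse-variance weights
    k = d.size
    q = np.sum(w * (d - np.sum(w * d) / w.sum()) ** 2)  # Cochran's Q
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100      # heterogeneity, %
    w_star = 1 / (v + tau2)                     # random-effects weights
    return np.sum(w_star * d) / w_star.sum(), i2

# Hypothetical per-study SMDs (negative = pain reduction) and variances
smd = [-2.1, -0.6, -1.8, -0.9]
var = [0.10, 0.08, 0.12, 0.09]
pooled, i2 = dersimonian_laird(smd, var)
print(f"pooled SMD = {pooled:.2f}, I2 = {i2:.1f}%")
```

A high I², as in the review's 95-99% values, signals that the studies disagree well beyond sampling error, which is why the random-effects weighting is used rather than a fixed-effect pool.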

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 134
302 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, and the friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), where abundant literature addresses many aspects ranging from process implementation to characterization and modeling, there are still few research works focusing on AFSM, and there is therefore still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, finally, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, during which the system composed of the tool, the filler material, and the substrate heats up due to pure friction. Analytical modeling of friction-based heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the roughness of the materials in contact over time.
This study proposes, through numerical modeling followed by experimental validation, to investigate the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as axial force and rotational speed, strongly influences the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
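The analytic friction-heat model described above can be sketched numerically. Assuming a flat circular tool face with uniform contact pressure (an assumption of this sketch, not a detail stated in the abstract), integrating the local frictional power mu*p*omega*r over the contact area gives Q = (2/3)*pi*mu*p*omega*R^3:

```python
import math

def friction_heat(mu, pressure, omega, radius):
    """Total frictional heat input (W) at a flat circular tool/substrate
    interface with uniform contact pressure and angular speed omega:
    Q = integral of mu*p*omega*r over the area = (2/3)*pi*mu*p*omega*R^3."""
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius ** 3

# Illustrative values only (not the study's data): 10 mm tool radius,
# 100 MPa contact pressure, 400 rpm, friction coefficient 0.3.
omega = 400 * 2 * math.pi / 60  # rpm -> rad/s
q = friction_heat(mu=0.3, pressure=100e6, omega=omega, radius=0.010)
```

A temperature-dependent friction coefficient, as discussed in the abstract, would simply make `mu` a function of the current temperature inside a time-stepping loop.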

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 147
301 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from a wider, diffraction-limited area of the laser waist that might contain other substances. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination excites surface plasmons. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and to a search for grating parameters that enhance the near field beneath the tip apex. The aim is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our model, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating modulates the amplitude and phase of the incident field in various ways depending on its geometry and material. The phase-modulating grating on the probe acts as a kind of metasurface that manipulates the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light.
During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by balancing the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
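The phase-matching condition fixes the grating period: a surface plasmon with effective index n_spp = k_spp/k0 is excited when k_spp = k0*sin(theta) + m*2*pi/period. A minimal sketch follows; the wavelength and effective index are assumed illustrative values, not the probe's actual parameters:

```python
import math

def grating_period(wavelength, n_spp, theta_deg=0.0, order=1):
    """Period satisfying k_spp = k0*sin(theta) + order*2*pi/period,
    i.e. period = order * wavelength / (n_spp - sin(theta))."""
    return order * wavelength / (n_spp - math.sin(math.radians(theta_deg)))

# Assumed example: 632.8 nm HeNe illumination, effective plasmon index
# 1.05, normal incidence. Higher orders ("overtones") give proportionally
# larger, easier-to-fabricate periods.
p1 = grating_period(632.8e-9, 1.05)            # first order
p2 = grating_period(632.8e-9, 1.05, order=2)   # second order, twice as long
```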

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283
300 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into account the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment, together with a corresponding numerical solution method built on an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the section near the drill bit.
This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has only a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas exists in the annulus in the form of free gas.

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 118
299 Isolation and Characterization of a Narrow-Host Range Aeromonas hydrophila Lytic Bacteriophage

Authors: Sumeet Rai, Anuj Tyagi, B. T. Naveen Kumar, Shubhkaramjeet Kaur, Niraj K. Singh

Abstract:

Since their discovery, the indiscriminate use of antibiotics in human, veterinary, and aquaculture systems has resulted in the global emergence and spread of multidrug-resistant bacterial pathogens. Thus, the need for alternative approaches to control bacterial infections has become of utmost importance. The high selectivity/specificity of bacteriophages (phages) permits the targeting of specific bacteria without affecting the desirable flora. In this study, a lytic phage (Ahp1) specific to Aeromonas hydrophila subsp. hydrophila was isolated from a finfish aquaculture pond. The host range of Ahp1 was tested against 10 isolates of A. hydrophila, 7 isolates of A. veronii, 25 Vibrio cholerae isolates, 4 V. parahaemolyticus isolates, and one isolate each of V. harveyi and Salmonella enterica collected previously. Except for the host A. hydrophila subsp. hydrophila strain, no lytic activity against any other bacterial isolate was detected. During the adsorption rate and one-step growth curve analysis, 69.7% of phage particles adsorbed onto the host cells, followed by the release of 93 ± 6 phage progenies per host cell after a latent period of ~30 min. Phage nucleic acid was extracted by column purification methods. After the phage nucleic acid was determined to be dsDNA, the phage genome was subjected to next-generation sequencing by generating paired-end (PE, 2 x 300 bp) reads on an Illumina MiSeq system. De novo assembly of the sequencing reads generated a circular phage genome of 42,439 bp with a G+C content of 58.95%. During open reading frame (ORF) prediction and annotation, 22 ORFs (out of 49 total predicted ORFs) were functionally annotated, and the rest encoded hypothetical proteins. The phage genome encoded proteins involved in major functions such as phage structure formation and packaging, DNA replication and repair, DNA transcription, and host cell lysis. The complete genome sequence of Ahp1, along with its gene annotation, was submitted to NCBI GenBank (accession number MF683623).
The stability of Ahp1 preparations at storage temperatures of 4 °C, 30 °C, and 40 °C was studied over a period of 9 months. At 40 °C, phage counts declined by 4 log units within one month, with a total loss of viability after 2 months. At 30 °C, the phage preparation was stable for less than 5 months. On the other hand, phage counts decreased by only 2 log units over the 9-month period during storage at 4 °C. As some phages have been reported to be glycerol sensitive, the stability of Ahp1 preparations in 0%, 15%, 30%, and 45% glycerol stocks was also studied during storage at -80 °C over a period of 9 months. The phage counts decreased by only 2 log units during storage, and no significant difference in phage counts was observed at the different glycerol concentrations. The Ahp1 phage discovered in our study had a very narrow host range, and it may be useful for phage typing applications. Moreover, the endolysin and holin genes in the Ahp1 genome could be ideal candidates for the recombinant cloning and expression of antimicrobial proteins.
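Sequence statistics such as the reported G+C content are straightforward to reproduce from an assembly; a generic sketch (a toy utility on a toy sequence, not tied to the Ahp1 FASTA) could be:

```python
def gc_content(seq):
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

# Toy sequence, for illustration only:
toy = "ATGCGCTA"
gc = gc_content(toy)  # 4 of 8 bases are G or C -> 50.0
```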

Keywords: Aeromonas hydrophila, endolysin, phage, narrow host range

Procedia PDF Downloads 162
298 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR from each basin was loaded into the Palisade @RISK software, and a log-normal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (at a 10% discount rate per year) and to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
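The workflow above (fit a log-normal to the EUR data, run 1,000 Monte Carlo iterations, read off P10/P50/P90, then discount cash flows at 10%/yr) can be sketched with the standard library; the log-normal parameters and cash flows below are placeholders, not the paper's dataset:

```python
import random

def eur_percentiles(mu, sigma, n=1000, seed=1):
    """Draw n EUR samples from a log-normal and return the P90/P50/P10
    values (petroleum convention: P90 is the low case, P10 the high case)."""
    rng = random.Random(seed)
    draws = sorted(rng.lognormvariate(mu, sigma) for _ in range(n))
    return draws[int(0.10 * n)], draws[int(0.50 * n)], draws[int(0.90 * n)]

def npv(cashflows, rate=0.10):
    """Net present value at a yearly discount rate (first flow at t = 0)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

p90, p50, p10 = eur_percentiles(mu=0.0, sigma=0.8)

# Hypothetical project: a 2.0 (million GBP) F&D cost followed by revenues.
project_npv = npv([-2.0, 1.2, 0.9, 0.7, 0.5, 0.4])
```

A scenario passes the paper's hurdle only if, in addition to a positive NPV, the rate of return reaches 20% and payback occurs within 60 months.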

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery

Procedia PDF Downloads 302
297 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained through CFD analysis. A simplified 3D model of the vehicle is analyzed using ANSYS Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients during the simulation process.
Employing a response-surface statistical approximation methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be used for several missions, allowing repeatability of microgravity experiments.
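Once a Cd is available for a given AoA, the terminal speed follows from the drag-weight balance rho*v^2*Cd*A/2 = m*g. A short sketch using the reported Cd_max together with an assumed mass and reference area (the abstract does not state these):

```python
import math

def terminal_speed(mass, cd, area, rho=1.225, g=9.81):
    """Speed at which drag equals weight: v = sqrt(2*m*g / (rho*Cd*A))."""
    return math.sqrt(2.0 * mass * g / (rho * cd * area))

# Cd_max = 1.18 is from the study; the 5 kg mass and 0.5 m^2 reference
# area are placeholder assumptions at sea-level air density.
v_min = terminal_speed(mass=5.0, cd=1.18, area=0.5)
```

Since v scales as 1/sqrt(Cd), the high-AoA circular-wing configuration (largest Cd) gives the lowest, safest terminal speed.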

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
296 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions but are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil, the vegetation cover, and all kinds of human activities and structures. However, from the moment human lives are at risk and significant economic impacts are recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods are unlikely to be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically the area of Peshghore, where floods have caused numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams, whose beds have been built over with houses and hotels or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The Analytic Hierarchy Process, normally called AHP, is a powerful yet simple method for making decisions.
It is commonly used for project prioritization and selection. AHP captures strategic goals as a set of weighted criteria that are then used to score alternatives; here, it provides the weight of each criterion contributing to flood events. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, flow direction, and flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from rivers, distance to roads, and slope), these led to the final flood risk map. Finally, according to this map, the priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
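A minimal sketch of the AHP weighting step using the row geometric mean method and Saaty's consistency ratio; the 3x3 pairwise matrix below is hypothetical (the study uses nine criteria whose pairwise judgments are not reproduced in the abstract):

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix via the row
    geometric mean method, plus Saaty's consistency ratio (CR)."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    weights = [g / sum(gm) for g in gm]
    # lambda_max estimated by averaging (A*w)_i / w_i over the rows.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                    7: 1.32, 8: 1.41, 9: 1.45}
    return weights, ci / random_index[n]

# Hypothetical judgments: slope vs. land use vs. distance from river.
w, cr = ahp_weights([[1.0, 3.0, 5.0],
                     [1 / 3, 1.0, 3.0],
                     [1 / 5, 1 / 3, 1.0]])
# CR below 0.10 means the judgments are acceptably consistent.
```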

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 117
295 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. This paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation focused on assembly process attributes, process equipment control, and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, a reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and by containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis.
The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it is subjected to the monitor phase, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and the effective use of development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows a focus on increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 174
294 Indigenous Children Doing Better through Mother Tongue Based Early Childhood Care and Development Center in Chittagong Hill Tracts, Bangladesh

Authors: Meherun Nahar

Abstract:

Background: The Chittagong Hill Tracts (CHT) is one of the most diverse regions in Bangladesh in terms of geography, ethnicity, culture, and traditions, and is home to thirteen indigenous ethnic peoples. In Bangladesh, many indigenous children aged 6-10 years remain out of school, and the majority of those who do enroll drop out before completing primary school. Different studies indicate that the dropout rate of indigenous children is much higher than the estimated national rate, with children dropping out especially in the early years of primary school. One of the most critical barriers for these children is that they do not understand the national language used in government pre-primary schools, so their school readiness and development become slower. In this situation, indigenous children are excluded from mainstream quality education. To address this issue, Save the Children in Bangladesh and other organizations are implementing a community-based Mother Tongue-Based Multilingual Education (MTBMLE) program in the Chittagong Hill Tracts to improve the enrolment rate in Government Primary Schools (GPS), reduce the dropout rate, and improve the quality of education. In this connection, Save the Children conducted comparative research in the Chittagong Hill Tracts on children's school readiness through mother tongue-based and non-mother tongue-based ECCD centers. Objectives of the Study: To assess the school readiness and development of children in mother language-based and non-mother language-based ECCD centers, and to assess community perceptions of mother language-based and non-mother language-based ECCD centers. Methodology: The study used FGDs, KIIs, in-depth interviews, and observation, following both qualitative and quantitative research methods. The quantitative part had three components (school readiness, classroom observation, and head teacher interviews), and the qualitative part followed the FGD technique.
Findings: The interviews with children under the school readiness component showed that, in general, children from Mother Language (ML)-based ECCD centers did noticeably better in all four areas (knowledge, numeracy, fine motor skills, and communication) than their peers from non-mother language (NML)-based centers. ML students appeared far better skilled in concepts about print, as most of them could identify the cover and title of the book shown to them. They also knew where to begin reading the book and could correctly point to the letter that was read. A big difference was found in letter identification: 89.3% of ML students could identify letters correctly, whereas only 30% of NML students could do the same. The classroom observation data show that ML children were more active and remained more engaged in the classroom than NML students. Also, teachers in ML centers appeared to engage more in explaining issues of general knowledge and in leading children in rhyming and singing, rather than simply reading from textbooks. The participants of the FGDs were very enthusiastic about using the mother language as the medium of teaching in pre-schools. They opined that this initiative motivates children to attend school and enables them to continue primary schooling without facing any language barrier.

Keywords: Chittagong hill tracts, early childhood care and development (ECCD), indigenous, mother language

Procedia PDF Downloads 117
293 The Display of Age-Period/Age-Cohort Mortality Trends Using 1-Year Intervals Reveals Period and Cohort Effects Coincident with Major Influenza A Events

Authors: Maria Ines Azambuja

Abstract:

Graphic displays of Age-Period-Cohort (APC) mortality trends generally use data aggregated into 5- or 10-year intervals. Technology now allows one to process far more data, and displaying occurrences by 1-year intervals is a logical first step toward higher-quality landscapes of variations in temporal occurrences. Method: 1) comparison of UK mortality trends plotted by 10-, 5-, and 1-year intervals; 2) comparison of UK and US mortality trends (period x age and cohort x age) displayed by 1-year intervals. Source: mortality data (period, 1x1, males, 1933-2012) were uploaded from the Human Mortality Database to Excel files, where period x age and cohort x age graphics were produced. Age-specific trends were transformed from calendar years to birth-cohort years (cohort = period - age), instead of using the cohort 1x1 data available at the HMD, to facilitate the comparison of age-specific trends when looking across calendar years and birth cohorts. Yearly live births (males, 1933 to 2012, UK) were uploaded from the HFD. Influenza references are from the literature. Results: 1) The use of 1-year intervals unveiled previously unsuspected period, cohort, and interacting period x cohort effects on all-cause mortality. 2) The UK and US figures showed variations associated with particular calendar years (1936, 1940, 1951, 1957-68, 1972) and, most surprisingly, with particular birth cohorts (1889-90 in the US, and 1900, 1918-19, 1940-41, and 1946-47 in both countries). The figures also showed ups and downs in age-specific trends initiated at particular birth cohorts (1900, 1918-19, and 1947-48) or particular calendar years (1968, 1972, and 1977-78 in the US), variations at times restricted to a range of ages (cohort x period interacting effects). Importantly, most of the identified 'scars' (period and cohort) correlate with the record of Influenza A epidemics since the late 19th century.
Conclusions: The use of 1-year intervals to describe APC mortality trends both increases the amount of information available, enhancing the opportunities for pattern recognition, and increases our ability to interpret those patterns by describing trends across smaller intervals of time (period or birth cohort). The US and UK mortality landscapes share many, but not all, 'scars' and distortions suggested here to be associated with influenza epidemics. Different size effects of wars are evident, both in mortality and in fertility. It is also realistic to suppose that the preponderant influenza A viruses circulating in the UK and US at the beginning of the 20th century may have differed, with intergenerational long-term consequences. Compared with the live-birth trend (UK data), birth-cohort scars clearly depend on cohort sizes relative to neighboring ones, which, if causally associated with influenza, would result from influenza-related fetal outcomes/selection. Fetal selection could introduce continuing modifications of population patterns of immune-inflammatory phenotypes that might give rise to 'epidemic constitutions' favoring the occurrence of particular diseases. Comparative analysis of mortality landscapes may help us to straighten our record of the past circulation of influenza viruses and to document associations between influenza recycling and fertility changes.
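The period-to-cohort transformation used in this study is a one-line re-indexing: the death rate observed at age a in calendar year t belongs to the birth cohort t - a. A sketch over a toy 1x1 table (the values are illustrative, not HMD data):

```python
def to_cohort_view(period_table):
    """Re-index {(age, year): rate} into {(age, birth_cohort): rate}
    using cohort = period - age."""
    return {(age, year - age): rate
            for (age, year), rate in period_table.items()}

# Two observations following one cohort (born 1920) across two years:
rates = {(30, 1950): 0.0020, (31, 1951): 0.0021}
by_cohort = to_cohort_view(rates)  # both entries land on cohort 1920
```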

Keywords: age-period-cohort trends, epidemic constitution, fertility, influenza, mortality

Procedia PDF Downloads 230
292 Abdominal Exercises Can Modify Abdominal Function in Postpartum Women: A Randomized Controlled Trial Comparing Curl-Up to Drawing-In Combined with Diaphragmatic Aspiration

Authors: Yollande Sènan Djivoh, Dominique de Jaeger

Abstract:

Background: Abdominal exercises are commonly practised nowadays. Specific abdominal strengthening techniques such as hypopressive exercises have recently emerged, and their practice is encouraged over the curl-up, especially in the postpartum period. The acute and training effects of these exercises do not allow one exercise to be recommended over another; nevertheless, physiotherapists remain reluctant to prescribe the curl-up to postpartum women because of its potential harmful effect on the pelvic floor. Design: This study was a randomized controlled trial registered under the number PACTR202110679363984. Objective: To observe the training effect of two experimental protocols (curl-up versus drawing-in plus diaphragmatic aspiration) on the abdominal wall (inter-recti distance, rectus and transversus abdominis thickness, abdominal strength) in Beninese postpartum women. Pelvic floor function (tone, endurance, urinary incontinence) was assessed to evaluate potential side effects of the exercises on the pelvic floor. Method: Postpartum women diagnosed with diastasis recti were randomly assigned to one of three groups (curl-up, drawing-in plus diaphragmatic aspiration, and control). Abdominal and pelvic floor parameters were assessed before and at the end of the 6-week protocol. The inter-recti distance and the abdominal muscle thicknesses were assessed by ultrasound, and abdominal strength by dynamometer. Pelvic floor tone and strength were assessed with biofeedback, and urinary incontinence was quantified by pad test. To compare the results between the three groups and the two measurements, a two-way ANOVA with repeated measures was used (p<0.05). When the interaction was significant, a post-hoc Student t test with Bonferroni correction was used to compare the three groups on the difference (end value minus initial value). To complete these results, a paired Student t test was used to compare the initial and end values in each group.
Results: Fifty-eight women participated in this study, divided into three groups with similar characteristics regarding age (29±5 years), parity (2±1 children), BMI (26±4 kg/m²), time since the last birth (10±2 weeks), and weight of their baby at birth (330±50 grams). Time effect and interaction were significant (p<0.001) for all abdominal parameters. Both experimental groups improved more than the control group. The Curl-up group improved more (p=0.001) than the Drawing-in + Diaphragmatic aspiration group in interrecti distance (9.3±4.2 mm versus 6.6±4.6 mm) and abdominal strength (20.4±16.4 N versus 11.4±12.8 N). The Drawing-in + Diaphragmatic aspiration group improved more (0.8±0.7 mm) than the Curl-up group (0.5±0.7 mm) in transversus abdominis thickness (p=0.001). Only the Curl-up group improved (p<0.001) rectus abdominis thickness (1.5±1.2 mm). For pelvic floor parameters, both experimental groups improved (p=0.01), except for tone, which improved (p=0.03) only in the Drawing-in + Diaphragmatic aspiration group, from 19.9±4.1 cmH2O to 22.2±4.5 cmH2O. Conclusion: Curl-up was more effective than Drawing-in + Diaphragmatic aspiration at improving abdominal function. However, these exercises are complementary. Neither degraded the pelvic floor, but Drawing-in + Diaphragmatic aspiration further improved pelvic floor function. Clinical implications: Curl-up, Drawing-in, and Diaphragmatic aspiration can be used for the management of abdominal function in postpartum women. Exercises must be chosen considering the specific needs of each woman's abdominal and pelvic floor function.
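As a rough sketch of part of the statistical pipeline described above (the repeated-measures ANOVA itself needs a dedicated package, so only the paired Student t test and the Bonferroni-adjusted threshold are computed here; all data are simulated, with only the group means and SDs echoing the abstract):

```python
import numpy as np

def paired_t(pre, post):
    """Paired Student t statistic and degrees of freedom for pre/post data."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Simulated interrecti distances (mm); the group size of 20 is hypothetical
rng = np.random.default_rng(0)
curl_pre = rng.normal(25, 4, 20)
curl_post = curl_pre - rng.normal(9.3, 4.2, 20)   # mean change from the abstract

t_curl, df = paired_t(curl_pre, curl_post)

# Bonferroni correction for the three pairwise group comparisons
alpha_adj = 0.05 / 3
print(round(alpha_adj, 4))  # → 0.0167
```

Each pairwise comparison is then declared significant only if its p-value falls below this adjusted threshold.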

Keywords: curl-up, drawing-in, diaphragmatic aspiration, hypopressive exercise, postpartum women

Procedia PDF Downloads 82
291 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations exceeding acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. The issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory's storage facility for over two decades. The recent theft of containers of the toxic substance from the Thor Chemicals warehouse, and the subsequent fire that ravaged the facility, have further escalated the urgency of removing the remaining mercury waste. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and the Inanda Dam, as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions favour microbial methylation of mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are in bloom. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sample site from the point source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. 
One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference (LSD) post hoc test to determine whether mercury contamination varies with distance from the point source of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g to allow comparison with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes, particularly water hyacinth, are expected to accumulate higher concentrations of mercury than terrestrial plants and crops.
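A minimal sketch of the planned one-way ANOVA, with the F statistic computed by hand; the site names follow the abstract, but every concentration value below is hypothetical:

```python
import numpy as np

def one_way_anova(groups):
    """F statistic and degrees of freedom for k independent groups."""
    groups = [np.asarray(g, float) for g in groups]
    k = len(groups)
    n = sum(g.size for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

# Hypothetical Hg concentrations (µg/g) at increasing distance from the source
site_hg = {
    "Thor (source)":   [4.8, 5.1, 4.6],
    "Mngceweni River": [2.2, 2.0, 2.5],
    "Umgeni River":    [1.1, 0.9, 1.3],
    "Inanda Dam":      [0.4, 0.5, 0.3],
}
F, df_b, df_w = one_way_anova(list(site_hg.values()))
```

If the overall F test is significant, the LSD post hoc step then compares each pair of sites using the pooled within-group variance (ss_within / df_w).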

Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth

Procedia PDF Downloads 223
290 Population Diversity Studies in Dendrocalamus strictus Roxb. (Nees.) Through Morphological Parameters

Authors: Anugrah Tripathi, H. S. Ginwal, Charul Kainthola

Abstract:

Bamboos are considered valuable resources with the potential to meet current economic, environmental, and social needs. Bamboo has played a key role in human livelihoods since ancient times. Distributed across diverse areas of the globe, bamboo is an important natural resource for hundreds of millions of people. In some Asian countries and in northeast India, bamboo underpins many aspects of life. India possesses the largest bamboo-bearing area in the world and great species richness, but this rich genetic resource and its diversity have dwindled in natural forests due to forest fire, over-exploitation, lack of proper management policies, and gregarious flowering behaviour. Bamboos, well known for their peculiar morphology, show variation at many scales. Among the various bamboo species, Dendrocalamus strictus, a deciduous, solid, densely tufted bamboo, is the most abundant bamboo resource in India. The species thrives across wide geographical and climatic gradients and consequently exhibits significant variation among populations of different origins for numerous morphological features. Morphological parameters are the front-line criteria for the selection and improvement of any forestry species. Diversity among eight important morphological characters of D. strictus was studied across 16 populations from wide geographical locations of India, following INBAR standards. Among the 16 populations studied, three, viz. DS06 (Gaya, Bihar), DS15 (Mirzapur, Uttar Pradesh), and DS16 (Bhogpur, Pinjore, Haryana), emerged as superior, with higher mean values for the parametric characters (clump height, number of culms per clump, clump circumference, internode diameter, and internode length) and higher sums of ranks for the non-parametric characters (straightness, disease and pest incidence, and branching pattern). All of these parameters showed ample variation and revealed significant differences among the populations. Variation in morphological characters is common in a widely distributed species and is usually evident both between and within populations. Such characters are of paramount importance for growth, biomass, and quick production gains. The present study also provides a basis for selecting populations on these morphological parameters and gives an overview of the best-performing populations for growth and biomass accumulation. Some of the studied parameters also suggest ways to standardize the selection and sustainable harvesting of clumps through simple silvicultural systems, so that they can be properly managed in homestead gardens for community use as well as by commercial growers to meet the requirements of industries and other stakeholders.

Keywords: Dendrocalamus strictus, homestead garden, gregarious flowering, stakeholders, INBAR

Procedia PDF Downloads 76
289 Effects of Pulsed Electromagnetic and Static Magnetic Fields on Musculoskeletal Low Back Pain: A Systematic Review Approach

Authors: Mohammad Javaherian, Siamak Bashardoust Tajali, Monavvar Hadizadeh

Abstract:

Objective: This systematic review was conducted to evaluate the effects of Pulsed Electromagnetic Fields (PEMF) and Static Magnetic Fields (SMF) on pain relief and functional improvement in patients with musculoskeletal Low Back Pain (LBP). Methods: Seven electronic databases were searched independently by two researchers to identify published Randomized Controlled Trials (RCTs) on the efficacy of pulsed electromagnetic, static magnetic, and therapeutic nuclear magnetic fields. The databases were Ovid Medline®, Ovid Cochrane RCTs and Reviews, PubMed, Web of Science, Cochrane Library, CINAHL, and EMBASE, covering 1968 to February 2016. The relevant keywords were selected using MeSH. After the initial search, the references of the selected studies were searched to identify additional potentially relevant manuscripts. Published RCTs in English were included if they reported changes in pain and/or functional disability following application of magnetic fields to chronic musculoskeletal low back pain. Studies of surgical patients, patients with pelvic pain, or combinations with other treatment techniques such as acupuncture or diathermy were excluded. The identified studies were critically appraised and the data were extracted independently by two raters (M.J. and S.B.T.). Disagreements were resolved through discussion between the raters. Results: In total, 1505 abstracts were found in the initial electronic search and reviewed to identify potentially relevant manuscripts. Seventeen possibly appropriate studies were retrieved in full text, of which seven were excluded after full-text review. The ten selected articles were categorized into three subgroups: PEMF (6 articles), SMF (3 articles), and therapeutic nuclear magnetic fields (tNMF) (1 article). Since only one study evaluated tNMF, it was excluded. 
In the PEMF subgroup, one study of acute LBP did not show significant positive results, while the majority of the other five studies on Chronic Low Back Pain (CLBP) indicated efficacy for pain relief and functional improvement; however, the study with the fewest sessions (6 sessions over 2 weeks) did not report a significant difference between treatment and control groups. In the SMF subgroup, two articles reported near-significant pain reduction without functional improvement, although more studies are needed. Conclusion: PEMFs with a strength of 5 to 150 G or 0.1 to 0.3 G and a frequency of 5 to 64 Hz, or a sweep of 7 Hz to 7 kHz, can be considered an effective modality for pain relief and functional improvement in patients with chronic low back pain, but there is not enough evidence to confirm their effectiveness in acute low back pain. To achieve appropriate effectiveness, it is suggested that this modality be applied for 20 minutes per day for at least 9 sessions. SMFs have not been reported to be substantially effective in decreasing pain or improving function in chronic low back pain. More studies are necessary to achieve more reliable results.

Keywords: pulsed electromagnetic field, static magnetic field, magnetotherapy, low back pain

Procedia PDF Downloads 205
288 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying, steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves significantly influence the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. Numerical simulations are performed for August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons respectively, periods for which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides of semi-diurnal frequency dominate in both observations and model simulations for November-December and March-April. In August, however, the spectral estimate peaks at the near-inertial frequency at all available depths. The observed vertical structure and magnitude of the baroclinic velocities are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes suffice to describe most of the variability of the semi-diurnal internal tides, representing 90-95% of the total variance in all seasons. 
The phase speed, group speed, and wavelength are found to be largest in the post-monsoon season compared with the other two seasons. The model simulations suggest that internal tides are generated all along the shelf-slope regions and propagate away from the generation sites in all months. The simulated energy dissipation rate indicates that dissipation peaks at the generation sites, and hence local mixing due to the internal tide is largest there. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results analysed.
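The EOF step can be illustrated with a synthetic example: a time-depth matrix containing one dominant M₂-like vertical mode plus noise is decomposed by SVD, and the squared singular values give each mode's variance fraction (all fields below are synthetic, not the MITgcm output):

```python
import numpy as np

# Synthetic baroclinic zonal velocity: time x depth matrix
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 240)                           # days
mode1 = np.sin(2 * np.pi * t / 0.5175)                # M2-like oscillation (~12.42 h period)
depth_struct = np.cos(np.pi * np.linspace(0, 1, 50))  # first vertical mode shape
u = np.outer(mode1, depth_struct) + 0.1 * rng.standard_normal((240, 50))

# EOF analysis: remove the time mean, then SVD; squared singular values
# give the variance captured by each mode
ua = u - u.mean(axis=0)
_, s, eofs = np.linalg.svd(ua, full_matrices=False)
var_frac = s**2 / (s**2).sum()
# In this synthetic case Mode 1 carries most of the variance, mirroring
# the ~70-80% reported for the M2 internal tide in the abstract.
```

The rows of `eofs` are the vertical mode shapes, and projecting `ua` onto them yields the corresponding principal-component time series.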

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 170
287 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Current treatment involves only supportive care and symptom management, while prevention relies on vaccination. Given the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority, and simulation studies of drug targets against JEV are therefore important. Towards this purpose, docking experiments of kinase inhibitors were performed against the chosen target, the NS3 helicase, a nucleoside-binding protein. Previous computational drug design efforts against JEV revealed some lead molecules through virtual screening using public domain software. To find leads more specifically and accurately, this study uses the proprietary software Schrödinger GLIDE. Druggability of the pockets in the NS3 helicase crystal structure was first calculated with SiteMap. The sites were then screened for compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed with GLIDE on ligands acquired from three databases: KinaseSARfari, KinaseKnowledgebase, and a published inhibitor set. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked against all the best-scoring ligands: ligands scoring low against the human proteins were retained for further study, while those scoring high were screened out. Seventy-three ligands were listed as the best-scoring ones after HTVS. The protein structure alignment of NS3 revealed 3 human proteins with RMSD values below 2 Å. Docking against these three proteins revealed the inhibitors that could interfere with and inhibit human proteins; those inhibitors were screened out. 
Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site; interaction analysis will help identify a strongly interacting scaffold among the hits. The experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Target modification strategies complementing the docking methodology, which can yield better lead compounds, are in progress; the improved leads can then proceed to in vitro testing.
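The final filtering logic (keep strong NS3 binders, drop those that also dock well against the similar human proteins, then threshold) can be sketched as follows; the ligand names, scores, and cutoffs are invented, and only the GLIDE convention that more negative scores are better is assumed:

```python
# Hypothetical docking results: GLIDE-style scores (more negative = better)
target_scores = {"lig01": -9.2, "lig02": -7.8, "lig03": -8.5, "lig04": -6.1}
# Best score of each ligand against the three structurally similar human proteins
human_scores = {"lig01": -4.0, "lig02": -8.9, "lig03": -5.1, "lig04": -4.2}

SCORE_CUTOFF = -7.0        # discard weak binders to NS3 helicase
SELECTIVITY_MARGIN = 2.0   # require the viral target to score this much better

hits = sorted(
    lig for lig, s in target_scores.items()
    if s <= SCORE_CUTOFF and s <= human_scores[lig] - SELECTIVITY_MARGIN
)
print(hits)  # → ['lig01', 'lig03']
```

Here lig02 is rejected despite a good target score because it docks even better against a human protein, and lig04 fails the cutoff outright.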

Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase

Procedia PDF Downloads 230
286 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD mentions in the Quran that one day, where GOD's throne is, equals 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual, or by thought, outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. 
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, a single petaFLOPS is one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from an event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual-reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings to mind how awe-inspiring the Day of Judgment will be, when one realizes it will be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 107
285 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data. 
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results provide an analysis of the energy performance gap for an existing residential case study under deep retrofit. They highlight the impact of different building features on energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy, the building envelope properties, domestic hot water usage, and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network, with a view to replicating it at larger scale and for different use cases.
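A toy illustration of the step-by-step calibration idea, assuming a deliberately simplified linear demand model with made-up coefficients and input values (the real study uses the Pleiades physics-based model): standardized inputs are replaced one at a time by measured ones, and the gap to the metered value is tracked:

```python
# Toy energy model: annual heating demand as a linear function of the most
# energy-driving inputs identified by the sensitivity analysis.
# All coefficients and values are hypothetical, for illustration only.
def annual_heating_kwh(setpoint_c, occupancy, dhw_l_day, appliance_w):
    return 120 * setpoint_c + 4 * occupancy + 1.5 * dhw_l_day + 0.8 * appliance_w

standard = {"setpoint_c": 19, "occupancy": 80, "dhw_l_day": 100, "appliance_w": 500}
measured = {"setpoint_c": 21.5, "occupancy": 62, "dhw_l_day": 140, "appliance_w": 640}
metered_kwh = annual_heating_kwh(**measured)  # stands in for utility-meter data

# Step-by-step calibration: swap one standardized input at a time for field data
inputs = dict(standard)
for key in ("setpoint_c", "occupancy", "dhw_l_day", "appliance_w"):
    inputs[key] = measured[key]
    gap = abs(annual_heating_kwh(**inputs) - metered_kwh) / metered_kwh
    print(f"after calibrating {key}: gap = {gap:.1%}")
```

By construction the gap reaches zero once every input is field data; in practice the residual gap that remains after calibration is what quantifies model inadequacy.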

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 159
284 The Model of Open Cooperativism: The Case of Open Food Network

Authors: Vangelis Papadimitropoulos

Abstract:

This paper is part of the research program “Techno-Social Innovation in the Collaborative Economy”, funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) for the years 2022-2024. The paper showcases the Open Food Network (OFN), an open-source digital platform supporting short food supply chains in local agricultural production and consumption, and outlines the research hypothesis, theoretical framework, and methodology as well as the findings and conclusions. Research hypothesis: the model of open cooperativism as a vehicle for systemic change in the agricultural sector. Theoretical framework: the research reviews the OFN as an illustrative case study of the three-zoned model of open cooperativism. The OFN is considered a paradigmatic case of the model inasmuch as it produces commons, consists of multiple stakeholders including ethical market entities, and is variously supported by local authorities across the globe, the latter prefiguring, in miniature, the role of a partner state. Methodology: the research employs Ernesto Laclau and Chantal Mouffe's discourse analysis (elements, floating signifiers, nodal points, discourses, logics of equivalence and difference) to analyse the empirical data gathered through literature review, digital ethnography, a survey, and in-depth interviews with core OFN members. Discourse analysis classifies OFN floating signifiers, nodal points, and discourses into four themes: value proposition, governance, economic policy, and legal policy. Findings: OFN floating signifiers align around the following nodal points and discourses: “digital commons”, “short food supply chains”, “sustainability”, “local”, “the elimination of intermediaries”, and “systemic change”. The research identifies a lack of common ground over what the discourse of “systemic change” signifies within the OFN's value proposition. 
The lack of a common mission may be detrimental to forming the common strategy that would perhaps be necessary to bring about systemic change in agriculture. Conclusions: Drawing on Laclau and Mouffe's discourse theory of hegemony, the research introduces a chain of equivalence by aligning discourses such as “agro-ecology”, “commons-based peer production”, “partner state”, and “ethical market entities” under the model of open cooperativism, juxtaposed against the current hegemony of neoliberalism, which articulates discourses such as “market fundamentalism”, “privatization”, “green growth”, and “the capitalist state” to promote corporatism and entrepreneurship. The research makes the case that, for the OFN to further agroecology and challenge the current hegemony of industrial agriculture, it is vital that it open up its supply chains to equivalent sectors of the economy, civil society, and politics, forming a chain of equivalence that links ethical market entities, the commons, and a partner state around the model of open cooperativism.

Keywords: sustainability, the digital commons, open cooperativism, innovation

Procedia PDF Downloads 72
283 The Effect of Intimate Partner Violence Prevention Program on Knowledge and Attitude of Victims

Authors: Marzieh Nojomi, Azadeh Mottaghi, Arghavan Haj-Sheykholeslami, Narjes Khalili, Arash Tehrani Banihashemi

Abstract:

Background and objectives: Domestic violence is a global problem with severe consequences throughout the lives of its victims. Iran's Ministry of Health launched an intimate partner violence (IPV) prevention program, integrated into primary health care services, in 2016. The present study is part of this national program's evaluation. In this section, we aimed to examine spousal abuse victims' knowledge of and attitudes towards domestic violence before and after receiving these services. Methods: To assess the knowledge and attitudes of victims, a questionnaire designed by Ahmadzad and colleagues in 2013 was used. This questionnaire includes 15 questions on knowledge of the definition, epidemiology, effects on children, outcomes, and prevention of domestic violence, and 10 questions on attitudes toward the causes, effects, and legal or protective support services of domestic violence. To assess satisfaction and the effect of the program on the prevention or reduction of spousal violence episodes, two more questions were added. Since the prevalence of domestic violence differs across the country, we chose nine areas with the highest, lowest, and moderate prevalence of IPV for the study. The link to the final electronic version of the questionnaire was sent to randomly selected public rural or urban health centers in the nine chosen areas. Since the study had to be completed in one month, we used newly identified victims as the pre-intervention group, and people who had received at least one related service from the program (such as psychiatric consultation or education about safety measures and supporting organizations) during the previous year as our post-intervention group. Results: A hundred and ninety-two newly identified IPV victims and 267 victims who had received at least one related program service during the previous year entered the study. 
All of the victims were female. The basic characteristics of the two groups, including age, education, occupation, addiction, spouse's age, spouse's addiction, duration of the current marriage, and number of children, were not statistically different. On the knowledge questions, the post-intervention group had significantly better scores in the fields of domestic violence outcomes and effects on children; in the remaining areas, the scores of both groups were similar. The only significant difference in attitude between the two groups was in the field of legal or protective support services. Of the 267 women who had received a service from the program, 91.8% were satisfied with the services, and 74% reported a decrease in the number of violent episodes. Conclusion: The national IPV prevention program integrated into primary health care services in Iran is effective in improving victims' knowledge of domestic violence outcomes and effects on children. Improving victims' attitudes and knowledge about its causes and preventive measures needs more effective interventions. The program can reduce the number of IPV episodes between spouses, and satisfaction among service users is high.

Keywords: intimate partner violence, assessment, health services, efficacy

Procedia PDF Downloads 134
282 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q#, developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli-X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified on several datasets to generate correct outputs.
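The amplitude-amplification mechanism behind this oracle-based search can be illustrated with a generic statevector sketch (in Python/NumPy rather than Q#, and with a single marked basis state standing in for the full Hamiltonian-cycle oracle):

```python
import numpy as np

def grover(n_qubits, marked, iterations):
    """Statevector simulation of Grover search with a phase-flip oracle."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))   # Hadamard on every qubit
    for _ in range(iterations):
        state[list(marked)] *= -1        # oracle: flip marked amplitudes
        state = 2 * state.mean() - state # diffusion: inversion about the mean
    return state

# Toy stand-in for the TSP oracle: basis state 5 encodes the one valid
# low-cost cycle in a 3-qubit (8-state) search space
n = 3
marked = {5}
k = int(np.floor(np.pi / 4 * np.sqrt(2 ** n)))  # optimal iteration count
probs = np.abs(grover(n, marked, k)) ** 2
print(round(probs[5], 3))  # → 0.945
```

In the paper's scheme, the marked set is instead defined implicitly by the oracle's circuit blocks (edge weight adder, uniqueness checker, comparator), and the cost threshold shrinks between Grover rounds.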

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 190