Search results for: user generated content
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11071

1171 Vertical and Horizontal Distribution Patterns of Major and Trace Elements: Surface and Subsurface Sediments of Endorheic Lake Acıgöl Basin, Denizli, Turkey

Authors: M. Budakoglu, M. Karaman

Abstract:

Although Lake Acıgöl is located in an area with limited influence from urban and industrial pollution sources, there is nevertheless a need to understand all potential lithological and anthropogenic sources of priority contaminants in this closed basin. This study discusses the vertical and horizontal distribution patterns of major and trace elements in recent lake sediments to better understand their geochemical affinity with the lithological units of the Lake Acıgöl basin. It also provides reliable background levels for the region through detailed data on the surface lithological units. The detailed results for surface, subsurface, and shallow-core sediments from this relatively unperturbed ecosystem highlight its importance as a conservation area, despite the large-scale industrial salt-production activity. While the P2O5/TiO2 versus MgO/CaO classification diagram indicates both magmatic and sedimentary origins for the lake sediment, the Log(SiO2/Al2O3) versus Log(Na2O/K2O) classification diagram expresses lithological assemblages of shale, iron-shale, wacke, and arkose. The plots of TiO2 vs. SiO2 and P2O5/TiO2 vs. MgO/CaO also support the origin of the primary magma source. The average compositions of the 20 different lithological units were used as a proxy for the geochemical background in the study area. As expected from weathered rock materials, there is a large variation in the major-element content of all analyzed lake samples. The A-CN-K and A-CNK-FM ternary diagrams were used to deduce weathering trends; according to these diagrams, surface and subsurface sediments display an intense weathering history. Most of the sediment samples plot around UCC and TTG, suggesting a low to moderate weathering history for the provenance. The sediments plot in a region clearly suggesting Al2O3, CaO, Na2O, and K2O contents similar to those of the lithological samples.
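
As a minimal illustration of the Log(SiO2/Al2O3) versus Log(Na2O/K2O) classification mentioned above (a Herron-type diagram), the two axes are plain log ratios of oxide weight percentages; the sketch below uses purely hypothetical sediment values, not the study's data:

```python
import math

def herron_ratios(sio2, al2o3, na2o, k2o):
    """Compute the two log10 ratios used in Herron-type
    sandstone/shale classification diagrams (oxides in wt%)."""
    return (math.log10(sio2 / al2o3), math.log10(na2o / k2o))

# Hypothetical lake-sediment sample (wt%), for illustration only
x, y = herron_ratios(sio2=58.0, al2o3=14.5, na2o=1.2, k2o=2.4)
print(round(x, 3), round(y, 3))
```

The resulting (x, y) point is then located in the labeled fields (shale, wacke, arkose, etc.) of the published diagram.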

Keywords: Lake Acıgöl, recent lake sediment, geochemical speciation of major and trace elements, heavy metals, Denizli, Turkey

Procedia PDF Downloads 407
1170 Microbial Inoculants to Increase the Biomass and Nutrient Uptake of Tithonia Cultivated as Hedgerow Plants to Control Erosion in Ultisols

Authors: Nurhajati Hakim, Kiki Amalia, A. Agustian, H. Hermansah, Y. Yulnafatmawita

Abstract:

Ultisols require greater amounts of fertilizer than other soils and are susceptible to erosion. Unfortunately, the price of synthetic fertilizers has risen over the years, making them unaffordable for most Indonesian farmers, while terracing to control erosion is very costly. Efforts to reduce reliance on synthetic agrochemical fertilizers and to control erosion have recently focused on Tithonia diversifolia as an alternative fertilizer and as a hedgerow plant. Generally known by its common names of tree marigold or Mexican sunflower, this plant has attracted considerable attention for its prolific production of green biomass rich in nitrogen, phosphorus, and potassium (NPK). Pot experiments have found that microbes such as mycorrhizae, Azotobacter, Azospirillum, and phosphate-solubilizing bacteria (PSB) and fungi (PSF) can play an important role in the biomass production and high nutrient uptake of this plant. This issue was pursued further in the following field investigation. The aim of this study was to determine the combination of microbes suitable for Tithonia cultivated as a hedgerow plant in Ultisols that yields higher biomass production and nutrient content and reduces soil erosion. The field experiment comprised 6 treatments in a randomized block design (RBD) with 3 replications: Tithonia rhizosphere without microbial inoculation (A); inoculated with mycorrhizae + Azotobacter + Azospirillum (B); mycorrhizae + PSF (C); mycorrhizae + PSB (D); mycorrhizae + PSB + PSF (E); and without Tithonia hedgerows (F). The microbial substrates were inoculated into the Tithonia rhizosphere in the nursery. The young Tithonia plants were then planted as hedgerows on Ultisols in the experimental field for 8 months and pruned once every 2 months. Eroded soil was collected after every rainfall.
The differences between treatments were statistically significant by the HSD test at the 95% probability level. The results showed that treatment C (mycorrhizae + PSF) was the most effective, followed by treatment D (mycorrhizae + PSB), producing higher Tithonia biomass of about 8 t dry matter 2000 m-2 ha-1 y-1 and reducing soil erosion by 71-75%.

Keywords: hedgerow tithonia, microbial inoculants, organic fertilizer, soil erosion control

Procedia PDF Downloads 354
1169 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosols, deposit on the inner surface of the piping system mainly through gravitational settling and thermophoretic deposition. The removal processes in a complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure, and flow rate. These parameters generally vary with time and must therefore be carefully monitored to predict the aerosol behavior in the piping system. The removal process depends on particle size, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol deposits on the inner surface of the piping system through various mechanisms, such as gravitational settling, Brownian diffusion, and thermophoretic deposition. To quantify deposition correctly, identifying and understanding these mechanisms is of great importance, as they are significantly affected by the flow and thermodynamic conditions. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentration and their depletion by the various removal mechanisms. The experiments were performed at two different carrier-gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system.
In addition to the built-in models for turbulence, heat transfer, and flow in the commercial CFD code (FLUENT), a population balance model (PBM) sub-model was used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping. In the sub-model, coagulation kernels were incorporated through a user-defined function (UDF). The experimental results were compared with the CFD results. It was found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, and deposition in the piping sections is low. The MMAD decreases along the length of the test assembly, showing that large particles are deposited or removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of a bend was also observed: the relative loss in mass concentration at bends is greater at the higher flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that they are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
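
Gravitational settling, one of the deposition mechanisms discussed above, can be estimated for small particles with the Stokes terminal-velocity relation v = ρp d² g / (18 μ); the sketch below uses an assumed 1 µm ZnO particle density and standard air viscosity, not values from the study, and neglects the slip correction:

```python
def stokes_settling_velocity(d_m, rho_p, mu=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in air,
    Stokes regime (Re << 1), slip correction neglected.
    d_m: diameter (m); rho_p: particle density (kg/m^3);
    mu: air dynamic viscosity (Pa s)."""
    return rho_p * d_m**2 * g / (18.0 * mu)

# Hypothetical 1 micron particle with an assumed ZnO-like
# density of ~5600 kg/m^3, for illustration only
v = stokes_settling_velocity(1e-6, 5600.0)
print(f"{v:.2e} m/s")
```

Because the velocity scales with d², this is one reason large particles drop out early in the assembly while fine particles survive to the outlet.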

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 138
1168 A Comparative Analysis of Innovation Maturity Models: Towards the Development of a Technology Management Maturity Model

Authors: Nikolett Deutsch, Éva Pintér, Péter Bagó, Miklós Hetényi

Abstract:

Strategic technology management has emerged and evolved in parallel with strategic management paradigms. It focuses on the opportunity for organizations, operating mainly in technology-intensive industries, to explore and exploit the technological capabilities upon which competitive advantage can be built. As strategic technology management spans multiple functions within an organization, requires broad and diversified knowledge, and must be developed and implemented in line with business objectives to enable a firm’s profitability and growth, excellence in it provides unique opportunities for organizations to build a successful future. Accordingly, a framework supporting the evaluation of management’s technological readiness level can significantly contribute to organizational competitiveness through a better understanding of strategic-level capabilities and operational deficiencies. In the last decade, several innovation maturity assessment models have appeared and become established management tools that can serve as references for future practical approaches used by corporate leaders, strategists, and technology managers to understand and manage technological capabilities and capacities. The aim of this paper is to provide a comprehensive review of state-of-the-art innovation maturity frameworks, to investigate the critical lessons learned from their application, to identify the similarities and differences among the models, and to identify the main aspects and elements valid for the field and the critical functions of technology management. To this end, a systematic literature review was carried out covering the papers and articles published in highly ranked international journals on the 27 most widely known innovation maturity models, drawn from four relevant digital sources.
Key findings suggest that despite the diversity of the given models, there is still room for improvement regarding the common understanding of innovation typologies, the full coverage of innovation capabilities, and the generalist approach to the validation and practical applicability of the structure and content of the models. Furthermore, the paper proposes an initial structure by considering the maturity assessment of the technological capacities and capabilities - i.e., technology identification, technology selection, technology acquisition, technology exploitation, and technology protection - covered by strategic technology management.

Keywords: innovation capabilities, innovation maturity models, technology audit, technology management, technology management maturity models

Procedia PDF Downloads 56
1167 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. This study introduces a methodology that leverages cloud-based platforms such as AWS Live Streaming and Artificial Intelligence (AI) to detect CHD symptoms early and prevent their progression in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing the AI algorithms to be refined and trained under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of the methodology lies in simulating the progression of CHD symptoms, which provides a dynamic training environment for the AI models, enhancing their predictive accuracy and robustness. Model development: a machine learning model was trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and findings: The deployment of the model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models.
The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI for the early detection of CHD. Societal implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
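
The accuracy, precision, and recall figures quoted above follow from standard confusion-matrix definitions; a minimal sketch with hypothetical validation counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,     # correct / all predictions
        "precision": tp / (tp + fp),       # flagged cases that are real
        "recall": tp / (tp + fn),          # real cases that were flagged
    }

# Hypothetical validation-set counts, for illustration only
m = classification_metrics(tp=91, fp=11, fn=9, tn=89)
print(m)
```

With these made-up counts the sketch yields 90% accuracy, ~89% precision, and 91% recall, showing how the three metrics trade off against one another.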

Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 60
1166 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods, which cause intense, irreversible devastation to crops and livelihoods. With the increased incidence of floods and related catastrophes, an Early Warning System for Flood Prediction (EWS-FP) and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model that solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a trade-off between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides the computational means to overcome this issue in constructing national-level or basin-level flash flood warning systems with high resolution for local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, capable of simulating rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, showing good scalability and performance enhancement.
The simulated flood inundation extent and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending on the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
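
The reported scaling (8 hrs to 3 hrs when going from 45 to 135 CPU nodes) corresponds to the usual relative-speedup and parallel-efficiency measures for a fixed-size run; a minimal sketch:

```python
def scaling(t_base, t_new, n_base, n_new):
    """Relative speedup and parallel efficiency when scaling a
    fixed-size simulation from n_base to n_new nodes (strong scaling)."""
    speedup = t_base / t_new                     # how much faster
    efficiency = speedup / (n_new / n_base)      # speedup per added capacity
    return speedup, efficiency

# Figures quoted in the abstract: 8 h -> 3 h on 45 -> 135 nodes
s, e = scaling(t_base=8.0, t_new=3.0, n_base=45, n_new=135)
print(f"speedup {s:.2f}x, efficiency {e:.0%}")
```

Tripling the node count here yields a ~2.67x speedup, i.e. roughly 89% parallel efficiency, which is consistent with the abstract's claim of good scalability.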

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 88
1165 The Effect of 'Teachers Teaching Teachers' Professional Development Course on Teachers’ Achievement and Classroom Practices

Authors: Nuri Balta, Ali Eryilmaz

Abstract:

High-quality teachers are the key to improving student learning. Without professional development of teachers, the improvement of student success is difficult and incomplete. This study offers an in-service training course model for the professional development (PD) of teachers entitled "teachers teaching teachers" (TTT). The basic premise of the PD program designed for this study was primarily to increase the subject-matter knowledge of high school physics teachers. The TTT course (three-hour-long workshops) lasted for seven weeks, with seventeen teachers taking part in the TTT program to varying extents. In this study, the effect of the TTT program on teachers’ knowledge improvement was examined through the modern physics unit (MPU). The participating teachers taught the unit to one of their grade-ten classes earlier and taught another, equivalent class two months later. They were observed in their classes both before and after the TTT program. The teachers were divided into a placebo group and a treatment group. The Solomon four-group design attempts to eliminate the possible effect of a pre-test; in this study, a similar design was used to eliminate the effect of the earlier teaching. The placebo-group teachers taught both of their classes as usual, while the treatment-group teachers attended the TTT program between the two teachings. The class observation results showed that the TTT program increased teachers’ knowledge and skills in teaching the MPU. Further, participating in the TTT program led teachers to teach the MPU in accordance with the requirements of the curriculum. To detect any change in the participating teachers’ success, an achievement test was administered to them. A large effect size (dCohen = .93) was calculated for the effect of the TTT program on the treatment-group teachers’ achievement.
The results suggest that staff developers should consider including topics attractive to teachers in in-service training programs (a) to help teachers practice teaching the new topics and (b) to increase the participation rate. During the TTT courses, it was observed that teachers could not conclude some discussions or explain some concepts. It is now clear that teachers need support, especially when discussing counterintuitive concepts such as those of modern physics. For this reason, it is recommended that content-focused PD programs be conducted under the guidance of a scholarly coach.
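
The reported effect size (dCohen = .93) is a standardized mean difference: the gap between group means divided by the pooled standard deviation. A minimal sketch with hypothetical achievement scores (not the study's data):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical post- and pre-program achievement scores, illustration only
post = [78, 82, 85, 74, 80]
pre = [75, 79, 80, 72, 77]
print(round(cohens_d(post, pre), 2))
```

By the usual convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or above large, which is why the study's .93 counts as a large effect.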

Keywords: high school physics, in-service training course, modern physics unit, teacher professional development

Procedia PDF Downloads 192
1164 The Rise and Effects of Social Movement on Ethnic Relations in Malaysia: The Bersih Movement as a Case Study

Authors: Nur Rafeeda Daut

Abstract:

The significance of this paper is to provide insight into the role of social movements in building stronger ethnic relations in Malaysia. In particular, it focuses on how the BERSIH movement has been able to bring together the different ethnic groups in Malaysia to resist a political administration seen as manipulating the electoral process and suppressing Malaysians’ basic freedom of expression. Attention is given to how and why this group emerged and to its mobilisation strategies. Malaysia, a multi-ethnic and multi-religious society, gained its independence from the British in 1957. Like many other new nations, it faces the challenges of nation building and governance. From economic issues to racial and religious tension, Malaysia is experiencing high levels of corruption and income disparity among the different ethnic groups. The political parties in Malaysia are also divided along ethnic lines. BERSIH, which translates as ‘clean’, is a movement that seeks to reform the current electoral system in Malaysia to ensure equality, justice, and free and fair elections. It was originally formed in 2007 as a joint committee comprising leaders from political parties, civil society groups, and NGOs. In April 2010, the coalition developed into an entirely civil-society movement unaffiliated with any political party. BERSIH claims that the electoral roll in Malaysia has been marred by fraud and other irregularities. In 2015, the BERSIH movement organised its biggest rally in Malaysia, alongside 38 other rallies held internationally. The supporters of BERSIH who participated in the demonstration came from all the different ethnic groups in Malaysia. In this paper, two social movement theories are used, resource mobilization theory and political opportunity structure, to explain the emergence and mobilization of the BERSIH movement in Malaysia.
Based on these two theories, corruption, which is believed to have contributed to the income disparity among Malaysians, has driven the development of this movement. The rise of re-Islamisation values propagated by certain groups in Malaysia and the shift in political leadership have also created political opportunities for this movement to emerge. In line with political opportunity structure theory, the BERSIH movement will continue to create opportunities for the empowerment of civil society and the unity of ethnic relations in Malaysia. A comparison is made of the degree of ethnic unity in the country before and after BERSIH was formed. This includes analysing the level of re-Islamisation values and the level of corruption in relation to economic income under the premiership of the former Prime Minister Mahathir and the present Prime Minister Najib Razak. The country has never seen an uprising like BERSIH, in which ethnic groups that have for years been divided by ethnically based political parties and economic disparity joined together with a common goal of equality and fair elections. As such, the BERSIH movement is a unique case that illustrates the changing political landscape, ethnic relations, and civil society in Malaysia.

Keywords: ethnic relations, Malaysia, political opportunity structure, resource mobilization theory and social movement

Procedia PDF Downloads 342
1163 Mobile Marketing Adoption in Pakistan

Authors: Manzoor Ahmad

Abstract:

The rapid advancement of mobile technology has transformed the way businesses engage with consumers, making mobile marketing a crucial strategy for organizations worldwide. This paper presents a comprehensive study on the adoption of mobile marketing in Pakistan, aiming to provide valuable insights into the current landscape, challenges, and opportunities in this emerging market. To achieve this objective, a mixed-methods approach was employed, combining quantitative surveys and qualitative interviews with industry experts, marketers, and consumers. The study encompassed a diverse range of sectors, including retail, telecommunications, banking, and e-commerce, ensuring a comprehensive understanding of mobile marketing practices across different industries. The findings indicate that mobile marketing has gained significant traction in Pakistan, with a growing number of organizations recognizing its potential for reaching and engaging with consumers effectively. Factors such as increasing smartphone penetration, affordable data plans, and the rise of social media usage have contributed to the widespread adoption of mobile marketing strategies. However, several challenges and barriers to mobile marketing adoption were identified. These include issues related to data privacy and security, limited digital literacy among consumers, inadequate infrastructure, and cultural considerations. Additionally, the study highlights the need for tailored and localized mobile marketing strategies to address the diverse cultural and linguistic landscape of Pakistan. Based on the insights gained from the study, practical recommendations are provided to support organizations in optimizing their mobile marketing efforts in Pakistan. These recommendations encompass areas such as consumer targeting, content localization, mobile app development, personalized messaging, and measurement of mobile marketing effectiveness. 
This research contributes to the existing literature on mobile marketing adoption in developing countries and specifically sheds light on the unique dynamics of the Pakistani market. It serves as a valuable resource for marketers, practitioners, and policymakers seeking to leverage mobile marketing strategies in Pakistan, ultimately fostering the growth and success of businesses operating in this region.

Keywords: mobile marketing, digital marketing, mobile advertising, adoption of mobile marketing

Procedia PDF Downloads 104
1162 Phytochemicals Quantification, Trace Metal Accumulation Pattern and Contamination Risk Assessment of Different Varieties of Tomatoes Cultivated on Municipal Waste Sludge Treated Soil

Authors: Mathodzi Nditsheni, Olawole Emmanuel Aina, Joshua Oluwole Olowoyo

Abstract:

The ever-increasing world population is putting extreme pressure on already limited agricultural resources for food production. Different soil enhancers have been introduced by farmers to meet the food demand of the growing population; one of these is municipal waste sludge. This research investigated the differences in the concentrations of trace metals and the levels of phytochemicals in four different tomato varieties cultivated on soil treated with municipal waste sludge in Pretoria, South Africa. Fruits were harvested at maturity and analyzed for trace metal and phytochemical contents using Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) and High-Performance Liquid Chromatography (HPLC), respectively. A one-way analysis of variance (ANOVA) was used to determine whether the differences in the concentrations of trace metals and phytochemicals among the tomato varieties were significant. In this study, the Rodade tomato bioaccumulated the highest concentrations of Mn, Cr, Cu, and Ni; Roma bioaccumulated the highest concentrations of Cd, Fe, and Pb; and Heinz bioaccumulated the highest concentrations of As and Zn. The cherry tomato, on the other hand, recorded the lowest concentrations of most metals (Cd, Cr, Cu, Mn, Ni, Pb, and Zn). The results further showed that the phenolic and flavonoid contents were higher in Solanum lycopersicum fruit grown in soils treated with municipal waste sludge. The study also showed an inverse relationship between the levels of trace metals and phytochemicals. The calculated contamination factor values of trace metals such as Cr, Cu, Pb, and Zn were above the safe value of 1, indicating that the tomato fruits may be unsafe for human consumption; the contamination factor values for the remaining trace metals were well below 1.
For both the control and treatment groups, the tomato varieties used in this study bioaccumulated toxic trace metals in their fruits, and some of the values obtained were higher than the acceptable limits. This implies that these varieties bioaccumulate toxic trace metals from the soil, and care should therefore be taken when they are cultivated in, or harvested from, polluted areas.
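
The contamination factor used in this abstract is conventionally the ratio of a metal's measured concentration to its background level, with values above 1 flagged as enriched; a minimal sketch with hypothetical concentrations (not the study's data):

```python
def contamination_factors(sample, background):
    """CF = C_sample / C_background per metal; CF > 1 suggests
    enrichment above the local background level."""
    return {m: sample[m] / background[m] for m in sample}

# Hypothetical fruit and background concentrations (mg/kg),
# for illustration only
sample = {"Cr": 2.4, "Cu": 6.0, "Pb": 0.9, "Zn": 30.0}
background = {"Cr": 1.2, "Cu": 4.0, "Pb": 1.5, "Zn": 20.0}
cf = contamination_factors(sample, background)
unsafe = [m for m, v in cf.items() if v > 1]
print(cf, unsafe)
```

In this made-up case Cr, Cu, and Zn exceed the safe value of 1 while Pb does not, mirroring the kind of per-metal screening reported in the study.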

Keywords: trace metals, flavonoids, phenolics, waste sludge, tomato, contamination factors

Procedia PDF Downloads 70
1161 Enhanced Performance of Supercapacitor Based on Boric Acid Doped Polyvinyl Alcohol-H₂SO₄ Gel Polymer Electrolyte System

Authors: Hamide Aydin, Banu Karaman, Ayhan Bozkurt, Umran Kurtan

Abstract:

Recently, proton-conducting gel polymer electrolytes (GPEs) have drawn much attention for supercapacitor applications due to their physical and electrochemical characteristics and their stability at low temperatures. In this research, a PVA-H2SO4-H3BO3 GPE was used in an electric double-layer capacitor (EDLC), with electrospun free-standing carbon nanofibers as electrodes. The PVA-H2SO4-H3BO3 GPE serves as both the separator and the electrolyte in the supercapacitor. Symmetric Swagelok cells including the GPEs were assembled in a two-electrode arrangement, and their electrochemical properties were investigated. Electrochemical performance studies demonstrated that the PVA-H2SO4-H3BO3 GPE had a maximum specific capacitance (Cs) of 134 F g-1 and showed excellent capacitance retention (100%) after 1000 charge/discharge cycles. Furthermore, the PVA-H2SO4-H3BO3 GPE yielded an energy density of 67 Wh kg-1 with a corresponding power density of 1000 W kg-1 at a current density of 1 A g-1. The PVA-H2SO4-based polymer electrolyte was produced according to the following procedure: first, 1 g of commercial PVA was dissolved in distilled water at 90°C and stirred until a transparent solution was obtained. This was followed by the addition of diluted H2SO4 (1 g of H2SO4 in distilled water) to the solution to obtain PVA-H2SO4. The PVA-H2SO4-H3BO3 polymer electrolyte was produced by dissolving H3BO3 in hot distilled water and then adding it to the PVA-H2SO4 solution. The mole fraction was set to ¼ of the PVA repeating unit. After stirring for 2 h at room temperature, the gel polymer electrolytes were obtained. The final electrolytes for supercapacitor testing contained 20% water by weight. Several blending combinations of PVA/H2SO4 and H3BO3 were studied to find the optimal combination in terms of conductivity as well as electrolyte stability.
As the amount of boric acid in the matrix increased, excess sulfuric acid was excluded due to cross-linking, especially at lower solvent contents, which reduced the proton conductivity. Therefore, the mole fraction of H3BO3 was chosen as ¼ of the PVA repeating unit. Within these optimized limits, the polymer electrolytes showed better conductivities as well as stability.
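
Specific capacitance, energy density, and power density of the kind quoted above are conventionally derived from a galvanostatic discharge step via Cs = I·Δt/(m·ΔV), E = Cs·ΔV²/(2·3.6) in Wh/kg, and P = 3600·E/Δt in W/kg; the sketch below applies these standard single-electrode relations to hypothetical discharge values, not the cell data from the study:

```python
def supercap_metrics(current_a, dt_s, mass_g, dv_v):
    """Specific capacitance (F/g), energy density (Wh/kg), and
    power density (W/kg) from one galvanostatic discharge step.
    current_a: discharge current (A); dt_s: discharge time (s);
    mass_g: active mass (g); dv_v: voltage window (V)."""
    cs = current_a * dt_s / (mass_g * dv_v)   # F/g
    energy = cs * dv_v**2 / (2 * 3.6)         # Wh/kg (J/g -> Wh/kg)
    power = energy * 3600 / dt_s              # W/kg
    return cs, energy, power

# Hypothetical discharge: 1 mA for 134 s, 1 mg electrode, 1 V window
cs, en, pw = supercap_metrics(1e-3, 134.0, 1e-3, 1.0)
print(f"{cs:.0f} F/g, {en:.1f} Wh/kg, {pw:.0f} W/kg")
```

Note that reported cell-level figures depend on whether the normalization is per electrode or per device, so published values computed with different conventions are not directly comparable.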

Keywords: electrical double layer capacitor, energy density, gel polymer electrolyte, ultracapacitor

Procedia PDF Downloads 221
1160 Military Families’ Attachment to the Royal Guards Community of Dusit District, Bangkok Metropolitan

Authors: Kanikanun Photchong, Phusit Phukamchanoad

Abstract:

The objective of this research is to study the people’s level of participation in community activities, their satisfaction towards the community, their attachment to the community, the factors that influence that attachment, and the characteristics of the relationships of military families in the Royal Guards community of Dusit District. The method used was non-probability sampling, with quota sampling by age; respondents were 18 years or older, with one respondent per family. Questionnaires were completed by 287 people. Snowball sampling was also used for interviews with people in the community, starting from the Royal Guards community’s leader and continuing through 20 of the community’s well-respected persons. The data were analyzed using descriptive statistics, such as the arithmetic mean and standard deviation, as well as inferential statistics, such as the independent-samples t-test, one-way ANOVA (F-test), and the chi-square test. Descriptive analysis according to the structure of the interview content was also used. The results show that the participation of the Royal Guards community’s population in various activities is at a medium level, with the highest average participation during Mother’s Day and Father’s Day activities. The people’s general satisfaction with the premises of the Royal Guards community is at the highest level. The people were most satisfied with transportation within the community and with contact with people outside the premises; access to the community is convenient and there are various entrances. The attachment of the people to the Royal Guards community, both overall and in each category, is at a high level, with the feeling that the community is their home rated the highest on average.
Factors that influence the people’s attachment to the Royal Guards community are age, status, profession, income, length of stay in the community, membership of social groups, having neighbors they feel close and familiar with, and the benefits they receive from the community. In addition, it was found that participation in activities has a high positive relationship with the people’s attachment to the community, and satisfaction with the community has a very high positive relationship with that attachment. A characteristic of the military families’ attachment is that they live in large houses that everyone, from the head of the family to all its members, has to protect and care for; therefore, they all love the community they live in. Participation in community activities and a high level of satisfaction with the community’s premises enable the people to become more attached to the community. The people feel that everyone in the community is a close neighbor, as if they were one big family.
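The group comparisons mentioned above (for instance, the independent-samples t-test on attachment scores between two participant groups) can be sketched in pure Python; the function and the sample values are illustrative, not the survey data:

```python
import math

def independent_t(a, b):
    # Student's independent-samples t statistic (equal-variance, pooled form)
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

In practice the study would compare the statistic against a t distribution with na + nb - 2 degrees of freedom to obtain a p-value.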

Keywords: community attachment, community satisfaction, royal guards community, activities of the community

Procedia PDF Downloads 365
1159 Procedure for Monitoring the Process of Behavior of Thermal Cracking in Concrete Gravity Dams: A Case Study

Authors: Adriana de Paula Lacerda Santos, Bruna Godke, Mauro Lacerda Santos Filho

Abstract:

Several dams around the world have collapsed, causing environmental, social, and economic damage. The concern to avoid future disasters has stimulated the creation of a great number of laws and rules in many countries. In Brazil, Law 12.334/2010 established the National Policy on Dam Safety. Overall, this policy requires dam owners to invest in the maintenance of their structures and to improve their monitoring systems in order to provide faster and more straightforward responses when risks increase. As monitoring tools, visual inspections provide a comprehensive assessment of a structure’s performance, while auscultation instrumentation adds specific information on operational or behavioral changes, providing an alarm when a performance indicator exceeds acceptable limits. These limits can be set using statistical methods based on the relationship between instrument measurements and other variables, such as reservoir level, time of year, or other instruments’ measurements. Besides the design parameters (uplift of the foundation, displacements, etc.), dam instrumentation can also be used to monitor the behavior of defects and damage manifestations. Specifically in concrete gravity dams, one of the main causes of cracking is the concrete volumetric change generated by thermal phenomena associated with the construction process of these structures. Based on this, the goal of this research is to propose a process for monitoring the behavior of thermal cracking in concrete gravity dams, through instrumentation data analysis and the establishment of control values. Block B-11 of the Governor José Richa Dam and Power Plant was selected as a case study; it presents a cracking process that was identified even before the reservoir was filled in August 1998, and crack meters and surface thermometers were installed for its monitoring.
Although these instruments were installed in May 2004, the research was restricted to the last 4.5 years (June 2010 to November 2014), when all the instruments were calibrated and producing reliable data. The adopted method is based on simple linear correlation procedures to understand the interactions among the instruments’ time series and to verify the response times between them. Scatter plots were drafted from the best correlations, which supported the definition of the control limit values. Among the conclusions, it is shown that there is a strong or very strong correlation between ambient temperature and the crack meter and flow meter measurements. Based on the results of the statistical analysis, it was possible to develop a tool for monitoring the behavior of the case study’s cracks, thus fulfilling the goal of the research: a proposed process for monitoring the behavior of thermal cracking in concrete gravity dams.
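The simple linear correlation step can be sketched as follows. The ±k residual standard deviation band around a regression prediction is one common way to set control limits, not necessarily the exact rule used in the study, and the data here are illustrative:

```python
import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def control_limits(residual_sd, predicted, k=2.0):
    # control band of +/- k residual SDs around the regression prediction,
    # e.g. predicted crack opening as a function of ambient temperature
    return predicted - k * residual_sd, predicted + k * residual_sd
```

A crack-meter reading falling outside the band returned by control_limits would then raise an alarm for engineering review.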

Keywords: concrete gravity dam, dams safety, instrumentation, simple linear correlation

Procedia PDF Downloads 288
1158 Impact of Pharmacist-Led Care on Glycaemic Control in Patients with Type 2 Diabetes: A Randomised-Controlled Trial

Authors: Emmanuel A. David, Rebecca O. Soremekun, Roseline I. Aderemi-Williams

Abstract:

Background: The complexities involved in the management of diabetes mellitus require multi-dimensional, multi-professional, collaborative, and continuous care by health care providers, and substantial self-care by patients, in order to achieve desired treatment outcomes. The effect of pharmacists’ care in the management of diabetes in resource-endowed nations is well documented in the literature, but randomised-controlled assessment of the impact of pharmacist-led care among patients with diabetes in resource-limited settings such as Nigeria and other sub-Saharan African countries is scarce. Objective: To evaluate the impact of pharmacist-led care on glycaemic control in patients with uncontrolled type 2 diabetes, using a randomised-controlled study design. Methods: This study employed a prospective randomised-controlled design to assess the impact of pharmacist-led care on the glycaemic control of 108 poorly controlled type 2 diabetes patients. A total of 200 clinically diagnosed type 2 diabetes patients were purposively selected using fasting blood glucose ≥ 7 mmol/L and tested for long-term glucose control using glycated haemoglobin. One hundred and eight (108) patients with glycated haemoglobin ≥ 7% were recruited for the study and assigned unique identification numbers. They were then randomly allocated to intervention and usual-care groups using computer-generated random numbers, with 54 subjects in each group. Patients in the intervention group received a pharmacist-structured intervention, including education, periodic phone calls, adherence counselling, referral, and 6 months of follow-up, while patients in the usual-care group only kept clinic appointments with their physicians. Data collected at baseline and six months included socio-demographic characteristics, fasting blood glucose, glycated haemoglobin, blood pressure, and lipid profile.
With an intention-to-treat analysis, the Mann-Whitney U test was used to compare the median change from baseline in the primary outcome (glycated haemoglobin) and secondary outcome measures; the effect size was computed, and the proportions of patients reaching target laboratory parameters were compared between the arms. Results: All 108 enrolled participants completed the study, 54 in each arm. Mean age was 51 ± 11.75 years, and the majority were female (68.5%). Intervention patients had a significant reduction in glycated haemoglobin (-0.75%; P<0.001; η2 = 0.144), with a greater proportion attaining the target laboratory parameter after 6 months of care compared to the usual-care group (glycated haemoglobin: 42.6% vs 20.8%; P=0.02). Furthermore, patients who received pharmacist-led care were about 3 times more likely to have better glucose control (AOR 2.718, 95% CI: 1.143-6.461) than the usual-care group. Conclusion: Pharmacist-led care significantly improved glucose control in patients with uncontrolled type 2 diabetes mellitus and should be integrated into the routine management of diabetes patients, especially in resource-limited settings.
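The nonparametric comparison and the responder-proportion comparison described above can be sketched in pure Python using the textbook pairwise definition of the U statistic; the trial's data are not reproduced here, and the target threshold in the example is illustrative:

```python
def mann_whitney_u(a, b):
    # U = number of pairs (x from a, y from b) with x > y; ties count 0.5
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def responder_rate(values, target):
    # proportion of patients below a target (e.g. HbA1c < 7%)
    return sum(1 for v in values if v < target) / len(values)
```

A normal approximation of U (or an exact table for small samples) would then yield the reported p-values.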

Keywords: glycaemic control, pharmacist-led care, randomised-controlled trial, type 2 diabetes mellitus

Procedia PDF Downloads 117
1157 Hansen Solubility Parameters, Quality by Design Tool for Developing Green Nanoemulsion to Eliminate Sulfamethoxazole from Contaminated Water

Authors: Afzal Hussain, Mohammad A. Altamimi, Syed Sarim Imam, Mudassar Shahid, Osamah Abdulrahman Alnemer

Abstract:

The widespread use of sulfamethoxazole (SUX) has become a global threat to human health due to water contamination from diverse sources. This study addressed the combined application of Hansen solubility parameters (HSPiP software) and the Quality by Design tool for developing various green nanoemulsions. The HSPiP program assisted in screening suitable excipients based on Hansen solubility parameters and experimental solubility data. Various green nanoemulsions were prepared and characterized for globular size, size distribution, zeta potential, and removal efficiency. Design Expert (DoE) software further helped to identify the critical factors with a direct impact on percent removal efficiency, size, and viscosity. Morphology was visualized under transmission electron microscopy (TEM). Finally, the treated water was analysed to confirm the absence of the tested drug, employing ICP-OES (inductively coupled plasma optical emission spectroscopy) and HPLC (high-performance liquid chromatography). Results showed that HSPiP predicted a biocompatible lipid, a safe surfactant (lecithin), and propylene glycol (PG). The experimental solubility of the drug in the predicted excipients was quite convincing and vindicated the predictions. Various green nanoemulsions were fabricated and evaluated for in vitro findings. Globular size (100-300 nm), PDI (0.1-0.5), zeta potential (~25 mV), and removal efficiency (%RE = 70-98%) were found to be in an acceptable range for deciding the input factors and levels in the DoE. The experimental design tool helped identify the most critical variables controlling %RE and the optimized content of the nanoemulsion under the set constraints. Dispersion time was varied from 5-30 min. Finally, the ICP-OES and HPLC techniques corroborated the absence of SUX in the treated water. Thus, the strategy is simple, economic, selective, and efficient.
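Excipient screening with Hansen parameters rests on the standard HSP distance Ra and the relative energy difference (RED = Ra/R0), where a RED below 1 suggests the candidate lies inside the solubility sphere. A minimal sketch of those formulas (the numerical values in the example are illustrative, not the study's parameters):

```python
import math

def hansen_distance(hsp1, hsp2):
    # Ra between two (dD, dP, dH) triples in MPa^0.5; note the factor 4 on
    # the dispersion term, as in the standard Hansen distance formula
    return math.sqrt(4 * (hsp1[0] - hsp2[0]) ** 2
                     + (hsp1[1] - hsp2[1]) ** 2
                     + (hsp1[2] - hsp2[2]) ** 2)

def red_number(ra, r0):
    # RED < 1: inside the solubility sphere of radius R0 (good match)
    return ra / r0
```

Screening then amounts to ranking candidate excipients by RED against the drug's fitted sphere.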

Keywords: quality by design, sulfamethoxazole, green nanoemulsion, water treatment, icp-oes, hansen program (hspip software)

Procedia PDF Downloads 78
1156 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics to explain the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how they are impacted by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. Therefore, it is only natural that most of these models are developed to reproduce a steady state characterized by agents carrying out their economic activities in a mono-centric city in which most unskilled jobs are created in the suburbs, far from the Blacks who dwell in the city centre, generating a high unemployment rate for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level: we link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, while the other two host firms looking for labour.
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting at all. This arbitration takes account of the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services are more likely to prefer activity to unemployment and to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than firms located in the less accessible area. Currently, we are working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply occurs.
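The arbitration described above can be sketched as a comparison of commute-adjusted utilities. The linear functional form, the value-of-time parameter, and the numbers below are illustrative assumptions, not the paper's actual specification:

```python
def indirect_utility(wage, fare, minutes, vot):
    # net utility of accepting a job: wage minus monetary commute cost
    # minus the time cost (value of time, vot, per minute of travel)
    return wage - fare - vot * minutes

def labour_supply_choice(jobs, b, vot):
    # jobs: {area: (wage, fare, commute_minutes)}; b: utility of unemployment.
    # Returns the chosen working area, or None if unemployment dominates.
    utils = {a: indirect_utility(w, f, m, vot) for a, (w, f, m) in jobs.items()}
    best_area = max(utils, key=utils.get)
    return best_area if utils[best_area] > b else None
```

Improving transport to an area (lower fare or minutes) raises its indirect utility, which is the channel through which infrastructure affects labour supply in the model.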

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 289
1155 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, with simplification as one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and POS-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, neither of which is available from traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. As for the data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora.
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
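Word-form entropy as used here is plain Shannon entropy over the distribution of token types; POS-form entropy is the same computation applied to POS tags instead of word forms. A minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    # H = -sum p(t) * log2 p(t) over the observed token types;
    # higher H means more diverse, more evenly spread choices
    counts = Counter(tokens)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Running this over word tokens versus over their POS tags yields the two indicators compared across the three corpora; lower values in the constrained corpora would support the simplification hypothesis.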

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

Procedia PDF Downloads 90
1154 Numerical Investigation of Turbulent Flow Control by Suction and Injection on a Subsonic NACA23012 Airfoil by Proper Orthogonal Decomposition Analysis and Perturbed Reynolds Averaged Navier‐Stokes Equations

Authors: Azam Zare

Abstract:

Separation flow control for performance enhancement over airfoils at high incidence angles has become an increasingly important topic. This work details the characteristics of an efficient feedback control of the turbulent subsonic flow over a NACA23012 airfoil using a forced reduced-order model based on proper orthogonal decomposition (POD)/Galerkin projection and a perturbation method applied to the compressible Reynolds-averaged Navier-Stokes equations. The forced reduced-order model is used in the optimal control of the turbulent separated flow over a NACA23012 airfoil at a Mach number of 0.2, a Reynolds number of 5×10⁶, and a high incidence angle of 24°, using blowing/suction control jets. The Spalart-Allmaras turbulence model is implemented for high-Reynolds-number calculations. The main shortcoming of POD/Galerkin projection of the flow equations for control purposes is that the blowing/suction control jet velocity does not appear explicitly in the resulting reduced-order model. Combining the perturbation method with POD/Galerkin projection of the flow equations yields a forced reduced-order model that can predict the time-varying influence of the blowing/suction control jet velocity. An optimal control theory based on the forced reduced-order system is used to design a control law for the nonlinear reduced-order model, which attempts to minimize the vorticity content in the turbulent flow field over the NACA23012 airfoil. Numerical simulations were performed to help understand the behavior of a controlled suction jet at 12% to 18% chord from the leading edge and of a pair of blowing/suction jets at 15% to 18% and 24% to 30% chord from the leading edge, respectively. Analysis of streamline profiles indicates that the blowing/suction jets are efficient in removing separation bubbles and increasing the lift coefficient by up to 22%, while the perturbation method can predict the flow field in an accurate manner.
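The POD step itself reduces to a singular value decomposition of the mean-subtracted snapshot matrix, with the left singular vectors as the POD modes and the squared singular values ranking modal energy. A generic sketch on synthetic data (not the NACA23012 flow field):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic snapshot matrix: rows = grid points, columns = time snapshots
snapshots = rng.standard_normal((200, 40))

# subtract the mean flow so the modes describe fluctuations only
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# POD modes = left singular vectors; singular values rank modal energy
modes, sing, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sing**2 / np.sum(sing**2)   # normalised energy per mode
```

A Galerkin projection of the governing equations onto the leading columns of `modes` then produces the reduced-order ODE system; the paper's contribution is the perturbation step that makes the jet velocity appear explicitly in that system.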

Keywords: flow control, POD, Galerkin projection, separation

Procedia PDF Downloads 147
1153 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, in Katsos' experiments, although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, two questions arise: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' experiment. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room of our lab. The reading times of the target parts, i.e., the words containing scalar implicatures, will be recorded.
We presume that in the lexical-scale group, a standardized pragmatic mental context would help generate the scalar implicature as soon as the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the ad hoc scale group, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not does not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we might be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As for the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scalar members in mental context, and thus the lexical-semantic associations of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 317
1152 Using Group Concept Mapping to Identify a Pharmacy-Based Trigger Tool to Detect Adverse Drug Events

Authors: Rodchares Hanrinth, Theerapong Srisil, Peeraya Sriphong, Pawich Paktipat

Abstract:

The trigger tool is a low-cost, low-tech method to detect adverse events through clues called triggers. The Institute for Healthcare Improvement (IHI) has developed the Global Trigger Tool for measuring and preventing adverse events. However, this tool is not specific to adverse drug events (ADEs); a pharmacy-based trigger tool is needed to detect them. Group concept mapping is an effective method for conceptualizing various ideas from diverse stakeholders, and this technique was used to identify pharmacy-based triggers for detecting ADEs. The aim of this study was to involve pharmacists in conceptualizing, developing, and prioritizing a feasible trigger tool to detect adverse drug events in a provincial hospital in the northeastern part of Thailand. The study was conducted during the 6-month period between April 1 and September 30, 2017. Participants were 20 pharmacists (17 hospital pharmacists and 3 pharmacy lecturers) engaging in three concept mapping workshops. In these meetings, the concept mapping technique created by Trochim, a highly structured qualitative group technique for generating and sharing ideas, was used to produce and structure participants' views on which triggers had the potential to detect ADEs. During the workshops, the participants (n = 20) were asked to individually rate the feasibility and potentiality of each trigger and to group the triggers into relevant categories to enable multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the trigger list, cluster list, point map, point rating map, cluster map, and cluster rating map. The three workshops together resulted in 21 different triggers that were structured in a framework forming 5 clusters: drug allergy, drug-induced diseases, dosage adjustment in renal diseases, potassium concerns, and drug overdose.
The first cluster, drug allergy, includes, for example, a doctor's order for dexamethasone injection combined with chlorpheniramine injection. A diagnosis of drug-induced hepatitis in a patient taking anti-tuberculosis drugs is one trigger in the drug-induced diseases cluster. For the third cluster, a doctor's order for enalapril combined with ibuprofen in a patient with chronic kidney disease is an example of a trigger. A doctor's order for digoxin in a patient with hypokalemia is a trigger in the potassium cluster. Finally, a doctor's order for naloxone in a case of narcotic overdose was classified as a trigger in the drug overdose cluster. This study generated triggers that are similar to some in the IHI Global Trigger Tool, especially in the medication module, such as drug allergy and drug overdose. However, there are some specific aspects of this tool, including drug-induced diseases, dosage adjustment in renal diseases, and potassium concerns, which are not contained in other trigger tools. The pharmacy-based trigger tool is suitable for hospital pharmacists to detect potential adverse drug events using triggers as clues.
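The hierarchical-clustering half of the analysis (the multidimensional scaling step is omitted here) can be sketched as follows. The co-sorting matrix below is synthetic, standing in for how often participants grouped each pair of the 21 triggers together:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical co-sorting counts: entry (i, j) = number of the 20
# pharmacists who sorted trigger i and trigger j into the same pile
rng = np.random.default_rng(42)
raw = rng.integers(0, 21, size=(21, 21))
co_sort = (raw + raw.T) / 2.0                 # make the matrix symmetric

dissim = co_sort.max() - co_sort              # similarity -> dissimilarity
np.fill_diagonal(dissim, 0.0)                 # a trigger is 0 from itself

# agglomerative clustering on the condensed distance matrix, then cut
# the dendrogram into 5 clusters, matching the study's cluster count
Z = linkage(squareform(dissim), method='average')
clusters = fcluster(Z, t=5, criterion='maxclust')
```

Each entry of `clusters` assigns one of the 21 triggers to one of at most 5 clusters; with real co-sorting data, these would correspond to groupings like drug allergy or drug overdose.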

Keywords: adverse drug events, concept mapping, hospital, pharmacy-based trigger tool

Procedia PDF Downloads 160
1151 The Impacts of Foreign Culture on Yoruba Crime Films

Authors: Alonge Isaac Olusola

Abstract:

This paper focuses on the evolution and development of Yoruba theatre during the pre-colonial, colonial, and post-colonial years and on how Yoruba crime films have been influenced by foreign culture. It emphasizes the transition of theatre from the ground to the stage and from the stage to the screen, with emphasis on the contribution of the late Chief Hubert Ogunde, who is regarded as the doyen of Yoruba, and indeed Nigerian, theatre. Using the theory of post-colonialism, two modern Yoruba crime films are carefully selected from the numerous available ones to highlight and explain the various aspects of Yoruba films that have been greatly influenced by foreign cultural practices. The questions to be answered here include: 'Which attitudes or cultural practices are widely believed to be those of the Yoruba?', 'To what extent are they projected in the selected Yoruba crime films?', 'Which attitudes or cultural practices are widely believed to be foreign among the Yoruba people?', and 'To what extent are they projected in the selected Yoruba crime films?'. Although the British colonial masters granted political independence to Nigeria on October 1, 1960, a seed of multi-culture and counterculture had already been sown into the lives of the Yoruba people. In the literature review, there is an intensive illumination of scholars' ideas and views on what constitutes Yoruba culture, on the evolution and development of drama, theatre, and film in Yoruba society, and on the nature of criminals and criminality in Yoruba society and the Western world in pre-colonial and post-colonial times. Furthermore, the processes of interaction between man, his values, and his thoughts are also highlighted, a situation that produces criminal or benevolent acts.
Consequently, the paper dwells on how colonialism, despite its so-called merits, put the gradual process of urbanization and civilization among the originally rustic, cohesive, and moralistic Yoruba society on a supersonic speed that culminated in the acquisition of attitudes alien to Yoruba culture. Since drama is nothing but the theatrical replication of what occurs in real life, the paper then focuses on the submission that Yoruba crime films have experienced serious foreign influence in form and content as a result of this encounter. In conclusion, the findings on the impact of foreign cultural practices on Yoruba crime films are highlighted and expatiated upon with a view to recommending a few steps that could be taken to retain the projection of original Yoruba cultural practices in Yoruba films, especially those that have crime as a theme.

Keywords: culture, films, theatre, Yoruba

Procedia PDF Downloads 301
1150 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Team Paralympic

Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez

Abstract:

The training of cyclists with some type of disability finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow athletes to maximize their capacities. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study particularly focuses on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and sprocket of the track bicycle. The parametric scheme of the track bike represents a model with 6 degrees of freedom, due to the displacement in X-Y of each of the reference points, characterized by the angle of the curve profile β, the cant of the velodrome α, and the angle of rotation of the crank φ. The force exerted on the crank of the bicycle varies according to these angles. The behavior is analyzed with Matlab R2015a. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement had been determined, the dynamic modeling of the transtibial prosthesis was carried out; it represents a model with 6 degrees of freedom, with displacement in X-Y in relation to the angles of rotation of the hip π, knee γ, and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ, and ankle λ angles of the prosthesis.
The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and to the socket. The reaction force in each part of the prosthesis varies according to the hip π, knee γ, and ankle λ angles. It can therefore be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
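As a rough illustration of the static force bookkeeping described above, the sketch below splits the reported average crank force between the two cranks and resolves it into X-Y components at a given crank angle φ. Only the 1,607.1 N figure comes from the abstract; the even split between cranks and the simple trigonometric decomposition are assumptions made for illustration, not the authors' 6-DOF model.

```python
import math

# Values from the abstract; the decomposition below is an assumption.
F_TOTAL = 1607.1   # average total force a cyclist exerts on the cranks [N]

def per_crank_force(f_total=F_TOTAL):
    """Force each crank must carry, assuming an even split between cranks."""
    return f_total / 2.0

def force_components(f_crank, phi_deg):
    """Resolve a crank force into X-Y components at crank angle phi (degrees)."""
    phi = math.radians(phi_deg)
    return f_crank * math.cos(phi), f_crank * math.sin(phi)

f_crank = per_crank_force()              # ~803.6 N, matching the abstract
fx, fy = force_components(f_crank, 30.0) # components at a 30-degree crank angle
```

The same component form is what the summation of forces on the X and Y axes at the ankle, tibia, and socket would consume in a full static analysis.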

Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis

Procedia PDF Downloads 334
1149 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting abnormal signals. The data come from both genders, and the recording time varies from several seconds to several minutes. All records are labeled normal or abnormal. Because of the limited recording time, the positional inaccuracy of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of great interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, reflecting the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in the field of ECG signal processing, were applied to the distinctive features to classify the normal signals from the abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classification yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, recording time is limited and some of the information in the signal is hidden from the physician's view. The intelligent system proposed in this paper can therefore help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
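The HRV features referred to above can be illustrated with standard time-domain and return-map (Poincaré) measures. The sketch below uses textbook formulas (SDNN, RMSSD, and the Poincaré descriptors SD1/SD2 of the RR[n] vs. RR[n+1] return map) on a made-up R-R series; it is not the paper's exact feature set.

```python
import math

def sdnn(rr):
    """Standard deviation of R-R intervals (sample std, ms)."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive R-R differences (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def poincare_sd1_sd2(rr):
    """SD1/SD2 of the return map: spread across / along the identity line."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    sums = [a + b for a, b in zip(rr, rr[1:])]

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    return sd(diffs) / math.sqrt(2), sd(sums) / math.sqrt(2)

rr = [812.0, 790.0, 805.0, 830.0, 798.0, 815.0, 822.0, 801.0]  # example R-R series (ms)
features = (sdnn(rr), rmssd(rr), *poincare_sd1_sd2(rr))
```

A feature vector like `features` is the kind of input the MLP and SVM classifiers described above would receive.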

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 259
1148 Increase of Quinoa Tolerance to High Salinity Involves Agrophysiological Parameters Improvement by Soil Amendments

Authors: Bourhim Mohammad Redouane, Cheto Said, Qaddoury Ahmed, Hirich Abdelaziz, Ghoulam Cherki

Abstract:

Several abiotic stresses degrade the properties of agricultural soils and hence cause their loss worldwide. Among these abiotic stresses, salinity, to which most crops are exposed, causes an important reduction in productivity. One way to deal with this challenging problem is to cultivate alternative plants that can tolerate salinity stress, such as quinoa (Chenopodium quinoa). Although quinoa is classed as salt-tolerant, its performance can still be negatively affected at high salinity levels. Our study therefore aims to assess the effects of soil amendments on quinoa tolerance under high salinity. Three quinoa varieties (Puno, ICBA-Q5, and Titicaca) were grown on agricultural soil in a greenhouse with five amendments: biochar “Bc,” compost “Cp,” black soldier fly frass “If,” cow manure “Fb,” and phosphogypsum “Pg.” Two controls without amendment were adopted, consisting of the salinized negative control “T(-)” and the non-salinized positive control “T(+).” Twenty days after sowing, the plants were irrigated for 60 days with a saline solution of 16 dS/m prepared with NaCl. Plant tolerance was then assessed on the basis of agrophysiological parameters.
The results showed that salinity stress negatively affected the quinoa plants of all three varieties for all the analyzed agrophysiological parameters, compared to their corresponding positive controls “T(+).” However, most of these parameters were significantly enhanced by the application of soil amendments compared to the negative controls “T(-).” For instance, biomass was improved by 91.8% and 69.4% for the Puno and Titicaca varieties, respectively, when amended with “Bc.” The total nitrogen amount was increased by 220% for Titicaca and ICBA-Q5 plants cultivated in soil amended with “If.” One of the most important improvements was noted for the potassium content of Titicaca amended with “Pg,” which was six times higher than in the negative control. In addition, the Puno plants amended with “Cp” showed an improvement of 75.9% in stomatal conductance and 58.5% in nitrate reductase activity. A pronounced varietal difference was registered, with Puno and Titicaca presenting the highest performance, mainly in the soil amended with “If,” “Bc,” and “Pg.”

Keywords: chenopodium quinoa, salinity, soil amendments, growth, nutrients, nitrate reductase

Procedia PDF Downloads 69
1147 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma

Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova

Abstract:

Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to the environment: the resistance of biofilm populations to antibiotics or biocides is often two to three orders of magnitude higher than that of suspension populations. Of particular interest are substances or physical processes that primarily cause the destruction of the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents that do not have a strong lethal effect do not exert such a significant selection pressure toward further enhancement of resistance. Non-thermal plasma (NTP) is defined as a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals, and excited or non-excited molecules) which are in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in the biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125, and Candida albicans DBM 2164, grown on solid media on Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans), and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of the cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells.
Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescence probe to stain the cells in the biofilm. It was shown that the biofilm populations of all studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in the biofilm also differed among the individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, in which the metabolic activity of the biofilm decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. By contrast, the metabolic activity of C. albicans cells decreased only to 53% after 30 minutes of NTP exposure; nevertheless, this result can still be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved, in most cases, a remarkable synergistic reduction of the metabolic activity of the biofilm cells. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease in metabolic activity to below 4%.

Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 181
1146 Determination of Pesticides Residues in Tissue of Two Freshwater Fish Species by Modified QuEChERS Method

Authors: Iwona Cieślik, Władysław Migdał, Kinga Topolska, Ewa Cieślik

Abstract:

The consumption of fish is recommended as a means of preventing serious diseases, especially cardiovascular problems. Fish is known to be a valuable source of protein (rich in essential amino acids), unsaturated fatty acids, fat-soluble vitamins, and macro- and microelements. However, it can also contain several contaminants (e.g., pesticides, heavy metals) that may pose considerable risks for humans. Among these, pesticides are of special concern. Their widespread use has resulted in the contamination of environmental compartments, including water. The occurrence of pesticides in the environment is a serious problem due to their potential toxicity, and systematic monitoring is therefore needed. The aim of the study was to determine the organochlorine and organophosphate pesticide residues in the muscle tissue of the pike (Esox lucius, L.) and the rainbow trout (Oncorhynchus mykiss, Walbaum) by a modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, using gas chromatography quadrupole mass spectrometry (GC/Q-MS) operating in selected-ion monitoring (SIM) mode. The analysis of α-HCH, β-HCH, lindane, diazinon, disulfoton, δ-HCH, methyl parathion, heptachlor, malathion, aldrin, parathion, heptachlor epoxide, γ-chlordane, endosulfan, α-chlordane, o,p'-DDE, dieldrin, endrin, 4,4'-DDD, ethion, endrin aldehyde, endosulfan sulfate, 4,4'-DDT, and methoxychlor was performed on samples collected in the Carp Valley (Malopolska region, Poland). The age of the pike (n=6) was 3 years and its weight was 2-3 kg, while the age of the rainbow trout (n=6) was 0.5 year and its weight was 0.5-1.0 kg. Detectable pesticide residues (HCH isomers, endosulfan isomers, DDT and its metabolites, as well as methoxychlor) were present in the fish samples; however, all of these compounds were below the limit of quantification (LOQ). The other examined pesticide residues were below the limit of detection (LOD).
Therefore, the levels of contamination were - in all cases - below the default Maximum Residue Levels (MRLs) established by Regulation (EC) No 396/2005 of the European Parliament and of the Council. The monitoring of pesticide residue content in fish is required to minimize potential adverse effects on the environment and human exposure to these contaminants.

Keywords: contaminants, fish, pesticides residues, QuEChERS method

Procedia PDF Downloads 214
1145 Closing the Loop between Building Sustainability and Stakeholder Engagement: Case Study of an Australian University

Authors: Karishma Kashyap, Subha D. Parida

Abstract:

Rapid population growth and urbanization are creating pressure throughout the world. This has a dramatic effect on key services, including water, food, transportation, energy, and infrastructure. The built environment sector is growing concurrently to meet the needs of urbanization, and such large-scale development of buildings creates a need for them to be monitored and managed efficiently. Along with appropriate management, climate adaptation is highly crucial, because buildings are one of the major sources of greenhouse gas emissions in their operation phase. To be adaptive, buildings need to follow a triple bottom line approach to sustainability, i.e., being socially, environmentally, and economically sustainable. Hence, in order to deliver these sustainability outcomes, there is a growing understanding of, and drive towards, switching to green buildings or renovating existing ones to green standards wherever possible. Academic institutions in particular have been following this trend globally. This is highly significant, as universities usually have high occupancy rates and manage large building portfolios. Moreover, as universities educate the future generation of architects, policy makers, etc., they have the potential to set themselves up as best-practice models of research and innovation for the rest of the industry to follow. Their climate adaptation, sustainable growth, and performance management therefore become highly crucial in order to provide the best services to users. With the objective of evaluating appropriate management mechanisms within academic institutions, a feasibility study was carried out in a recently completed 5-Star Green Star rated university building (housing the School of Construction) in Victoria (the south-eastern state of Australia). The key aim was to understand the behavioral and social aspects of the building users and management, and the impact of their relationship on overall building sustainability.
A survey was used to understand the building occupants' responses and reactions in terms of their work environment and its management. A report was generated based on the survey results, complemented with utility and performance data, which was then used to evaluate the management structure of the university. Following the report, interviews were scheduled with the facility and asset managers in order to understand the approach they use to manage the different buildings on their university campuses (old, new, and refurbished) and the parameters incorporated in maintaining Green Star performance. The results aim at closing the communication and feedback loop within the respective institutions and assisting the facility managers in delivering appropriate stakeholder engagement. For the wider design community, the analysis highlights the applicability and significance of prioritizing key stakeholders, integrating desired engagement policies within an institution's management structures and frameworks, and their effect on building performance.

Keywords: building optimization, green building, post occupancy evaluation, stakeholder engagement

Procedia PDF Downloads 352
1144 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT

Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi

Abstract:

Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to see whether there is a dosimetric difference between the two energies and how we can increase plan efficiency and reduce plan complexity. A further goal is to introduce a planning method in our department for treating prostate cancer with high-energy photons without increasing patient toxicity, while fulfilling all dosimetric constraints for the organs at risk (OAR). For each treatment plan we evaluate the 95% target coverage (PTV95), V5%, V2%, V1%, the low-dose volumes for the OAR (V1Gy, V2Gy, V5Gy), the monitor units (beam-on time), and the values of the homogeneity index (HI), conformity index (CI), and gradient index (GI). Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer, using the CT planning images acquired for radiotherapy purposes. Each plan contains two or three complete arcs with two or three different collimator angle sets. The maximum dose rate available is 1400 MU/min for the 6FFF energy and 2400 MU/min for 10FFF; therefore, to avoid changing the gantry speed during rotation, we tended to use a third arc in the 6FFF plans to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) adds a margin of 5 mm, with a 3-mm margin favored posteriorly. The organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is to deliver 35 Gy in five fractions to the PTV while applying OAR constraints derived from those reported in the references.
Results: The indices CI = 0.99, HI = 0.7, and GI = 4.1 were observed to be the same for both energies, 6FFF and 10FFF, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 MU for 6FFF vs. 2468 MU for 10FFF), and the total delivery time was 124 s for 6FFF vs. 61 s for the 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected, and they were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID, in Gy·L) were recorded for all OAR and were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam has a lower treatment time and lower delivered MUs; 10FFF also showed lower integral and mean doses to the organs at risk. Based on this study, we suggest using a 10FFF beam for SBRT prostate treatment, which has the advantage of lowering the treatment time and thus reducing plan complexity with respect to 6FFF beams.
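For readers unfamiliar with the indices quoted above, a minimal sketch follows. Several competing definitions of CI, HI, and GI exist in the literature; the RTOG-style conformity index, ICRU 83-style homogeneity index, and Paddick-style gradient index used below are assumptions and may not match the exact formulas used in this study, and the DVH numbers are hypothetical.

```python
def conformity_index(v_ri, tv):
    """RTOG-style CI: prescription isodose volume / target volume."""
    return v_ri / tv

def homogeneity_index(d2, d98, d50):
    """ICRU 83-style HI: (D2% - D98%) / D50%; 0 means perfectly homogeneous."""
    return (d2 - d98) / d50

def gradient_index(v_half_ri, v_ri):
    """Paddick-style GI: volume of half the prescription isodose / V_RI."""
    return v_half_ri / v_ri

# Hypothetical DVH numbers (volumes in cm^3, doses in Gy), not the study's data:
ci = conformity_index(v_ri=99.0, tv=100.0)        # -> 0.99
hi = homogeneity_index(d2=36.0, d98=33.0, d50=35.0)
gi = gradient_index(v_half_ri=405.9, v_ri=99.0)   # -> ~4.1
```

The indices are dimensionless ratios, which is why they can be equal for 6FFF and 10FFF plans even when the monitor units and delivery times differ.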

Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer

Procedia PDF Downloads 81
1143 Bioproduction of L(+)-Lactic Acid and Purification by Ion Exchange Mechanism

Authors: Zelal Polat, Şebnem Harsa, Semra Ülkü

Abstract:

Lactic acid exists in nature in two optical forms, L(+)- and D(-)-lactic acid, and has been used in the food, leather, textile, pharmaceutical, and cosmetic industries. Moreover, L(+)-lactic acid constitutes the raw material for the production of poly-L-lactic acid, which is used in biomedical applications. The aim was to recover microbially produced lactic acid from the fermentation medium efficiently and economically. Among the various downstream operations, ion exchange chromatography is highly selective and yields low-cost product recovery within a short period of time. In this project, Lactobacillus casei NRRL B-441 was used for the production of L(+)-lactic acid from whey by fermentation at pH 5.5 and 37°C, which took 12 hours. The product concentration was 50 g/l with 100% L(+)-lactic acid content. Next, a suitable resin was selected for its high sorption capacity and rapid equilibrium behavior: Dowex Marathon WBA, a weakly basic anion exchanger in OH form, reached equilibrium in 15 minutes. The batch adsorption experiments were performed at approximately pH 7.0 and 30°C, and sampling was continued for 20 hours. Furthermore, the effects of temperature and pH were investigated, and their influence was found to be insignificant. All the adsorption/desorption experiments were applied both to model lactic acid and to biomass-free fermentation broth. The ion exchange equilibria of model lactic acid and of L(+)-lactic acid in fermentation broth on Dowex Marathon WBA were described by the Langmuir isotherm. The maximum exchange capacity (qm) was 0.25 g lactic acid/g wet resin for model lactic acid and 0.04 g lactic acid/g wet resin for fermentation broth. The equilibrium loading and exchange efficiency of L(+)-lactic acid in fermentation broth were reduced as a result of competition by other ionic species: the competing ions inhibit the binding of L(+)-lactic acid to the free sites of the ion exchanger. Moreover, column operations were applied to recover the adsorbed lactic acid from the ion exchanger.
2.0 M HCl was a suitable eluting agent to recover the bound L(+)-lactic acid, at a flow rate of 1 ml/min at ambient temperature; about 95% of the bound L(+)-lactic acid was recovered from Dowex Marathon WBA, and equilibrium was reached within 15 minutes. The aim of this project was to investigate the purification of L(+)-lactic acid from fermentation broth by the ion exchange method. Additional goals were to investigate the end-product purity, to obtain new data on the adsorption/desorption behaviour of lactic acid, and to assess the applicability of the system in industrial use.
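The Langmuir description above can be sketched numerically. The snippet below implements the isotherm q = qm·K·C/(1 + K·C) and recovers (qm, K) from synthetic, noise-free data via the standard linearization C/q = C/qm + 1/(K·qm). The data are illustrative: only qm = 0.25 g/g echoes the reported value, and K = 0.8 l/g is an assumed constant.

```python
def langmuir(c, qm, k):
    """Equilibrium loading q [g/g] at liquid-phase concentration c [g/l]."""
    return qm * k * c / (1.0 + k * c)

def fit_langmuir(cs, qs):
    """Recover (qm, K) by least squares on the linearized form y = C/q vs x = C."""
    ys = [c / q for c, q in zip(cs, qs)]
    n = len(cs)
    mx, my = sum(cs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cs, ys))
             / sum((x - mx) ** 2 for x in cs))
    intercept = my - slope * mx
    qm = 1.0 / slope          # slope of the linearized form is 1/qm
    k = slope / intercept     # intercept is 1/(K*qm)
    return qm, k

# Synthetic equilibrium data generated from qm = 0.25 g/g, K = 0.8 l/g (assumed):
cs = [1.0, 2.0, 5.0, 10.0, 20.0]
qs = [langmuir(c, 0.25, 0.8) for c in cs]
qm_est, k_est = fit_langmuir(cs, qs)   # recovers ~0.25 and ~0.8
```

With real batch data, the same fit would show the lower plateau (qm = 0.04 g/g) observed for the fermentation broth, reflecting competition by other ions for the exchange sites.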

Keywords: fermentation, ion exchange, lactic acid, purification, whey

Procedia PDF Downloads 500
1142 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed, low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. Accordingly, the mechanical resistance, morphology, and structure of the weld interface in MPW of the dissimilar Al/Cu pair were investigated, and the effects of the process parameters, namely gap, standoff distance, and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided when welding the dissimilar Al/Cu pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets, and the thickness and composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes occur during the process. It was also found that lower pulse energies are preferred. The relationship between increased energy and melting is possibly related to multiple sources of heating: higher pulse energies are associated with higher induced currents in the part, meaning that more Joule heating is generated.
In addition, more energy means a higher flyer velocity; the air in the gap between the parts to be welded is expelled, and the resulting aerodynamic drag (fluid friction), proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation contributes to the temperature rise as well. To reduce intermetallic phases, porosity, and melt pockets, the pulse energy should therefore be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness or scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro- and microcracks) is clearly affected by the surface topography. Welding could not be established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 206