Search results for: targeted maximum likelihood estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7212


1902 ISIS and Social Media

Authors: Neda Jebellie

Abstract:

New information and communication technologies (ICT) have not only revolutionized the world of communication but have also strongly impacted the state of international terrorism. Using the potential of social media, the new wave of terrorism can easily recruit new jihadi members, spread its violent ideology, and garner financial support. IS (Islamic State), as the most dangerous terrorist group, has already conquered a great deal of social media space and has deployed sophisticated web-based strategies to promote its extremist doctrine. In this respect, the vastly popular social media are the perfect tools for IS to establish its virtual Caliphate (e-caliphate) and e-Ommah (e-community). Using social media to release violent videos of beheading journalists, burning hostages alive, and the mass killing of prisoners are IS strategies to terrorize and subjugate its enemies. Several Twitter and Facebook accounts affiliated with IS have targeted the young generation of Muslims all around the world. In fact, IS terrorists use modern means of communication not only to share information and conduct operations but also to justify their violent acts. The strict Wahhabi doctrine of ISIS is based on a fundamentalist interpretation of Islam in which religious war against non-Muslims (Jihad) and the killing of infidels (Qatal) are praised and recommended. Via social media, IS disseminates its propaganda to inspire sympathizers across the globe. Combating this new wave of terrorism, which exploits new communication technologies, is the most significant challenge for authorities. Before the rise of the internet and social media, governments had to control only mosques and religious gatherings such as Friday sermons (Jamaah prayers) to prevent the spread of extremism among the Muslim communities in their countries.
ICT and new communication technologies have heightened the challenge of dealing with Islamic radicalism and have amplified its threat. According to official reports, some governments, such as the UK, have even created a special force of "Facebook warriors" to engage in unconventional warfare in the digital age. Compared with other terrorist groups, IS has grasped the potential of social media most effectively. Its horrifying videos released on YouTube easily went viral and were retweeted and shared by thousands of social media users. While some social media platforms such as Twitter and Facebook have shut down many accounts alleged to belong to IS, new ones are created immediately, so blocking websites and suspending accounts alone cannot solve the problem. To combat cyber terrorism, focusing on disseminating counter-narrative strategies can be a solution. Creating websites and providing online materials that propagate a peaceful and moderate interpretation of Islam can offer a cogent alternative to extremist views.

Keywords: IS-islamic state, cyber terrorism, social media, terrorism, information, communication technologies

Procedia PDF Downloads 476
1901 Evaluation of the Notifiable Diseases Surveillance System, South, Haiti, 2022

Authors: Djeamsly Salomon

Abstract:

Background: Epidemiological surveillance is a dynamic national system used to observe all aspects of the evolution of priority health problems through the collection, analysis, and systematic interpretation of information, and the dissemination of results with the necessary recommendations. The study was conducted to assess the notifiable disease surveillance system in the Sud Department. Methods: A study was conducted from March to May 2021 with key actors involved in surveillance at the level of health institutions in the department. The CDC's 2021 updated guideline was used to evaluate the system. We collected information about the operation, attributes, and usefulness of the surveillance system using interviewer-administered questionnaires. Epi-Info 7.2 and Excel 2016 were used to generate means, frequencies, and proportions. Results: Of 30 participants, 23 (77%) were women. The average age was 39 years [30-56]. Twenty-five (83%) had training in epidemiological surveillance. Half (50%) of the forms checked were signed by the supervisor. Collection tools were available in 80% of sites. Knowledge of at least 7 notifiable diseases was universal (100%). Among the respondents, 29 declared that the collection tools were simple, and 27 had already filled in a notification form. The maximum time taken to fill out a form was 10 minutes. Feedback between the different levels occurred in 60% of cases. Conclusion: The surveillance system is useful, simple, acceptable, representative, flexible, stable, and responsive. The data generated were of high quality. However, the system is threatened by the lack of supervision of sentinel sites, the lack of investigation, and weak feedback. This evaluation demonstrated the urgent need to improve supervision of the sites, to strengthen feedback of information, and to reinforce epidemiological surveillance.

Keywords: evaluation, notifiable diseases, surveillance, system

Procedia PDF Downloads 65
1900 Strategies for Public Space Utilization

Authors: Ben Levenger

Abstract:

Social life revolves around a central meeting place or gathering space. It is where individuals integrate, learn social skills, and ultimately become part of the community. Following this premise, public spaces are among the most important spaces that downtowns offer, providing locations for people to be seen, heard, and, most importantly, seamlessly integrated into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper analyzes in depth the design and implementation of public space for small plazas and gathering spaces. These spaces often require a detailed level of study, followed by broad-stroke design implementation that allows for adaptability. The paper highlights how to assess needs, define the types of spaces needed, outline a program for those spaces, detail design elements that meet the needs, assess the new space, and plan for change. This study provides participants with the necessary framework for conducting a grassroots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism for the sake of urbanism is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to further solidify the purpose of these efforts and justify the infrastructure or construction costs. To ground this work in quantitative impacts, we examine case studies highlighting economic impacts.
These case studies highlight the financial impact on an area, measuring the following metrics: rental rates (per square meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased housing density. The economic impact results are classified by community size in three tiers: under 10,000 in population, 10,001 to 75,000 in population, and over 75,000 in population. Through this classification breakdown, participants can gauge the impact in communities similar to those in which they work or for which they are responsible. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, is discussed. Metrics documenting the economic impact of each enhancement are presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive; they are welcome to download the data and use it for their own purposes.

Keywords: downtown, economic development, planning, strategic

Procedia PDF Downloads 61
1899 Performance Enhancement of Hybrid Racing Car by Design Optimization

Authors: Tarang Varmora, Krupa Shah, Karan Patel

Abstract:

Environmental pollution and the shortage of conventional fuel are the main concerns in the transportation sector. Most vehicles use an internal combustion engine (ICE) powered by gasoline fuels, which results in the emission of toxic gases. A hybrid electric vehicle (HEV), powered by an electric machine and an ICE, is capable of reducing the emission of toxic gases and fuel consumption. However, building an HEV requires accommodating a motor and batteries in the vehicle along with the engine and fuel tank, so the overall weight of the vehicle increases. To improve fuel economy and acceleration, the weight of the HEV can be minimized. In this paper, a design methodology to reduce the weight of a hybrid racing car is proposed. To this end, the chassis design is optimized, and an attempt is made to obtain maximum strength with minimum material weight. The best configuration among the three main configurations, series, parallel, and dual-mode (series-parallel), is chosen. Moreover, the most suitable types of motor, battery, braking system, steering system, and suspension system are identified. The racing car is designed and analyzed in simulation software. The safety of the vehicle is assured by performing static and dynamic analyses on the chassis frame. From the results, it is observed that the weight of the racing car is reduced by 11% without compromising safety or cost. It is believed that the proposed design and specifications can be implemented practically for manufacturing a hybrid racing car.

Keywords: design optimization, hybrid racing car, simulation, vehicle, weight reduction

Procedia PDF Downloads 282
1898 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including the limited availability of computers, this perspective does not consider the important role that paper, such as the nurses' handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward, and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. The data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While two distinct motivations for paper use post-EHR adoption were confirmed by the data, paper as a cognitive tool and paper as a compensation tool, a further finding was that the two uses overlap. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid because of its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum, with the compensation tool and the cognitive tool at either pole. Paper as a cognitive tool referred to pages such as the nurses' handover sheet. These did not form part of the patient's record, although information could be transcribed from one to the other.
Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information, and so continued to be incorporated into the nurses' work. By comparison, paper used as a compensation tool, such as pre-printed care plans stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, because paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive tool, a compensation tool, or a combination of both. By fully understanding this utility and its nuances, organisations can avoid evaluating all instances of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 420
1897 Investigating Learners’ Online Learning Experiences in a Blended-Learning School Environment

Authors: Abraham Ampong

Abstract:

BACKGROUND AND SIGNIFICANCE OF THE STUDY: The development of information technology and its influence on education today are inevitable. The development of information and communication technology (ICT) has had an impact on the use of teaching aids such as computers and the Internet, for example, e-learning. E-learning is a learning process attained through electronic means. But learning is not merely technology, because learning is essentially about the process of interaction between teacher, student, and study material. The main purposes of the study are to investigate learners' online learning experiences in a blended learning approach, evaluate how learners' experience of an online learning environment affects the blended learning approach, and examine the future of online learning in a blended learning environment. Blended learning pedagogies have been recognized as a path to improving teachers' instructional strategies for teaching with technology. Blended learning is perceived to have many advantages for teachers and students, including any-time learning, anywhere access, self-paced learning, inquiry-led learning, and collaborative learning; this helps institutions develop desired skills such as critical thinking in the process of learning. Blended learning as an approach to learning has gained momentum because of its widespread integration into educational organizations. METHODOLOGY: Based on the research objectives and questions of the study, the study will use a qualitative research approach. The rationale behind this choice is that the methods focus on how people understand and interpret their experiences, so participants are able to make sense of their situations and their construction of knowledge and understanding can be appreciated. A case study research design is adopted to explore the situation under investigation.
The target population for the study will consist of selected students from selected universities. A simple random sampling technique will be used to select participants from the target population. The data collection instrument adopted for this study will be a set of questions serving as an interview guide, that is, the questions an interviewer asks when interviewing respondents. Responses from the in-depth interviews will be transcribed and analyzed under themes. Ethical issues catered for in this study include the right to privacy, voluntary participation, no harm to participants, and confidentiality. INDICATORS OF THE MAJOR FINDINGS: The study expects to find whether online learning encourages timely feedback from teachers and whether online learning tools can be used without issues. Most communication with the teacher can be done through emails and text messages. Sampled respondents may also prefer online learning because there are few or no distractions, learners have access to technology to do other activities that support their learning, and enough enhanced learning materials are available online. CONCLUSION: Unlike previous research focusing on the strengths and weaknesses of blended learning, the present study examines the respective roles of its two modalities, as well as their interdependencies.

Keywords: online learning, blended learning, technologies, teaching methods

Procedia PDF Downloads 76
1896 Hierarchical Optimization of Composite Deployable Bridge Treadway Using Particle Swarm Optimization

Authors: Ashraf Osman

Abstract:

Effective deployable bridges characterized by an increased capacity-to-weight ratio are needed for post-disaster rapid mobility and military operations. In deployable bridging, replacing metals as the fabrication material with advanced composite laminates, which are lighter alternatives with higher strength, is highly advantageous. This article presents a hierarchical optimization strategy for a composite bridge treadway that considers maximum-strength design and bridge weight minimization. Shape optimization of a generic deployable bridge beam cross-section is performed to achieve a better stress distribution over the bridge treadway hull. The weight of the developed cross-section is minimized while preserving the margins of safety required by deployable bridging code provisions. Then, the strength of the composite bridge plates is maximized by varying the ply orientations. Different loading cases of a tracked vehicle patch load are considered. The orthotropic plate properties of a composite sandwich core are used to simulate the structural behavior of the bridge deck, and the failure analysis is conducted using the Tsai-Wu failure criterion. The nature-inspired particle swarm optimization technique is used in this study. The proposed technique efficiently reduced the weight-to-capacity ratio of the developed bridge beam.
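The weight-minimization-under-a-failure-constraint idea can be sketched with a minimal particle swarm optimizer. This is an illustration only, not the authors' code: the objective `weight_with_penalty`, its variables (thickness `t`, ply angle `theta`), and the surrogate failure index are made-up stand-ins for the real laminate model and Tsai-Wu check.

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Random initial positions within the box bounds; zero initial velocities.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Classic velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def weight_with_penalty(x):
    # Hypothetical objective: weight grows with thickness t; a large penalty is
    # added when a surrogate failure index exceeds 1 (stand-in for Tsai-Wu).
    t, theta = x  # thickness [mm], ply angle [deg] (made-up design variables)
    failure_index = 2.0 / (t * (1.0 + 0.5 * abs(theta - 45.0) / 45.0))
    return t + 1e3 * max(0.0, failure_index - 1.0)

best, best_val = pso(weight_with_penalty, bounds=[(0.5, 10.0), (0.0, 90.0)])
```

The penalty formulation lets an unconstrained optimizer like PSO respect a strength constraint: infeasible designs are admissible during the search but never survive as the global best.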

Keywords: CFRP deployable bridges, disaster relief, military bridging, optimization of composites, particle swarm optimization

Procedia PDF Downloads 121
1895 Finite Element Analysis and Design Optimization of Stent and Balloon System

Authors: V. Hashim, P. N. Dileep

Abstract:

Stent implantation is seen as the most successful method of treating coronary artery disease. Different types of stents are available in the market these days, and the success of a stent implantation greatly depends on the proper selection of a suitable stent for a patient. Computer numerical simulation is a cost-effective way to choose a compatible stent. Studies confirm that the design characteristics of a stent are of great importance with regard to the pressure it can sustain, the maximum displacement it can produce, the stress concentrations developed, and so on. In this paper, different stent designs were analyzed together with the balloon to optimize the stent and balloon system. The commercially available Palmaz-Schatz stent was selected for analysis, and Abaqus software was used to simulate the system. This work is a finite element analysis of the arterial stent implant to find the design factors affecting stress and strain. The work consists of two phases. In the first phase, the stress distributions of three models were compared: a stent without a balloon, a stent with a balloon of equal length, and a stent with a balloon longer than the stent. In the second phase, three different design models of the Palmaz-Schatz stent were compared while keeping the balloon length constant. The results show that the design of the strut has a strong effect on the stress distribution; a design with chamfered slots gave better results. The length of the balloon also influences the stress concentration in the stent: increasing the balloon length reduces stress but increases the dog-boning effect.

Keywords: coronary stent, finite element analysis, restenosis, stress concentration

Procedia PDF Downloads 613
1894 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in a Prince George (PG) neighborhood in the warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in this region, measures the physical and chemical properties of particulate air pollutants (PM₂.₅) in the city neighborhood and quantifies the VOC content of the city's air samples. One outcome of the project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining the chemical composition of PM₂.₅, an elemental analysis methodology was developed to measure major trace elements, including but not limited to mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis.
VOC measurement of the air samples was performed using a gas chromatography-flame ionization detector (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. In this study, sorbent tubes (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) were used for VOC air sampling, followed by solvent extraction and solid-phase micro-extraction (SPME) techniques to prepare samples for measurement by the GC-MS/FID instrument. Air sampling for both PM₂.₅ and VOCs was conducted in the summer and winter seasons for comparison. Average PM₂.₅ concentrations differed greatly between wildfire periods and normal days: the average concentration during wildfire periods was 83.0 μg/m³, versus 23.7 μg/m³ for daily samples. Elevated concentrations of iron, nickel, and manganese were found in all samples, and mercury was found in some samples; at high doses, these elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs

Procedia PDF Downloads 131
1893 Following the Modulation of Transcriptional Activity of Genes by Chromatin Modifications during the Cell Cycle in Living Cells

Authors: Sharon Yunger, Liat Altman, Yuval Garini, Yaron Shav-Tal

Abstract:

Understanding the dynamics of transcription in living cells has improved since the development of quantitative fluorescence-based imaging techniques. We established a method for following transcription from a single-copy gene in living cells. A gene tagged in its 3' UTR with MS2 repeats, used for mRNA tagging, was integrated into a single genomic locus. The actively transcribing gene was detected and analyzed by fluorescence in situ hybridization (FISH) and live-cell imaging. Several cell clones were created that differed in the promoter regulating the gene; thus, comparative analysis could be performed without the risk of different position effects at each integration site. Cells in S/G2 phases could be detected exhibiting two adjacent transcription sites on sister chromatids. A sharp reduction in transcription levels was observed as cells progressed along the cell cycle. We hypothesized that a change in chromatin structure acts as a general mechanism during the cell cycle, leading to down-regulation of the activity of some genes. We addressed this question by treating the cells with chromatin-decondensing agents. Quantifying and imaging the treated cells suggest that chromatin structure plays a role both in regulating transcriptional levels along the cell cycle and in limiting an active gene from reaching its maximum transcription potential at any given time. These results contribute to understanding the role of chromatin as a regulator of gene expression.

Keywords: cell cycle, living cells, nucleus, transcription

Procedia PDF Downloads 286
1892 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake event, associated with a typical fault rupture, initiates at the source, propagates through a rock or soil medium, and finally daylights at a surface that might be a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of bedrock motions due to the influence of local site conditions. In the present study, field borehole log data from the Mangalwadi and Walkeswar sites in Mumbai city are considered. The data consist of the variation of SPT N-value with soil depth. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai city has been developed from existing correlations and is used for the site response analysis. A MATLAB program is developed for the ground response analysis, performing two-dimensional linear and equivalent-linear analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent-linear domains against the popular commercial program DEEPSOIL. Three actual earthquake motions are selected based on their frequency contents and durations and scaled to a PGA of 0.16 g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, and the response spectrum at the surface. The peak ground acceleration amplification factors are found to be about 2.374, 3.239, and 2.4245 for the Mangalwadi site, and 3.42, 3.39, and 3.83 for the Walkeswar site, using the 1979 Imperial Valley, 1989 Loma Prieta (Gilroy), and 1987 Whittier Narrows earthquakes, respectively.
In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the spectrum generated at the surface may be utilized for the design of any superstructure at these locations.
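The amplification factors above are ratios of the peak absolute surface acceleration to the input PGA (0.16 g for all three scaled motions). A minimal sketch of that computation, using a made-up surface acceleration record rather than the study's data:

```python
def amplification_factor(surface_accel, input_pga):
    """Ratio of peak absolute surface acceleration to the input (bedrock) PGA."""
    return max(abs(a) for a in surface_accel) / input_pga

input_pga = 0.16  # g, the scaling applied to all three input motions in the study
# Hypothetical surface acceleration samples (g) from one site response run:
surface_accel = [0.05, -0.21, 0.38, -0.30, 0.12]
af = amplification_factor(surface_accel, input_pga)  # 0.38 / 0.16 = 2.375
```

In practice `surface_accel` would be the full computed surface time history, and the same ratio is evaluated for each input motion and each site.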

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 169
1891 Analysis of the Effects of Vibrations on Tractor Drivers by Measurements With Wearable Sensors

Authors: Gubiani Rino, Nicola Zucchiatti, Da Broi Ugo, Bietresato Marco

Abstract:

The problem of vibrations in agriculture is very important because of the different types of machinery used on the different types of soil on which work is carried out. One of the most commonly used machines is the tractor, where the phenomenon has long been studied by measuring whole-body vibration with a sensor placed on the seat. However, this measurement system does not take into account the characteristics of the drivers, such as their body mass index (BMI), their gender, or the muscle fatigue they are subjected to, which depends strongly on their age, for example. The aim of the research was therefore to place sensors not only on the seat but also along the spinal column, to check the transmission of vibration to drivers of different BMI and gender, on different tractors and at different travel speeds. The tests also used wearable sensors, such as a dynamometer applied to the muscles, whose data were correlated with the vibrations produced by the tractor. Initial data show that even on new tractors with pneumatic seats, the vibrations are attenuated little and remain correlated with the roughness of the track travelled and the forward speed. Other important data are the root-mean-square values referred to 8 hours (A(8)x,y,z) and the maximum transient vibration values (MTVVx,y,z); among the latter, the MTVVz values were problematic (the limiting factor in most cases) and always aggravated by speed. The MTVVx values can be lowered by a tyre-pressure adjustment system able to properly adjust the tyre pressure according to the specific situation (ground, speed) in which a tractor is operating.
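The A(8) values mentioned above normalize a measured frequency-weighted rms acceleration to an 8-hour reference day, as defined in ISO 2631-1 and EU Directive 2002/44/EC for whole-body vibration. A sketch, assuming the standard seated-posture axis multipliers (1.4 for x and y, 1.0 for z); the rms inputs below are made-up values, not the study's measurements:

```python
import math

def daily_exposure(a_wx, a_wy, a_wz, hours):
    """A(8) per axis: k * a_w * sqrt(T / 8 h), with k_x = k_y = 1.4, k_z = 1.0.

    Inputs are frequency-weighted rms accelerations (m/s^2) and the actual
    daily exposure duration in hours; returns per-axis A(8) and the maximum.
    """
    scale = math.sqrt(hours / 8.0)
    a8 = {
        "x": 1.4 * a_wx * scale,
        "y": 1.4 * a_wy * scale,
        "z": 1.0 * a_wz * scale,
    }
    return a8, max(a8.values())  # the dominant axis sets the daily exposure

# Hypothetical tractor measurement: 5 h of driving with these weighted rms values.
a8, a8_max = daily_exposure(a_wx=0.40, a_wy=0.30, a_wz=0.60, hours=5.0)
# a8_max ≈ 0.47 m/s^2 (z axis dominates), just under the EU action value of 0.5
```

MTVV, by contrast, is the maximum of the running 1-second rms and captures shocks that the 8-hour average smooths out, which is why the abstract reports both quantities.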

Keywords: fatigue, effect vibration on health, tractor driver vibrations, vibration, muscle skeleton disorders

Procedia PDF Downloads 53
1890 Laboratory Scale Experimental Studies on CO₂ Based Underground Coal Gasification in Context of Clean Coal Technology

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Coal is the most abundant fossil fuel. In India, around 37% of coal resources are found at a depth of more than 300 meters, and more than 70% of electricity production depends on coal. Coal on combustion produces greenhouse and pollutant gases such as CO₂, SOₓ, NOₓ, and H₂S. Underground coal gasification (UCG) is an efficient and economic in-situ clean coal technology, which converts these unmineable coals into valuable calorific gases. The UCG syngas (mainly H₂, CO, CH₄, and some lighter hydrocarbons) can be utilized for the production of electricity and the manufacture of various useful chemical feedstocks. It is an inherently clean coal technology, as it avoids ash disposal, mining, transportation, and storage problems. Gasification of underground coal using steam as the gasifying medium is not an easy process, because sending superheated steam to deep underground coal leads to major transportation difficulties and is not cost-effective. Therefore, to reduce this problem, we have used CO₂, a major greenhouse gas, as the gasifying medium. This paper presents a laboratory-scale underground coal gasification experiment on a coal block using CO₂ as the gasifying medium. In the present experiment, we first injected oxygen for combustion for 1 hour; when the temperature of the zones reached more than 1000 ºC, we started the supply of CO₂ as the gasifying medium. The gasification experiment was performed at atmospheric pressure of CO₂, and it was found that the amount of CO produced by the Boudouard reaction (C + CO₂ → 2CO) is around 35%. The experiment was conducted for almost 5 hours. The maximum gas composition observed was 35% CO, 22% H₂, and 11% CH₄, with an LHV of 248.1 kJ/mol at a CO₂/O₂ ratio of 0.4 by volume.
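The reported LHV can be cross-checked as the mole-fraction-weighted sum of the component heating values. A sketch using standard approximate molar LHVs (H₂ ≈ 241.8, CO ≈ 283.0, CH₄ ≈ 802.3 kJ/mol; the inert balance of CO₂/N₂ is assumed to contribute nothing):

```python
# Approximate lower heating values of the combustible components, kJ/mol.
LHV = {"H2": 241.8, "CO": 283.0, "CH4": 802.3}

def mixture_lhv(mole_fractions):
    """Mole-fraction-weighted LHV of a gas mixture, kJ per mol of mixture.

    Components not in the LHV table (inerts such as CO2 or N2) contribute zero.
    """
    return sum(LHV.get(gas, 0.0) * x for gas, x in mole_fractions.items())

# Reported peak composition; the remaining 32% is assumed inert balance.
syngas = {"CO": 0.35, "H2": 0.22, "CH4": 0.11, "inert": 0.32}
lhv = mixture_lhv(syngas)  # ≈ 240.5 kJ/mol
```

The result, about 240.5 kJ/mol, is of the same order as the reported 248.1 kJ/mol; the small difference would come from the lighter hydrocarbons not included in this sketch.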

Keywords: underground coal gasification, clean coal technology, calorific value, syngas

Procedia PDF Downloads 213
1889 Increasing of Gain in Unstable Thin Disk Resonator

Authors: M. Asl. Dehghan, M. H. Daemi, S. Radmard, S. H. Nabavi

Abstract:

Thin disk lasers are engineered for efficient thermal cooling and exhibit superior performance in this respect. However, the disk thickness and large pumped area make it difficult to use this gain format in a resonator when constructing a single-mode laser. Choosing an unstable resonator design is beneficial for this purpose. On the other hand, the low gain of the medium restricts the application of unstable resonators to low magnifications and therefore to poor beam quality. A promising idea for enabling the application of unstable resonators to wide-aperture, low-gain lasers is to couple a fraction of the outcoupled radiation back into the resonator. The output coupling then depends on the back-reflection ratio and can be adjusted independently of the magnification. The excitation of the converging wave can be achieved with an external reflector. The resonator performance is numerically predicted. First, the threshold conditions of linear, V-shaped, and 2V-shaped resonators are investigated. Results show that the maximum magnification is 1.066, which is very low for high-beam-quality purposes. Inserting an additional reflector compensates for the low gain. The reflectivity and the related magnification of a 350-micron Yb:YAG disk are calculated. The theoretical model was based on coupled Kirchhoff integrals and solved numerically by the Fox and Li algorithm. Results show that with the back-reflection mechanism, combined with an increased number of beam incidences on the disk, high gain and high magnification can be achieved.
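Why low gain caps the magnification can be seen from the geometric threshold condition: in a 2-D unstable resonator of magnification M, only a fraction 1/M² of the mode survives each round trip, so lasing requires a round-trip gain G ≥ M², i.e. M_max = √G. The sketch below illustrates this relation; the gain value is an assumption chosen only to show how a small gain yields a magnification of the order reported (≈1.066), not a value from the paper:

```python
import math

def max_magnification(round_trip_gain):
    """Largest 2-D unstable-resonator magnification sustainable at threshold.

    Geometric feedback per round trip is 1/M^2, so threshold requires
    G >= M^2, giving M_max = sqrt(G).
    """
    return math.sqrt(round_trip_gain)

g_rt = 1.136  # assumed small round-trip gain of a low-gain thin-disk medium
m_max = max_magnification(g_rt)  # ≈ 1.066
```

Feeding a fraction of the output back into the resonator effectively raises the feedback beyond the geometric 1/M², which is what lets the authors push to higher magnifications without more gain.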

Keywords: unstable resonators, thin disk lasers, gain, external reflector

Procedia PDF Downloads 399
1888 Node Optimization in Wireless Sensor Network: An Energy Approach

Authors: Y. B. Kirankumar, J. D. Mallapur

Abstract:

Wireless Sensor Network (WSN) is an emerging technology with great potential for various low-cost applications, both for the mass public and for defence. Wireless sensor communication allows sensor nodes to participate in a network at random for a particular application, which can leave much of the simulation area uncovered, with only a few nodes located at far distances. The drawback of such a network is that additional energy is spent by nodes in densely populated regions, where many nodes cover a small communication distance while regions with few nodes are poorly served, and further energy is spent by a source node transmitting a packet through its neighbours until it reaches the destination. The proposed work develops an Energy Efficient Node Placement Algorithm (EENPA) that places sensor nodes efficiently in the simulated area, with all nodes equally spaced along radial paths so as to cover the maximum area at equal distances. The total energy consumed per node is lower than with random placement, since the burden is shared equally by fewer far-located nodes and the nodes are distributed over the whole simulation area. The computed network lifetime is likewise longer than with random placement. Simulations were carried out in the QualNet simulator, and the results of the EENP algorithm were compared with those of random node placement.
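The radial, equidistant placement idea can be sketched as follows; the concrete spacing rule is an assumption for illustration, since the abstract does not give EENPA's exact formula:

```python
import math

# Sketch of equidistant radial node placement in a circular field, in the
# spirit of the EENPA idea: nodes sit on concentric rings 'spacing' apart,
# with neighbouring nodes on each ring also roughly 'spacing' apart.
def radial_placement(radius, spacing):
    """Return (x, y) node positions covering a disc of the given radius."""
    nodes = [(0.0, 0.0)]                              # one node at the centre
    r = spacing
    while r <= radius:
        n = max(1, round(2 * math.pi * r / spacing))  # nodes on this ring
        for i in range(n):
            theta = 2 * math.pi * i / n
            nodes.append((r * math.cos(theta), r * math.sin(theta)))
        r += spacing
    return nodes

nodes = radial_placement(radius=100.0, spacing=25.0)
print(len(nodes))  # disc covered with equal inter-node distances, no dense clusters
```

Because every node is roughly one communication hop from its neighbours, no node carries a disproportionate relay burden, which is the energy argument made above.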

Keywords: energy, WSN, wireless sensor network, energy approach

Procedia PDF Downloads 298
1887 A Literature Review on Bladder Management in Individuals with Spinal Cord Injury

Authors: Elif Ates, Naile Bilgili

Abstract:

Background: One of the most important medical complications that individuals with spinal cord injury (SCI) face is neurogenic bladder. Objectives: To review the methods used for management of neurogenic bladder and their effects. Methods: The study was conducted by searching the CINAHL, Ebscohost, MEDLINE, Science Direct, Ovid, ProQuest, Web of Science, and ULAKBİM National Databases for studies published between 2005 and 2015. Keywords used during the search included ‘spinal cord injury’, ‘bladder injury’, ‘nursing care’, ‘catheterization’ and ‘intermittent urinary catheter’. After examination of 551 studies, 21 studies which met the inclusion criteria were included in the review. Results: The mean age of individuals across all study samples was 42 years. The most commonly used bladder management method was clean intermittent catheterization (CIC). Compliance with CIC was found to be significantly related to spasticity, maximum cystometric capacity, and the person performing catheterization (p < .05). The main reason for changing the existing bladder management method was urinary tract infections (UTI). Individuals who performed CIC by themselves and who voided spontaneously had better quality of life. Patient age, occupation status and whether they performed CIC by themselves were found to be significantly associated with depression level (p ≤ .05). Conclusion: As the most commonly used method for bladder management, CIC is a reliable and effective method and reduces the risk of UTI development. Individuals with neurogenic bladder have a higher prevalence of depression symptoms than the normal population.

Keywords: bladder management, catheterization, nursing, spinal cord injury

Procedia PDF Downloads 164
1886 Visualization of Interaction between Pochonia Chlamydosporia and Meloidogyne Incognita and Their Impact on Tomato Crop

Authors: Saifullah K., Muhammad Naziruddin Saifullah, Muhammad N.

Abstract:

The biocontrol potential and mechanism of P. chlamydosporia against Meloidogyne incognita were evaluated in the present study. Under in vitro conditions, P. chlamydosporia was tested for parasitism of eggs and females of M. incognita. The results indicated that the fungus parasitized 87% of eggs and 82% of females. Culture filtrate (CF) of P. chlamydosporia was tested for larvicidal activity against M. incognita second-stage juveniles. The maximum mortality was 97.3% at 100% concentration of the culture filtrate, while the minimum mortality was 7.3% at 25% concentration after 24 hrs. The pot experiment proved that P. chlamydosporia reduced the incidence of root-knot nematode (RKN) and improved all tested agronomic growth parameters. The treatment inoculated with M. incognita alone reduced plant height, fresh shoot weight, and fresh root weight by 44.7%, 29.8%, and 32.8% respectively compared with the uninoculated healthy control. Histopathological studies of the interaction of Pochonia chlamydosporia and Meloidogyne incognita on tomato roots revealed anatomical changes among treatments. Plants inoculated with both P. chlamydosporia and M. incognita showed fewer and smaller galls and scarcer abnormalities in the vascular cylinder than plants treated with the nematode alone. In plants inoculated only with P. chlamydosporia, the fungus was seen in the intercellular spaces of cortical and epidermal cells while the vascular bundles of the plant remained intact. In the infected roots, many mature females were seen feeding on giant cells. The findings also revealed that the healthy control plants were not affected and no histological changes were noted.

Keywords: histopathology, Pochonia chlamydosporia, Meloidogyne incognita, tomato

Procedia PDF Downloads 92
1885 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA specifically focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
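The minimum-selling-price calculation described above reduces to finding the price at which discounted revenues equal discounted costs (net present value of zero at the target rate of return). A back-of-the-envelope sketch; all costs, yields, and the discount rate are made-up illustrative numbers, not results from the field study:

```python
# Break-even ("minimum delivered selling price") for a short-rotation coppice
# system: the $/ton price at which NPV is exactly zero at the target rate.
def minimum_selling_price(costs, yields_t, rate):
    """costs[t]: cash cost in year t; yields_t[t]: harvested tons in year t."""
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_tons = sum(y / (1 + rate) ** t for t, y in enumerate(yields_t))
    return pv_costs / pv_tons  # price per ton that makes NPV zero

costs = [1200, 300, 300, 300, 300, 300, 300]   # establishment, then annual care ($/ha)
tons = [0, 0, 0, 20, 0, 0, 22]                 # two coppice harvests (t/ha)
price = minimum_selling_price(costs, tons, rate=0.06)
print(round(price, 2))
```

A sensitivity analysis like the one described would simply re-run this calculation while perturbing individual cost entries or the rate.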

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 68
1884 Effects of Rockdust as a Soil Stabilizing Agent on Poor Subgrade Soil

Authors: Muhammad Munawar

Abstract:

Pavement destruction is normally associated with horizontal relocation of the subgrade caused by the pavement absorbing water, and with excessive deflection and differential settlement of the material underneath the pavement. The aim of the research is to study the effect of an additive (rockdust) on the stability and bearing capacity of selected soils in Mardan City. The physical, chemical and engineering properties of the soil were studied, and the soil was treated with rockdust as an admixture with the goal of stabilizing the local soil. The soil was stabilized or modified by blending rockdust into it at rates of 5%, 10%, and 15% respectively. The following tests were carried out on the treated samples: Atterberg limits (liquid limit, plasticity index, plastic limit), the standard compaction test, the California bearing ratio test and the direct shear test. Particle size analysis demonstrated that the gradation of the soil is narrow. The plasticity index (P.I.), liquid limit (L.L.) and plastic limit (P.L.) showed a reduction with the addition of rockdust. The maximum dry density increased with the addition of rockdust up to 10%; beyond 10%, it showed a reduction. It was found that the cohesion C diminished, while the angle of internal friction and the California bearing ratio (C.B.R.) improved with the addition of rockdust. The investigation demonstrated that the best stabilizer for the case study (Toru Road, Mardan) is rockdust, and the optimum dosage is 10%.

Keywords: rockdust, stabilization, modification, CBR

Procedia PDF Downloads 264
1883 Long-Term Monitoring and Seasonal Analysis of PM10-Bound Benzo(a)pyrene in the Ambient Air of Northwestern Hungary

Authors: Zs. Csanádi, A. Szabó Nagy, J. Szabó, J. Erdős

Abstract:

Atmospheric aerosols have several important environmental impacts and health effects from an air quality point of view. Monitoring PM10-bound polycyclic aromatic hydrocarbons (PAHs) therefore has environmental significance as well as health protection aspects. Benzo(a)pyrene (BaP) is the most relevant indicator of these PAH compounds. In Hungary, the Hungarian Air Quality Network provides air quality monitoring data for several air pollutants including BaP, but these data show only annual mean concentrations and maximum values. The seasonal variation of BaP concentrations between heating and non-heating periods can, however, be substantial. For this reason, the main objective of this study was to assess the annual concentration and seasonal variation of BaP associated with PM10 in the ambient air of seven different sampling sites (six urban and one rural) in Northwestern Hungary over the sampling period of 2008-2013. A total of 1475 PM10 aerosol samples were collected at the different sampling sites and analyzed for BaP by a gas chromatography method. BaP concentrations ranged from undetected to 8 ng/m³, with mean values of 0.50-0.96 ng/m³ across the sampling sites. Relatively higher concentrations of BaP were detected in samples collected at each sampling site in the heating seasons compared with the non-heating periods. The annual mean BaP concentrations were comparable with published data from other Hungarian sites.

Keywords: air quality, benzo(a)pyrene, PAHs, polycyclic aromatic hydrocarbons

Procedia PDF Downloads 291
1882 Exploring the Social Health and Well-Being Factors of Hydraulic Fracturing

Authors: S. Grinnell

Abstract:

A PhD research project exploring the social health and well-being impacts associated with hydraulic fracturing, with the aim of producing best practice support guidance for those anticipating dealing with planning applications or submitting Environmental Impact Assessments (EIAs). Amid a possible global energy crisis, founded upon a number of factors including unstable political situations, increasing world population growth and people living longer, it is perhaps inevitable that hydraulic fracturing (commonly referred to as ‘fracking’) will become a major player within the global long-term energy and sustainability agenda. As there is currently no best practice guidance for governing bodies, the Best Practice Support Document will be targeted at a number of audiences, including consultants undertaking EIAs, planning officers, those in industry commissioning EIAs, and interested public stakeholders. It will offer robust, evidence-based criteria and recommendations that provide a clear narrative, a consistent and shared approach to the language used, and an understanding of the issues identified. It is proposed that the Best Practice Support Document will also support the mitigation of the health impacts identified. The document will support the newly amended Environmental Impact Assessment Directive (2011/92/EU), to be transposed into UK law by 2017, a significant amendment of which focuses on a ‘higher level of protection to the environment and health.’ Methodology: A qualitative research methods approach is being taken, with a number of key stages. A literature review has been undertaken and critically reviewed and analysed. This was followed by a descriptive content analysis of a selection of international and national policies, programmes and strategies, along with published Environmental Impact Assessments and associated planning guidance.
In terms of data collection, a number of stakeholders were interviewed, as well as a number of focus groups drawn from local community groups across the UK potentially affected by fracking. A thematic analysis of all the data collected and of the literature review will be undertaken using NVivo. The Best Practice Support Document will be developed based on the outcomes of the analysis and will be tested and piloted in the professional field before a live launch. Concluding statement: Whilst fracking is not a new concept, the technology is now driving a new force behind the use of this engineering to supply fuels. A number of countries have pledged moratoria on fracking until the impacts on health have been investigated further, whilst other countries, including Poland and the UK, are pushing to support its use. If this should be the case, it will be important that the public’s concerns, perceptions, fears and objections regarding the wider social health and well-being impacts are considered along with the more traditional biomedical health impacts.

Keywords: fracking, hydraulic fracturing, socio-economic health, well-being

Procedia PDF Downloads 227
1881 Biosorption of Manganese Mine Effluents Using Crude Chitin from Philippine Bivalves

Authors: Randy Molejona Jr., Elaine Nicole Saquin

Abstract:

The area around the Ajuy River in Iloilo, Philippines, is currently being mined for manganese ore, and river water samples exceed the maximum manganese contaminant level set by the US EPA. At the same time, the surplus of local bivalve shell waste is another environmental concern. Synthetic chemical treatment compromises water quality, leaving toxic residues. An alternative treatment process is therefore biosorption: using the physical and chemical properties of biomass to adsorb heavy metals in contaminated water. The study aims to extract crude chitin from shell wastes of Bractechlamys vexillum, Perna viridis, and Placuna placenta and determine its adsorption capacity for manganese in simulated and actual mine water. Crude chitin was obtained by pulverization, deproteinization, demineralization, and decolorization of the shells. Biosorption by flocculation followed a 5 g : 50 mL chitin-to-water ratio. Filtrates were analyzed using MP-AES after 24 hours. In actual and simulated mine water, respectively, B. vexillum yielded the highest adsorption percentages of 91.43% and 99.58%, comparable to P. placenta at 91.43% and 99.37%, while significantly different from P. viridis at -57.14% and 31.53% (p < 0.05). FT-IR validated the presence of chitin in the shells based on carbonyl-containing functional groups at peaks 1530-1560 cm⁻¹ and 1660-1680 cm⁻¹. SEM micrographs showed the amorphous and non-homogeneous structure of chitin. Thus, crude chitin from B. vexillum and P. placenta can serve as a biosorbent for the treatment of manganese-impacted effluents and promote appropriate waste management of local bivalves.
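The adsorption percentages above follow from the standard removal-efficiency formula; a minimal sketch (the concentrations below are illustrative, not measured values):

```python
# Percent adsorption (removal efficiency) as typically computed from initial
# and final metal concentrations measured by MP-AES. A final concentration
# above the initial one gives a negative value, as reported for P. viridis
# (interpretable as metal released rather than adsorbed).
def percent_removal(c0, cf):
    """Removal efficiency in %, from initial (c0) and final (cf) concentration."""
    return (c0 - cf) / c0 * 100.0

print(round(percent_removal(7.0, 0.6), 2))   # strong removal (illustrative values)
print(round(percent_removal(7.0, 11.0), 2))  # negative: concentration increased
```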

Keywords: biosorption, chitin, FT-IR, mine effluents, SEM

Procedia PDF Downloads 180
1880 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders, so discussing and defining quality expectations and measurements takes a long time. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly on the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for data preparation and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten useful indicators;
(2) for the data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
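The indicator → dimension → composite pipeline described above can be sketched as follows; the indicator names, dimension groupings, weights, and level cut-offs here are illustrative assumptions, since the study derives its groupings empirically via factor analysis:

```python
# Minimal sketch of aggregating indicator scores into dimensions and then
# into a single composite quality index with named levels.
def composite_score(indicators, dimension_map, weights):
    """indicators: {name: score in [0, 1]};
    dimension_map: {dimension: [indicator names]};
    weights: {dimension: weight}, summing to 1."""
    dims = {d: sum(indicators[i] for i in names) / len(names)
            for d, names in dimension_map.items()}
    return sum(weights[d] * dims[d] for d in dims)

def quality_level(score):
    # Cut-offs are placeholders, not the study's thresholds.
    if score >= 0.8:
        return "Good Quality"
    if score >= 0.5:
        return "Acceptable Quality"
    return "Poor Quality"

indicators = {"completeness": 0.95, "uniqueness": 0.9, "validity": 0.7,
              "consistency": 0.6, "timeliness": 0.8}
dimension_map = {"intrinsic": ["completeness", "uniqueness"],
                 "contextual": ["validity", "consistency", "timeliness"]}
weights = {"intrinsic": 0.4, "contextual": 0.6}
score = composite_score(indicators, dimension_map, weights)
print(quality_level(score))
```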

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 121
1879 Technology in the Calculation of People Health Level: Design of a Computational Tool

Authors: Sara Herrero Jaén, José María Santamaría García, María Lourdes Jiménez Rodríguez, Jorge Luis Gómez González, Adriana Cercas Duque, Alexandra González Aguna

Abstract:

Background: The concept of health has evolved throughout history. A person's health level is determined by their own individual perception; it is a dynamic process over time, so variations can be seen from one moment to the next. Knowing the health of the patients one cares for therefore facilitates decision making in the treatment of care. Objective: To design a technological tool that calculates people's health level sequentially over time. Material and Methods: Deductive methodology through text analysis, extraction and logical formalization of knowledge, and consultation with an expert group. Study period: September 2015 to the present. Results: A computational tool for use by health personnel has been designed. It has 11 variables, each of which can be given a value from 1 to 5, with 1 being the minimum value and 5 the maximum. By adding the results of the 11 variables we obtain a magnitude at a given time: the health level of the person. The health calculator makes it possible to represent a person's health level at a point in time, establishing temporal cuts that are useful for determining the evolution of the individual over time. Conclusion: Information and Communication Technologies (ICT) enable training and help in various disciplinary areas, and their relevance in the field of health should be highlighted. Based on the formalization of health, care acts can be directed towards some of the propositional elements of the concept above, and these care acts will modify the person's health level. The health calculator allows the prioritization and prediction of different health care strategies in hospital units.
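A minimal sketch of the calculator as described, with 11 variables each scored 1 to 5 and summed into a single magnitude (the variable names are not listed in the abstract, so scores are passed as an anonymous list):

```python
# Health level = sum of 11 variable scores, each from 1 (worst) to 5 (best),
# giving a range of 11 to 55. Repeated at different time points ("temporal
# cuts"), the value tracks the evolution of the individual over time.
N_VARIABLES = 11

def health_level(scores):
    """Return the summed health level for one temporal cut."""
    if len(scores) != N_VARIABLES:
        raise ValueError("exactly 11 variables expected")
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("each variable is scored from 1 to 5")
    return sum(scores)

# Two temporal cuts for the same person show the evolution over time
print(health_level([3] * 11))        # → 33
print(health_level([4] * 10 + [5]))  # → 45
```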

Keywords: calculator, care, eHealth, health

Procedia PDF Downloads 247
1878 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining the CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of the wheat crop. Field experiments were conducted with varying irrigation water applications during two seasons, in 2022 and 2023, at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of correlation (R²). Both models successfully estimated the CWSI of the wheat crop, with higher correlation coefficients and lower statistical errors. However, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76 and MBE=-2.13-1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13 and MBE=-2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining the CWSI in wheat crops.
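As a minimal sketch, the goodness-of-fit statistics listed above can be computed from an observed and a simulated CWSI series as follows (the formulas are the standard definitions; the arrays are synthetic illustrative data, not the study's measurements):

```python
import numpy as np

# Standard model-evaluation statistics for an observed series (e.g. EP-CWSI)
# versus a simulated one (e.g. ANFIS or SOM output).
def fit_statistics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.mean(np.abs(err))                       # mean absolute error
    mbe = np.mean(err)                               # mean bias error
    rmse = np.sqrt(np.mean(err ** 2))                # root mean squared error
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2            # coefficient of determination
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    d = 1 - np.sum(err ** 2) / np.sum(               # Willmott index of agreement
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return {"MAE": mae, "MBE": mbe, "RMSE": rmse, "R2": r2, "NSE": nse, "d": d}

obs = [0.12, 0.25, 0.40, 0.55, 0.62, 0.80]   # synthetic observed CWSI
sim = [0.10, 0.28, 0.38, 0.57, 0.65, 0.76]   # synthetic simulated CWSI
stats = fit_statistics(obs, sim)
print({k: round(v, 3) for k, v in stats.items()})
```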

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 38
1877 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity

Authors: Anamika Sahu

Abstract:

The Himalayas, like many mountainous regions, are susceptible to multiple hazards, and in recent times the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as the region lies within a seismic gap. Damage caused by earthquakes differs from one locality to another. It is well known that, during earthquakes, damage to structures is associated with the subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built-up areas and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigation of the subsurface is required to describe subsurface complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface and the methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity for the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity down to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC, 1997) classification. Parameters obtained through geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. The results obtained through the MASW method agreed well with the laboratory tests.
In both cases, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
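For reference, the Vs30 used in the NEHRP/UBC classifications is the time-averaged shear wave velocity over the uppermost 30 m, i.e. 30 m divided by the total vertical shear-wave travel time through that depth. A minimal sketch with illustrative layer values (not data from the study):

```python
# Vs30 = 30 m / sum(h_i / Vs_i): the harmonic, thickness-weighted average of
# layer shear wave velocities over the top 30 m, as used for site classification.
def vs30(thicknesses_m, velocities_m_s):
    """Time-averaged shear wave velocity over the top 30 m; layers must total 30 m."""
    if abs(sum(thicknesses_m) - 30.0) > 1e-9:
        raise ValueError("layer thicknesses must total 30 m")
    travel_time = sum(h / v for h, v in zip(thicknesses_m, velocities_m_s))
    return 30.0 / travel_time

# e.g. 5 m of soft soil over 10 m of stiff soil over 15 m of weathered rock
print(round(vs30([5, 10, 15], [180, 360, 760]), 1))
```

Note that the slow surface layers dominate the result, which is why Vs30 is a useful stiffness indicator for near-surface site response.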

Keywords: MASW, mechanical, petrophysical, site characterization

Procedia PDF Downloads 73
1876 Utilization of Brachystegia Spiciformis Leaf Powder in the Removal of Nitrates from Wastewaters: An Equilibrium Study

Authors: Isheanesu Hungwe, Munyaradzi Shumba, Tichaona Nharingo

Abstract:

High levels of nitrates in drinking water present a potential risk to human health, being responsible for methemoglobinemia in infants, and give rise to eutrophication of dams and rivers. It is therefore important to find ways of combating the increasing amount of nitrates in the environment. This study explored the bioremediation of nitrates from aqueous solution using Brachystegia spiciformis leaf powder (BSLP). The acid-treated leaf powder was characterized using FTIR and SEM before and after the nitrate biosorption and desorption experiments. The critical biosorption factors pH, contact time and biomass dosage were optimized as 4, 30 minutes and 10 g/L respectively. The equilibrium data generated from the investigation of the effect of initial nitrate ion concentration fitted the isotherm models in the order Dubinin-Radushkevich < Halsey = Freundlich < Langmuir < Temkin, based on the coefficient of determination (R²). The Freundlich adsorption intensity and the Langmuir separation factor revealed the favorability of nitrate ion sorption onto the BSLP biomass, with a maximum sorption capacity of 87.297 mg/g. About 95% of the adsorbed nitrate was removed from the biomass under alkaline conditions (pH 11), proving that regeneration of the biomass, critical in sorption-desorption cycles, is possible. It was concluded that BSLP is a multifunctional-group material characterized by both micropores and macropores that could be effectively utilized for nitrate ion removal from aqueous solutions.
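The Langmuir fit and separation factor mentioned above can be sketched as follows; the equilibrium data here are synthetic, generated from the abstract's maximum sorption capacity (87.297 mg/g, rounded to 87.3) and an assumed affinity constant b, so the fit should recover those inputs:

```python
import numpy as np

# Fit the linearised Langmuir isotherm, Ce/qe = Ce/qmax + 1/(qmax*b), by
# linear regression, then compute the separation factor RL = 1/(1 + b*C0),
# where 0 < RL < 1 indicates favourable sorption.
qmax_true, b_true = 87.3, 0.05                            # assumed parameters
Ce = np.array([5.0, 20.0, 50.0, 100.0, 200.0])            # mg/L at equilibrium
qe = qmax_true * b_true * Ce / (1 + b_true * Ce)          # mg/g adsorbed (synthetic)

slope, intercept = np.polyfit(Ce, Ce / qe, 1)             # linear regression
qmax_fit = 1.0 / slope                                    # mg/g
b_fit = slope / intercept                                 # L/mg
RL = 1.0 / (1.0 + b_fit * 100.0)                          # at C0 = 100 mg/L

print(round(qmax_fit, 1), round(b_fit, 3), round(RL, 3))
```

With real equilibrium data the same two regression coefficients give qmax and b directly, and R² of the linearised plot is the goodness-of-fit measure used in the abstract's model ranking.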

Keywords: adsorption, brachystegia spiciformis, methemoglobinemia, nitrates

Procedia PDF Downloads 235
1875 An Analytical Approach for Medication Protocol Errors from Pediatric Nurse Curriculum

Authors: Priyanka Jani

Abstract:

The main focus of this research is the objectives of the nursing curriculum concerning pediatric nurses with respect to parameters such as the causes, reporting and prevention of medication protocol errors. The design selected for the study is descriptive, cross-sectional and analytical. The population comprised nurses from the inpatient pediatric wards of 5 hospitals in Gujarat; 126 pediatric nurses agreed to participate in the research and completed the questionnaires. The data were collected and analyzed. The mean age of the nurses was 25.7 ± 3.68 years, and the great majority (97.6%) were female. The pediatric nurses stated that the most common causes of medication protocol errors were long working hours (69.2%) and a high patient-to-nurse ratio (59.9%). Although most nurses (89%) made use of a medication protocol error notification system, many errors were not reported: among the causes of under-reporting, nurses cited potential blame in case of adverse outcomes for the patient (53.97%), distrust (52.45%), and fear of differing medication protocols (42%). Regarding knowledge gaps, nurses most commonly noted the need for reliable information on the safe use of medications (47.5%). This is among the first studies in Gujarat on the pediatric nursing curriculum regarding medication protocol errors. The results indicate a need for ongoing training of pediatric nurses in safe medication practice, and the causes and reporting of medication protocol errors warrant further survey.

Keywords: pediatric, medication, protocol, errors

Procedia PDF Downloads 283
1874 Numerical Investigation of Fluid Flow, Characteristics of Thermal Performance and Enhancement of Heat Transfer of Corrugated Pipes with Various Geometrical Configurations

Authors: Ahmed Ramadhan Al-Obaidi, Jassim Alhamid

Abstract:

In this investigation, the flow pattern, thermal-hydraulic characteristics, and heat transfer enhancement of a three-dimensional corrugated-pipe heat exchanger are evaluated using a numerical technique. The modification was made under different corrugated pipe geometrical parameters, including the corrugated ring angle (CRA), the distance between corrugated rings (DBCR), and the corrugated diameter (CD), over a Re number range from 2000 to 12000. The numerical results are validated against available experimental data. The numerical outcomes reveal an important change in flow field behaviour, a significant increase in friction factor, and an improvement in heat transfer performance owing to the use of the corrugated shape in the heat exchanger pipe as compared to a conventional smooth pipe. Using corrugated pipes with different configurations promotes turbulence, flow separation, boundary layer redevelopment, and flow mixing, which leads to augmented heat transfer performance. Moreover, the pressure drop and the Nusselt number increase as the corrugated pipe geometrical parameters increase. Furthermore, the corrugation configuration shapes have an important influence on the thermal performance evaluation factor, whose maximum value was more than 1.3. Numerical simulation can thus be performed to predict the effects of various geometrical configurations on fluid flow, thermal performance, and heat transfer enhancement.
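The thermal performance evaluation factor quoted above is conventionally defined at equal pumping power as TPF = (Nu/Nu₀)/(f/f₀)^(1/3), with Nu₀ and f₀ taken from the smooth pipe; values above 1 mean the heat-transfer gain outweighs the friction penalty. A minimal sketch with illustrative values (not the simulation results):

```python
# Thermal performance evaluation factor for an enhanced (corrugated) pipe
# relative to a smooth pipe at equal pumping power:
#   TPF = (Nu/Nu0) / (f/f0)**(1/3)
def thermal_performance_factor(nu, f, nu0, f0):
    return (nu / nu0) / (f / f0) ** (1.0 / 3.0)

# Illustrative numbers: corrugation raises Nu by ~58% and f by ~83%
tpf = thermal_performance_factor(nu=95.0, f=0.055, nu0=60.0, f0=0.030)
print(round(tpf, 2))  # > 1: the corrugation pays off despite the higher friction
```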

Keywords: corrugated ring angle, corrugated diameter, Nusselt number, heat transfer

Procedia PDF Downloads 126
1873 Effect of Microwave Radiations on Natural Dyes’ Application on Cotton

Authors: Rafia Asghar, Abdul Hafeez

Abstract:

The current research concerned the extraction of natural dye from the powdered bark of Neem (Azadirachta indica) and the characterization of this dye under the influence of microwave radiation. Both the cotton fabric and the dye powder were exposed to microwave rays for different time intervals (2, 4, 6, 8 and 10 minutes) using a conventional microwave oven. Aqueous, 60% methanol and ethyl acetate extracts obtained from Neem bark were also exposed to microwave rays for the same time intervals. Pre-, meta- and post-mordanting with alum (2%, 4%, 6%, 8%, and 10%) was done to improve the color strength of the extracted dye. Exposure of the Neem bark extract and the cotton to microwave rays enhanced the extraction and dyeing processes by reducing the extraction time, dyeing time and dyeing temperature: extraction from Neem bark took 30 minutes, and dyeing cotton with that bark extract took 75 minutes at 30°C. The microwave treatment had a very strong influence on the color fastness and color strength properties of the dyed cotton. Among pre-, meta- and post-mordanting, the results indicated that alum at 5% concentration in meta-mordanting exhibited the maximum color strength.

Keywords: dyes, natural dyeing, ecofriendly dyes, microwave treatment

Procedia PDF Downloads 678