Search results for: detect and avoid
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2966

1226 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device can be used to detect gamma and X-ray radiation and is also designed to estimate the activity of the source contamination. The SiPM detects light in the visible range above the threshold and reads the pulses as counts. Three gamma sources with various activities were used for the experiments: Cs-137, Am-241 and Co-60. These sources were applied in four experiments operating the SiPM as a spectrometer: spectrometry, energy resolution, pile-up and efficiency. The SiPM was connected to an MCA (multichannel analyzer) to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26,000 photons/MeV, was coupled with the SiPM. As a result, all the main features of Cs-137, Am-241 and Co-60 were identified in the MCA spectra. The experiment shows how photon energy and probability of interaction are inversely related: total attenuation decreases as photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution is 28.75% for Am-241 (59 keV), 7.85% for Cs-137 (662 keV), 4.46% for Co-60 (1173 keV) and 3.70% for Co-60 (1332 keV). Moreover, the experiment shows that the dead time and the number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four distances from the detector face: 2, 4, 8 and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. In conclusion, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and its energy resolution for the three gamma sources compared well with that of other PMTs.
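
The percentage FWHM resolution quoted above follows directly from the fitted photopeak width and centroid. A minimal sketch in Python (the Gaussian sigma values are hypothetical, chosen only to reproduce the order of magnitude of the reported figures):

```python
import numpy as np

def resolution_percent(centroid_kev: float, sigma_kev: float) -> float:
    """Relative energy resolution: FWHM / peak energy * 100."""
    fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma_kev  # Gaussian FWHM = 2.355 * sigma
    return fwhm / centroid_kev * 100

# Illustrative sigmas for two of the photopeaks measured in the study:
print(round(resolution_percent(59.0, 7.2), 2))    # ~28.7 % (Am-241, 59 keV)
print(round(resolution_percent(662.0, 22.1), 2))  # ~7.9 %  (Cs-137, 662 keV)
```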

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 142
1225 Clinical Signs of Neonatal Calves in Experimental Colisepticemia

Authors: Samad Lotfollahzadeh

Abstract:

Escherichia coli (E. coli) is the bacterium most commonly isolated from the blood circulation of septicemic calves. Given the prevalence of septicemia in animals and its economic importance in veterinary practice, a better understanding of the changes in clinical signs following the disease may contribute to its early detection. The present study was carried out to detect changes in clinical signs in sepsis induced in calves with E. coli. Colisepticemia was induced in ten 20-day-old healthy Holstein-Friesian calves by intravenous injection of 1.5 × 10⁹ colony-forming units (cfu) of the O111:H8 strain of E. coli. Clinical signs including rectal temperature, heart rate, respiratory rate, shock, appetite, sucking reflex, feces consistency, general behavior, dehydration and standing ability were recorded in the experimental calves during the 24 hours after induction of colisepticemia. Blood cultures were also taken from the calves four times during the experiment. ANOVA with repeated measures was used to assess the changes in the calves' clinical signs in response to experimental colisepticemia, and values of P ≤ 0.05 were considered statistically significant. Mean values of rectal temperature and heart rate, as well as median values of respiratory rate, appetite, suckling reflex, standing ability and feces consistency, increased significantly during the study (P < 0.05). The median shock score did not increase significantly (P > 0.05). The results showed that the total clinical-sign score in calves with experimental colisepticemia increased significantly, although the scores of some clinical signs, such as shock, did not change significantly.
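
The repeated-measures ANOVA used here can be sketched as follows (Python with statsmodels; the data layout, column names and synthetic values are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
hours = (0, 6, 12, 24)  # hypothetical sampling times post-inoculation

# Long-format data: one heart-rate reading per calf per time point,
# with an upward drift over time plus noise.
df = pd.DataFrame({
    "calf": [c for c in range(1, 11) for _ in hours],
    "hour": [h for _ in range(10) for h in hours],
})
df["heart_rate"] = 90 + 0.8 * df["hour"] + rng.normal(0, 5, len(df))

# One within-subject factor (time); P <= 0.05 taken as significant.
print(AnovaRM(data=df, depvar="heart_rate", subject="calf", within=["hour"]).fit())
```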

Keywords: calves, clinical signs scoring, E. coli O111:H8, experimental colisepticemia

Procedia PDF Downloads 361
1224 A Study of Fatigue Life Estimation of a Modular Unmanned Aerial Vehicle by Developing a Structural Health Monitoring System

Authors: Zain Ul Hassan, Muhammad Zain Ul Abadin, Muhammad Zubair Khan

Abstract:

Unmanned aerial vehicles (UAVs) have become of predominant importance for various operations, and an immense amount of work is going on in this specific category. The structural stability and life of these UAVs are key factors that should be considered when deploying them on intelligent operations, as their failure leads to the loss of sensitive real-time data and to financial cost. This paper presents applied research on the development of a structural health monitoring (SHM) system for a UAV designed and fabricated using a modular approach. Firstly, a modular UAV was designed that allows its components to be dismantled and reassembled without affecting the whole assembly. This novel approach makes the vehicle very sustainable and significantly decreases its maintenance cost by making it possible to replace only the part leading to failure. The SHM for the designed architecture of the UAV was then specified as a combination of wings integrated with strain gauges, an on-board data logger, bridge circuitry and a ground station. For the purposes of this research, sensors were attached only to the wings, these being the most load-bearing part according to the analysis performed in ANSYS. On the basis of the load-time spectrum obtained by the data logger during flight, the fatigue life of the respective component was predicted using the fracture mechanics techniques of the rainflow method and Miner's rule. This allows the health of a specified component to be monitored over time, helping to avoid failure.
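
The fatigue-life step (rainflow counting followed by Miner's rule) can be sketched as below, assuming the third-party `rainflow` package and an illustrative Basquin-type S-N curve; the load history and material constants are placeholders, not the UAV's actual data:

```python
import numpy as np
import rainflow  # third-party package implementing rainflow cycle counting

# Placeholder strain-gauge load history (stress in MPa) from the wing data logger.
t = np.linspace(0, 10, 2000)
stress = 40 * np.sin(2 * np.pi * 1.2 * t) + 15 * np.sin(2 * np.pi * 7.0 * t) + 60

# Illustrative Basquin-type S-N curve: cycles to failure N(S) = C * S**(-m).
C, m = 1e12, 3.0  # placeholder material constants

damage = 0.0
for stress_range, count in rainflow.count_cycles(stress):
    if stress_range > 0:
        damage += count / (C * stress_range ** (-m))  # Miner's rule: sum n_i / N_i

print(f"Accumulated damage per recorded block: {damage:.2e}")
print(f"Estimated life in blocks: {1 / damage:.2e}")  # failure when damage reaches 1
```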

Keywords: fracture mechanics, rain flow method, structural health monitoring system, unmanned aerial vehicle

Procedia PDF Downloads 276
1223 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating

Authors: Feng Li, Qian Peng

Abstract:

Mass start speed skating (MSSS) was a new event at the 2018 PyeongChang Winter Olympics and will be an official race at the 2022 Beijing Winter Olympics. Considering that event rankings are based on points gained on laps, it is worthwhile to investigate the pacing behavior on each lap, which directly influences the ranking of the race. The aim of this study was to detect pacing behavior and performance in MSSS with respect to skaters' level (SL), competition stage (semi-final/final) (CS) and gender (G). All men's and women's races in the World Cup and World Championships of the 2018-2019 and 2019-2020 seasons were analyzed; in total, 601 skaters from 36 races were observed. ANOVA for repeated measures was applied to compare the pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS and G on pacing behavior and total time spent. In general, the results showed that the laps, ordered from fast to slow, grouped into cluster 1 (laps 4, 8, 12, 15, 16), cluster 2 (laps 5, 9, 13, 14), cluster 3 (laps 3, 6, 7, 10, 11) and cluster 4 (laps 1 and 2) (p=0.000). For CS, the total time spent in the final was less than in the semi-final (p=0.000). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between the middle and low levels (p=0.214). For G, men spent less total time than women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with respect to SL, CS and G, providing references for improving pacing strategy and decision-making before and during the race.

Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics

Procedia PDF Downloads 185
1222 The Sustainability of Human Resource Planning for Construction Projects

Authors: Adegbenga Ashiru, Adebimpe L. Ashiru

Abstract:

The construction industry works with a diverse workforce; hence managing human resources is considered a highly challenging task. HR planning for construction projects is a critical aspect of managing human resources in an expanding construction industry, and there are rising concerns over the failure of construction planning to achieve its goals in spite of the substantial resources allocated to it and the different planning strategies employed. To address this, this research examined the sustainability of HR planning for construction projects. Based on the researcher's experience, a quantitative approach was adopted, which provided a broader understanding of the research question; the data were analysed using descriptive and inferential statistics obtained with the Statistical Package for the Social Sciences (SPSS). The findings showed that the varying challenges of HR planning on construction projects reported in the literature were justified by the empirical findings. The paper also identified four major factors, and found that the key considerations for project HR planning (an organisational structure with the right individuals in the right positions, and an evaluation of current resources) will lead to the efficient implementation of new HR planning techniques and tools for a construction project. Essentially, the main recurring theme identified was that the management of construction organisations needs to look into the essential factors to be considered at the strategic level. Furthermore, leaders of construction project teams should consider the essential factors at the operational level, to clarify the numerous functions of HRM in construction organisations and avoid inconsistencies among the several practices on construction projects. Policy implications for the sustainability of HR planning for construction projects were indicated, and recommendations were made for future research.

Keywords: construction industry, HRM planning in construction, SHRM in construction, HR planning in construction

Procedia PDF Downloads 331
1221 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor

Authors: Yash Jain

Abstract:

The US Centers for Disease Control and Prevention has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a human is wearing a mask correctly, wearing a mask incorrectly, or not wearing a mask at all. A mask detection network was trained on two distinct datasets from the open-source website Kaggle. The first dataset, titled 'Face Mask Detection', was used to train a two-stage neural network classifier; the second, titled 'Face Mask Dataset', provided the data in YOLO format so that a TinyYoloV3 model could be trained. Based on the Kaggle data, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage classifier first identifies distinct faces within the image, and then a second-stage classifier detects the state of the mask on each face: worn correctly, worn incorrectly, or no mask at all. TinyYoloV3 was used for the live feed as well as a point of comparison against the two-stage classifier, and was trained using the Darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the TinyYoloV3 real-time detection model attained an mAP of 59%. Overall, both models were able to correctly classify the scenarios of no mask, mask, and incorrectly worn mask.
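
The two-stage idea can be sketched as follows (Python with OpenCV; the bundled Haar cascade stands in for the face-detection stage, and `mask_model` is a hypothetical trained three-class classifier, not the authors' network):

```python
import cv2

# Stage 1: locate faces (OpenCV's bundled Haar cascade, used here for illustration).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

LABELS = ["mask_correct", "mask_incorrect", "no_mask"]

def classify_frame(frame, mask_model):
    """Stage 2: run a (hypothetical) 3-class mask classifier on each face crop."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
        probs = mask_model.predict(crop[None] / 255.0)  # assumed Keras-style API
        results.append(((x, y, w, h), LABELS[int(probs.argmax())]))
    return results
```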

Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier

Procedia PDF Downloads 142
1220 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and the corresponding algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of an optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data, and we establish the relation between quantum statistical machines and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; this model may reveal further insight into quantum chaos.
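
As an illustration of the Kalman filter step mentioned above, a minimal sketch for filtering a latent AR(1) state from noisy observations (plain NumPy; the model and parameters are illustrative, not the paper's QTS specification):

```python
import numpy as np

rng = np.random.default_rng(1)
phi, q, r, n = 0.9, 0.5, 1.0, 300  # AR coefficient, state/observation noise, length

# Simulate a latent AR(1) series x and noisy observations y.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

# Kalman filter: predict-update recursion for the scalar linear-Gaussian model.
xf, P, estimates = 0.0, 1.0, []
for obs in y:
    xf, P = phi * xf, phi ** 2 * P + q         # predict
    K = P / (P + r)                            # Kalman gain
    xf, P = xf + K * (obs - xf), (1 - K) * P   # update
    estimates.append(xf)

err = np.array(estimates) - x
print(f"RMSE filtered: {np.sqrt(np.mean(err ** 2)):.3f}")
print(f"RMSE raw obs:  {np.sqrt(np.mean((y - x) ** 2)):.3f}")
```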

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 452
1219 The Exploration of Persuasive Skills and Participants Characteristics in Pyramid-Sale: A Qualitative Study

Authors: Xing Yan Fan, Xing Lin Xu, Man Yuan Chen, Pei Tzu Lee, Yu Ting Wang, Yi Xiao Cao, Rui Yao

Abstract:

Pyramid sales have been a widespread issue in China. Victims who are defrauded not only lose money but also suffer damage to their interpersonal relationships. A deeper understanding of pyramid-sale models can help prevent potential victims from being defrauded and improve property security. The goals of this study were to detect the psychological characteristics of pyramid-sale sellers and to analyse the persuasive skills used in pyramid organizations. A qualitative study was conducted. Participants (n=6) were recruited by 'snowball' sampling from current pyramid-sale sellers (n=3) and imprisoned pyramid-sale sellers (n=3). All participants took part in semi-structured interviews for data collection, and content analysis was adopted for data coding and analysis. The results indicate that pyramid organizations habitually use appearance packaging and the celebrity effect to strengthen their position in participants' minds. The status gap between pyramid-sale sellers in the same organization, as well as rewards that increase reputation, are used to motivate participants. The most significant common characteristic among all participants was a high sense of belongingness within the firm. Moreover, sellers' gambling mentality tends to grow as they keep losing money. The findings suggest that the psychological characteristics of pyramid-sale sellers are in accordance with Maslow's hierarchy of needs, and that the persuasive skills of pyramid organizations conform to the 'attitude-behaviour change model'. These findings have implications for 'immune education': providing guidance to help victims get unstuck and protecting ordinary people from the harm of pyramid sales.

Keywords: pyramid sales, characteristics, persuasive skills, qualitative study

Procedia PDF Downloads 240
1218 Lessons Learnt from a Patient with Pseudohyperkalaemia Secondary to Polycythaemia Rubra Vera in a Neuro-ICU Patient Resulting in Dangerous Interventions: Lessons Learnt on Patient Safety Improvement

Authors: Dinoo Kirthinanda, Sujani Wijeratne

Abstract:

Pseudohyperkalaemia is a common benign in vitro phenomenon caused by the release of potassium ions (K+) from cells during specimen processing. Analysis of haemolysed blood samples for predominantly intracellular electrolytes may lead to re-investigation and potentially harmful interventions. We report the case of a 52-year-old male with myeloproliferative disease manifested as Polycythaemia Rubra Vera, hypertension and hypertensive nephropathy with stage 3 chronic kidney disease, admitted to the neuro-intensive care unit (NICU) with an intra-cerebral haemorrhage secondary to a hypertensive bleed. His initial blood investigations showed hyperkalaemia with a serum K+ of 6.2 mmol/L, yet bedside arterial blood gas analysis yielded a K+ of 4.6 mmol/L. The patient was nonetheless given the hyperkalaemia regime twice on the basis of the venous electrolyte analysis. The discrepancy between the bedside electrolyte analysis of arterial blood and the venous analysis prompted further evaluation. The 12-lead electrocardiogram showed U waves and sinus bradycardia, corresponding to the serum K+ of 2.8 mmol/L on arterial blood gas analysis. Immediate K+ replacement ensured the patient did not develop life-threatening cardiac complications. Pseudohyperkalaemia may pose diagnostic challenges in the absence of detectable haemolysis and should be suspected in susceptible patients with a normal electrocardiogram and glomerular filtration rate, to avoid potentially life-threatening interventions. When in doubt, rapid arterial blood gas analysis may be useful for the accurate quantification of potassium.

Keywords: patient safety, pseudohyperkalaemia, haemolysis, myeloproliferative disorder

Procedia PDF Downloads 134
1217 The Impact of Hospital Intensive Care Unit Window Design on Daylighting and Energy Performance in Desert Climate

Authors: A. Sherif, H. Sabry, A. Elzafarany, M. Gadelhak, R. Arafa, M. Aly

Abstract:

This paper addresses the design of hospital Intensive Care Unit (ICU) windows for achieving visual comfort and energy savings. The aim was to identify the window size and shading system configurations that fulfill daylighting adequacy, avoid glare and reduce energy consumption. The study addressed the effect of utilizing different shading systems in association with a range of Window-to-Wall Ratios (WWR) in different orientations under the clear desert sky of Cairo, Egypt. The results demonstrated that solar penetration is a critical concern affecting the design of ICU windows in desert locations such as Cairo. The use of shading systems was found to be essential in providing acceptable daylight performance and energy savings, and careful positioning of the ICU window towards a proper orientation can dramatically improve performance. It was observed that ICU windows facing north enjoyed the widest range of successful window configuration possibilities at different WWRs, and windows facing south a reasonable number of configuration options as well. By contrast, ICU windows facing east had a very limited number of options providing acceptable performance; these require additional local shading measures at certain times due to glare incidence. Moreover, the use of horizontal sun breakers and solar screens to protect the ICU windows proved more successful than the other alternatives over a wide range of Window-to-Wall Ratios, whereas the use of light shelves and vertical shading devices seemed questionable.

Keywords: daylighting, desert, energy efficiency, shading

Procedia PDF Downloads 419
1216 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa

Authors: Bernard Moeketsi Hlalele

Abstract:

Drought is a complicated natural phenomenon that creates significant economic, social and environmental problems. Analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterised drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify it and neural networks (NN) to predict it. A 37-year monthly precipitation time series was obtained from an online NASA database. Prior to the final analysis, this dataset was checked for outliers in SPSS; outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results, and the non-parametric Mann-Kendall test was used to detect monotonic trends in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 × 10⁻⁷, respectively. The study area has been plagued by severe drought events on SPI-3, while SPI-12 showed an approximately 20-year cycle. The study concluded the analyses with a seasonal analysis that showed no significant trend patterns; NN were therefore used to predict possible SPI-3 values for the last season of 2018/2019 and the four seasons of 2020. The predicted drought intensities ranged from mild to extreme events to come. It is therefore recommended that farmers, agri-business owners and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
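
The SPI values used here come from fitting a gamma distribution to precipitation aggregated over the chosen time scale and mapping the cumulative probabilities to standard-normal quantiles. A minimal SPI-3 sketch (Python with SciPy, using a synthetic monthly series and omitting the zero-rainfall correction of full implementations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=37 * 12)  # synthetic 37 years

# SPI-3: aggregate precipitation over a 3-month moving window.
agg = np.convolve(monthly_precip, np.ones(3), mode="valid")

# Fit a gamma distribution, then transform cumulative probabilities to N(0, 1).
shape, loc, scale = stats.gamma.fit(agg, floc=0)
spi3 = stats.norm.ppf(stats.gamma.cdf(agg, shape, loc=loc, scale=scale))

print(f"SPI-3 range: {spi3.min():.2f} to {spi3.max():.2f}")
# Common classes: SPI <= -2.0 extreme, -1.99..-1.5 severe, -1.49..-1.0 moderate drought.
```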

Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa

Procedia PDF Downloads 113
1215 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices associated with meteorological, agricultural and hydrological phenomena. In order to handle drought disasters effectively, it is essential to accurately determine the kind, intensity and extent of the drought through drought characterization; this information is critical for managing the drought before, during and after the rehabilitation process. Over a hundred drought indices have been created in the literature, encompassing a range of factors and variables: some utilise solely hydrometeorological drivers, others employ remote sensing technology, and some combine both. Comprehending the entire notion of drought and taking into account drought indices along with their calculation processes are crucial for researchers in this discipline, yet examining the many drought metrics scattered across different studies requires considerable time and concentration. Hence, it is important to conduct a thorough examination of the approaches used in drought indices in order to identify the most straightforward ones and avoid discrepancies between scientific studies. For practical application in the real world, categorizing indices by their use in meteorological, agricultural and hydrological contexts can help researchers maximize their efficiency: users can explore different indices at the same time, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.

Keywords: drought classification, drought severity, drought indices, agricultural, hydrological

Procedia PDF Downloads 23
1214 Assessment of the Effect of Farmer-Herder Conflict on the Livelihood of Rural Households in Bogoro Local Government Area of Bauchi State, Nigeria

Authors: Luka Jumma Gizaki

Abstract:

The study assessed the effect of farmer-herder conflict on the livelihood of rural households in Bogoro L.G.A., Bauchi State, Nigeria. A multistage sampling procedure was used to randomly select 66 crop farmers in the study area. Data were collected by means of a structured questionnaire, and the results were analyzed using descriptive and inferential statistics. The results showed that the majority of the respondents were males, with a mean age of 39 years and a mean farming experience of 16 years. About 95% of the respondents had formal education, with a mean household size of 8 persons. Farmer-herder conflicts were found to be caused by grazing on growing crops, a wrong approach by farmers in raising complaints, harassment of herdsmen, the absence of grazing routes and the poisoning of uncultivated lands. Constraints to resolving the conflict were found to include personal interest, lack of government will, ethnic and religious differences, and open grazing, ranking first, second and third, among others. Six factors connected to farmer-herder conflict were found to significantly affect the livelihood of rural households: the value of crops destroyed, the number of livestock lost, the cost of treating wounds sustained in the conflict, the value of crops abandoned in fear, the size of farmland abandoned in fear, and the cost of seeking redress, which was significant at P≤0.01. It was concluded that farmer-herder conflict impacts negatively not only on crops and animals but also on the lives of farmers and herders as well as their economy. It is recommended that proper methods be adopted to avoid its occurrence, and that when it occurs, the erring party be appropriately punished.

Keywords: farmer, herder, conflict, effect, coping

Procedia PDF Downloads 11
1213 Analyzing Use of Figurativeness, Visual Elements, Allegory, Scenic Imagery as Support System in Punjabi Contemporary Theatre for Escaping Censorship

Authors: Shazia Anwer

Abstract:

This paper discusses an unusual form of resistance in theatre against the censorship board in Pakistan. An atypical approach to dramaturgy has created massive space for performers and audiences to integrate and communicate. Social and religious absolutes create suffocation in Pakistani society; strict control over all fine and performing arts has made art political, and contemporary dramatists have started an amalgamated theatre to avoid censorship. Contemporary Punjabi theatre techniques depend directly on human cognition; the idea of indirect thought processing is not unique, but it is dependent on spectators. The paper gives an account of these techniques and their use for conveying specific messages to audiences. For the dramaturge of today, theatre space is an expression representing a linguistic formulation that includes qualities of experimental and non-traditional use of classical theatrical space, in the context of fulfilling the concept of open theatre. The paper explains the transformation of the theatrical experience into an event in which the actor and the audience co-exist and co-experience the dramatic event; the denial of the existence of the fourth wall makes two-way communication possible. It further elaborates how previously marginalized genres such as naach, jugat and miras are extensively included to counter the censorship board. Figurativeness, visual elements, allegory and scenic imagery are the basic support system of contemporary Punjabi theatre. The body of the actor is used as a source of non-verbal communication and as an escape from the traditional theatrical space, which by every means contains every element that could be controlled and reprimanded by the controlling authority.

Keywords: communication, Punjabi theatre, figurativeness, censorship

Procedia PDF Downloads 123
1212 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting the early stages of microalbuminuria. This study applies the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and uses linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that detects and quantifies patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentration were tested using both a VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-vs-predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) per five test samples than the VITROS Analyzer.
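
The reported comparison reduces to a least-squares fit of predicted against actual readings plus a coefficient of determination. A sketch (Python; the paired albumin readings are placeholders, not the 30 measured samples):

```python
import numpy as np

# Hypothetical paired measurements: VITROS (actual) vs microfluidic (predicted).
actual = np.array([12.1, 25.4, 38.0, 51.2, 64.7, 80.3])
predicted = np.array([13.2, 27.8, 41.5, 55.9, 70.6, 87.6])

slope, intercept = np.polyfit(actual, predicted, 1)

# Coefficient of determination of the linear fit.
fitted = slope * actual + intercept
r2 = 1 - np.sum((predicted - fitted) ** 2) / np.sum((predicted - predicted.mean()) ** 2)

print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r2:.4f}")
# The paper reports y = 1.09x + 0.07 with R^2 = 0.9995 for its 30 samples.
```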

Keywords: chronic kidney disease, linear regression, microfluidics, urinary albumin

Procedia PDF Downloads 121
1211 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions

Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola

Abstract:

Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not considered potential hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining and desalination plants. As a result, there has been an accelerated adoption of pumps-as-turbine technology due to the economic advantages it presents in comparison to conventional turbines in the micro-hydropower space. The performance of these machines under cavitating conditions, however, is not well understood, as the literature on their turbine mode of operation is deficient. In hydraulic machines, cavitation is a common occurrence that needs to be understood in order to safeguard them and prolong their operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring the effect this has on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds and at lower flow rates for higher operating speeds. This implies that for cavitation-free operation, low-speed pumps-as-turbines should be used under low flow rate conditions, whereas for sites with higher flow rates, high-speed machines should be adopted. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of the micro-hydropower plant.

Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design

Procedia PDF Downloads 91
1210 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the step effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal peak characteristics. Theoretical analysis and experimental results show that this algorithm effectively improves the output signal-to-noise ratio and has superior performance.
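
To illustrate the baseline that the paper improves upon, standard first-order TV denoising can be run with scikit-image; this is the classical method, not the proposed high-order adaptive variant, and the test signal is synthetic rather than MCG data:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1000)
# Synthetic signal: sharp peaks on a flat baseline, loosely mimicking cardiac peaks.
clean = np.exp(-((t[:, None] - np.array([0.25, 0.55, 0.85])) ** 2) / 2e-4).sum(axis=1)
noisy = clean + rng.normal(0, 0.15, t.size)

denoised = denoise_tv_chambolle(noisy, weight=0.3)

snr = lambda x: 10 * np.log10(np.sum(clean ** 2) / np.sum((x - clean) ** 2))
print(f"SNR noisy: {snr(noisy):.1f} dB, SNR denoised: {snr(denoised):.1f} dB")
# First-order TV leaves the step (staircase) artifact around peaks that the
# proposed high-order, adaptive-parameter variant is designed to reduce.
```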

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 140
1209 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production

Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia

Abstract:

Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid; the oil is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger for the production of the steam driving the turbines of the power plant. Currently, PTCs with direct steam generation (DSG) are being developed. This process consists of circulating water under pressure in the receiver tube to generate steam directly in the solar loop, which makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drops in these systems are an important parameter for ensuring their proper operation. The determination of these losses is complex because of the presence of the two phases, and most often they are described by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressures and inlet flow rates ranging from 3 to 10 MPa and from 0.4 to 0.6 kg/s, respectively. The comparison of the numerical results with experiment demonstrates the validity of some models, depending on the pressures and flow rates at the inlet of the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.

Keywords: direct steam generation, parabolic trough collectors, pressure drop, empirical models

Procedia PDF Downloads 127
1208 Effects of Probiotics on Specific Immunity in Broiler Chicken in Syria

Authors: Moussa Majed, Omar Yaser

Abstract:

The main objective of this experiment was to study the impact of a probiotic compound on specific immunity, with infectious bursal disease as the case study. A total of 8000 one-day-old Ross 108 broilers were randomly divided into two experimental groups: a control group (4500 birds) and an experimental group (3500 birds). Birds in both groups were reared under similar environmental conditions. Birds in the control group received basal diets without probiotic, whereas the birds in the experimental group were fed basal diets supplemented with a commercial probiotic mixture (Probiotic Lacting K), which contains bacterial cells belonging to the Lactobacillus, Streptococcus and Bifidobacterium genera, isolated from the gut microflora of healthy chickens. The commercial probiotic was used according to the manufacturer's instructions. For each group, 400 blood samples were collected from the wing vein at intervals of 5-7 days up to 42 days of age. An indirect Enzyme-Linked Immunosorbent Assay (ELISA) was performed to detect the level of infectious bursal disease virus (IBDV) antibodies. The results clearly showed that the mean immune titer was significantly (p = 0.03) higher in the trial group than in the control group. The coefficients of variation were 55% and 39% for the control and trial groups respectively, illustrating that the homogeneity of immunity titers in the trial group was much better than in the control group. The geometric mean titers in the control and trial groups were 3820 and 8133, respectively. The crude mortality rate in the experimental group was two times lower than in the control group (14% and 28% respectively, p = 0.005).
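
The two summary statistics quoted above are computed as follows (Python sketch with placeholder titers, not the study's raw data):

```python
import numpy as np

titers = np.array([3200, 6400, 12800, 6400, 12800, 25600])  # hypothetical ELISA titers

gmt = np.exp(np.log(titers).mean())             # geometric mean titer
cv = titers.std(ddof=1) / titers.mean() * 100   # coefficient of variation, %

print(f"GMT = {gmt:.0f}, CV = {cv:.0f}%")  # a lower CV means more homogeneous titers
```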

Keywords: probiotic, broiler chicken, infectious bursal disease, immunity, ELISA test

Procedia PDF Downloads 55
1207 Synthesis and Characterization of CNPs Coated Carbon Nanorods for Cd²⁺ Ion Adsorption from Industrial Waste Water and Reusable for Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga

Abstract:

This study reports a new approach: preparing a carbon nanoparticle-coated cerium oxide nanorod (CNPs/CeONRs) nanocomposite and reusing the spent Cd²⁺-CNPs/CeONRs adsorbent for latent fingerprint (LFP) detection after removing Cd²⁺ ions from aqueous solution. The CNPs/CeONRs nanocomposite was prepared from CNPs and CeONRs by adsorption processes. The prepared nanocomposite was then characterized using UV-visible spectroscopy, Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), zeta potential and X-ray photoelectron spectroscopy (XPS). The average size of the CNPs was 7.84 nm. The synthesized CNPs/CeONRs nanocomposite proved to be a good adsorbent for Cd²⁺ removal from water, with an optimum pH of 8 and a dosage of 0.5 g/L. The results were best described by the Langmuir model, which indicated a linear fit (R² = 0.8539-0.9969). The adsorption capacity of the CNPs/CeONRs nanocomposite showed the best removal of Cd²⁺ ions, with qm = 32.28-59.92 mg/g, when compared to previous reports. The adsorption followed pseudo-second-order kinetics and intraparticle diffusion processes. The ∆G and ∆H values indicated spontaneity at high temperature (40 °C) and the endothermic nature of the adsorption process. The CNPs/CeONRs nanocomposite therefore showed potential as an effective adsorbent. Furthermore, the metal-loaded adsorbent, Cd²⁺-CNPs/CeONRs, proved to be sensitive and selective for LFP detection on various porous substrates. Hence the Cd²⁺-CNPs/CeONRs nanocomposite can be reused as a good fingerprint labelling agent in LFP detection, avoiding secondary environmental pollution from disposal of the spent adsorbent.
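
The Langmuir parameters reported above (qm, R²) can be obtained with a standard non-linear least-squares fit; a sketch in Python with SciPy, using made-up equilibrium data rather than the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * KL * Ce / (1 + KL * Ce)

# Hypothetical equilibrium data: Ce (mg/L) and adsorbed amount qe (mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([9.0, 18.0, 27.3, 36.5, 43.8, 48.7])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(50.0, 0.05))

residuals = qe - langmuir(Ce, qm, KL)
r2 = 1 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
print(f"qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.4f}")
```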

Keywords: Cd²⁺-CNPs/CeONRs nanocomposite, cadmium adsorption, isotherm, kinetics, thermodynamics, reusable for latent fingerprint detection

Procedia PDF Downloads 100
1206 Detection of Trends and Break Points in Climatic Indices: The Case of Umbria Region in Italy

Authors: A. Flammini, R. Morbidelli, C. Saltalippi

Abstract:

The increase of air surface temperature at the global scale is a fact, with values around 0.85 ºC since the late nineteenth century, accompanied by significant changes in the main features of the rainfall regime. Nevertheless, the detected climatic changes are not equally distributed over the world but exhibit specific characteristics in different regions. Therefore, studying the evolution of climatic indices in different geographical areas with a prefixed standard approach is very useful for analyzing the existence of climatic trends and comparing results. In this work, a methodology to investigate climatic change and its effects on a wide set of climatic indices is proposed and applied at the regional scale to the case study of a Mediterranean area, the Umbria region in Italy. From the data of the available temperature stations, nine temperature indices were obtained, and the existence of trends was checked by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) were applied to detect the presence of break points. In addition, to characterize the rainfall regime, data from 11 rainfall stations were used, and a trend analysis was performed on cumulative annual rainfall depth, daily rainfall, rainy days and dry period length. The results show a general increase in all temperature indices, although with trend patterns depending on the index and the station, and a general decrease in cumulative annual rainfall and average daily rainfall, with a distribution of rainfall over the year different from that of the past.
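
The Mann-Kendall statistic used for the trend detection counts concordant minus discordant pairs and normalizes the result to a Z score. A minimal sketch (Python, omitting the tie correction that full implementations apply):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie correction, for illustration)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S: sum of signs over all pairs (j > i).
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

rng = np.random.default_rng(4)
series = 0.02 * np.arange(50) + rng.normal(0, 0.5, 50)  # weak upward trend + noise
z, p = mann_kendall(series)
print(f"Z = {z:.2f}, p = {p:.4f}  (p < 0.05 -> significant monotonic trend)")
```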

Keywords: climatic change, temperature, rainfall regime, trend analysis

Procedia PDF Downloads 98
1205 Integrating a Security Operations Centre with an Organization’s Existing Procedures, Policies and Information Technology Systems

Authors: M. Mutemwa

Abstract:

A Cybersecurity Operations Centre (SOC) is a centralized hub for network event monitoring and incident response. SOCs are critical when determining an organization's cybersecurity posture because they can be used to detect, analyze and report on various malicious activities. For most organizations, a SOC is not part of the initial design and implementation of the Information Technology (IT) environment but rather an afterthought. As a result, it is not natively a plug-and-play component, and there are integration challenges when a SOC is introduced into an organization. A SOC is an independent hub that needs to be integrated with an organization's existing procedures, policies and IT systems, such as the service desk, ticket logging system, reporting, etc. This paper discusses the challenges of integrating a newly developed SOC into an organization's existing IT environment. Firstly, the paper looks at which data sources should be incorporated into the Security Information and Event Management (SIEM) system for security posture monitoring, such as host machines, servers, network end points, software, applications and web servers, that is, which systems need to be monitored first and the order in which the rest of the systems follow. Secondly, the paper describes how to integrate the organization's ticket logging system with the SOC SIEM, that is, how cybersecurity-related incidents should be logged by both analysts and non-technical employees of an organization, along with the priority matrix for incident types and incident notifications. Thirdly, the paper looks at how to communicate awareness campaigns from the SOC and how to report on incidents found inside the SOC. Lastly, the paper looks at how to show value for the large investments poured into designing, building and running a SOC.
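
The priority matrix mentioned for incident logging can be represented as a simple severity-by-criticality lookup; a sketch in Python (the tiers and response times are illustrative, not taken from the paper):

```python
# Hypothetical SOC priority matrix: (severity, asset criticality) -> priority tier.
PRIORITY = {
    ("high",   "critical"): "P1 - respond within 15 min",
    ("high",   "standard"): "P2 - respond within 1 h",
    ("medium", "critical"): "P2 - respond within 1 h",
    ("medium", "standard"): "P3 - respond within 8 h",
    ("low",    "critical"): "P3 - respond within 8 h",
    ("low",    "standard"): "P4 - next business day",
}

def triage(severity: str, asset: str) -> str:
    """Map a logged incident to a response tier; default to P4 when unknown."""
    return PRIORITY.get((severity, asset), "P4 - next business day")

print(triage("high", "critical"))  # e.g. ransomware on a domain controller
print(triage("low", "standard"))   # e.g. a single failed login on a workstation
```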

Keywords: cybersecurity operation centre, incident response, priority matrix, procedures and policies

Procedia PDF Downloads 139
1204 The Protein Interactome of Escherichia coli Glutaredoxin 3 Expands its Possible Cellular Functions

Authors: Charalampos N. Bompas, Eleni Poulou-Sidiropoulou, Martina Samiotaki, Alexios Vlamis-Gardikas

Abstract:

In all living organisms, antioxidant defenses are orchestrated by the thioredoxin (Trx) and glutaredoxin (Grx) systems. The Trx system of Escherichia coli (E. coli) comprises Trx1 and Trx2, both reduced by thioredoxin reductase (TrxR). The Grx system consists of four Grxs (Grx1, Grx2, Grx3 and Grx4), all reduced by glutathione (GSH) except for Grx4, which is reduced by TrxR. Under normal conditions, the GSH reductase of the Grx system keeps GSH in its reduced state, and NADPH provides the electrons for all reductions in the Trx and Grx systems. Although the role of the E. coli Trx system is widely known, the function of the Grx system reflects the main property of Grx1, which is the reduction of ribonucleotide reductase Ia (RRIa). E. coli Grx3 (encoded by grxC) may also reduce RRIa in vitro, but with slow kinetics. The molecule may account for up to 0.4% of total soluble protein and has been the subject of extensive structural studies; its biological function, however, remains unknown. Herein, affinity chromatography with monothiol Grx3 serving as bait was used to detect the interactions of Grx3 with other proteins. Different types of interactions were identified (covalent, weak and strong non-covalent) that suggest novel functions for Grx3. In silico approaches were employed to validate selected interactions. In addition, total protein extracts from the null mutant for grxC and the wild-type strain were compared. The overall findings suggest that Grx3 is involved in various metabolic processes, protein synthesis and stress responses, expanding the recognized functions of Grx3 beyond the possible reduction of RRIa.

Keywords: escherichia coli, glutaredoxin 3, interactome, thiol-disulfide oxidoreductase

Procedia PDF Downloads 35
1203 Electrochemical Detection of the Chemotherapy Agent Methotrexate in vitro from Physiological Fluids Using Functionalized Carbon Nanotube Paste Electrodes

Authors: Shekher Kummari, V. Sunil Kumar, K. Vengatajalabathy Gobi

Abstract:

A simple, cost-effective, reusable and reagent-free electrochemical biosensor based on a functionalized multiwall carbon nanotube paste electrode (f-CNTPE) is developed for the sensitive and selective determination of the important chemotherapeutic drug methotrexate (MTX), which is widely used for the treatment of various cancers and autoimmune diseases. The electrochemical response of the fabricated electrode towards the detection of MTX is examined by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and square wave voltammetry (SWV). CV studies showed that the f-CNTPE exhibited excellent electrocatalytic activity towards the oxidation of MTX in phosphate buffer (0.2 M) compared with a conventional carbon paste electrode (CPE); the oxidation peak current was enhanced nearly twofold. Applying the DPV method under optimized conditions, a linear calibration plot was achieved over a wide concentration range from 4.0×10⁻⁷ M to 5.5×10⁻⁶ M, with a detection limit of 1.6×10⁻⁷ M. Further, applying the SWV method, a parabolic calibration plot was achieved starting from the very low concentration of 1.0×10⁻⁸ M; the sensor could detect as little as 2.9×10⁻⁹ M MTX in 10 s, and 10 nM was detected in steady-state current-time analysis. The f-CNTPE shows very good selectivity towards the specific recognition of MTX in the presence of important biological interferents. The biosensor detects MTX in vitro directly in pharmaceutical, undiluted urine and human blood serum samples at a concentration of 5.0×10⁻⁷ M, with good recovery.
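
The detection limit quoted alongside the DPV calibration is conventionally taken as three times the standard deviation of blank responses divided by the calibration slope. A sketch (Python; the currents and blank replicates are made-up numbers on the reported concentration range):

```python
import numpy as np

# Hypothetical DPV calibration: MTX concentration (M) vs peak current (uA).
conc = np.array([4e-7, 1e-6, 2e-6, 3e-6, 4.5e-6, 5.5e-6])
current = np.array([0.21, 0.52, 1.03, 1.55, 2.31, 2.80])

slope, intercept = np.polyfit(conc, current, 1)

# LOD = 3 * (standard deviation of blank responses) / slope.
blank_sd = np.std([0.010, 0.013, 0.008, 0.011, 0.012], ddof=1)
lod = 3 * blank_sd / slope

print(f"Sensitivity: {slope:.3e} uA/M, LOD ~ {lod:.1e} M")
```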

Keywords: amperometry, electrochemical detection, human blood serum, methotrexate, MWCNT, SWV

Procedia PDF Downloads 296
1202 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based on Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authorship problems. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying; watermarks need to be robust to attacks while maintaining data quality. This paper discusses two approaches to image watermarking, the first based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The Discrete Wavelet Transform (DWT) is used with each approach separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. According to the results, a small block size can degrade the quality of PSO/GA-based image watermarking because it increases the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
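
The PSNR and MSE figures used to compare the two approaches follow the standard definitions; a sketch (Python, on a random 8-bit image pair rather than actual watermarked output):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    return 10 * np.log10(peak ** 2 / mse(a, b))

rng = np.random.default_rng(5)
original = rng.integers(0, 256, (256, 256), dtype=np.uint8)
watermarked = np.clip(original + rng.integers(-1, 2, original.shape), 0, 255)

print(f"MSE  = {mse(original, watermarked):.4f}")
print(f"PSNR = {psnr(original, watermarked):.1f} dB")  # higher = less distortion
```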

Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform

Procedia PDF Downloads 213
1201 Thriving Organisations: Recommendations to Create a Workplace Culture That Prioritises Both Well-being and Performance Equally

Authors: Clare Victoria Martin

Abstract:

With reports of increased mental health problems and a lack of proactive, consistent well-being initiatives, well-being is a topical issue in the workplace, as well as a wider public health concern. Additionally, workplace well-being is closely linked to performance, both from a business perspective and in psychological research. Businesses are therefore becoming increasingly motivated to promote well-being, yet there are still barriers, including a lack of evidence-based workplace interventions, issues with measuring effectiveness and problems creating lasting cultural change. This review aimed to collate workplace well-being research to propose a comprehensive new model for delivering evidence-based workplace well-being training with a real potential for lasting impact. Method: A narrative review was conducted to meta-synthesise relevant research. Thematic analysis was then adopted as a systematic method of identifying key themes from the review to lead to practical recommendations. Interventions focusing on strengths, psychological capital, mindfulness and positivity (SPMP) dominated the research in this area, suggesting benefits of incorporating all four into training. However, to avoid a ‘quick fix’ mentality, the concept of training ‘well-being ambassadors’ as a preventative counterpart to mental health ‘first aiders’ was proposed alongside a new ‘REST and RISE’ model: well-being interventions should be ‘relatable’, ‘enjoyable’, ‘sociable’ and ‘trackable’ (REST) in order to increase ‘resilience’, ‘innovation’, ‘strengths’ and ‘engagement’ (RISE). If the REST principles are applied to interventions focusing on SPMP, research suggests individuals will RISE. Future research should empirically test this new well-being ambassador programme and REST/RISE model in an applied setting.

Keywords: performance, positive psychology, thriving, workplace well-being

Procedia PDF Downloads 104
1200 Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates

Authors: Amirhossein Lotfi, Huaizhong Li, Dzung Viet Dao

Abstract:

Interest in natural fiber-reinforced composites (NFRC) is growing steadily both in academic research and in industrial applications, thanks to their abundant advantages such as low cost, biodegradability, eco-friendly nature and relatively good mechanical properties. However, their widespread use is still considered challenging because of their specific non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings needed to avoid defects associated with the machining process. The present work investigates the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and consequently on drilling-induced damage. The lowest thrust force and best hole quality were observed using the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and stability in the variation of thrust force with the number of holes drilled, while the HSS drill showed the lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.

Keywords: natural fiber reinforced composites, delamination, thrust force, machinability

Procedia PDF Downloads 120
1199 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that case loads in public administration could be divided between a manual and an automated decision track, the automated track being an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load will be dealt with by both a human case worker and the automated recommender system; in those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model ensures that the algorithmic recommender system does not compromise the quality of legal decision-making in the institution. It also enhances the legitimacy of algorithmic decision support because it provides justification for its use: the system is seen as superior to human decisions when its recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context-sensitive; both of these issues require continuous supervision of, and adjustments to, algorithmic recommender systems when they are used for legal decision-making purposes.

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 200
1198 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users, social life and even practical life have become interrelated with these platforms; their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter; these factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with recent research in the same area, and this comparison confirms the accuracy of the proposed approach. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social network sites such as Facebook with minor changes according to the nature of the social network, as discussed in this paper.
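
The comparison of classification techniques over a reduced feature set can be sketched with scikit-learn (synthetic features standing in for the paper's Twitter account attributes):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for account features (e.g. followers/friends ratio,
# account age, tweet rate); labels: 1 = fake, 0 = genuine.
X, y = make_classification(n_samples=1000, n_features=7, n_informative=5,
                           random_state=0)

classifiers = {
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Select the most accurate algorithm by cross-validated accuracy.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```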

Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques

Procedia PDF Downloads 387
1197 Rapid Classification of Soft Rot Enterobacteriaceae Phyto-Pathogens Pectobacterium and Dickeya Spp. Using Infrared Spectroscopy and Machine Learning

Authors: George Abu-Aqil, Leah Tsror, Elad Shufan, Shaul Mordechai, Mahmoud Huleihel, Ahmad Salman

Abstract:

Pectobacterium and Dickeya spp., which negatively affect a wide range of crops, are the main causes of aggressive diseases of agricultural crops. These aggressive diseases are responsible for huge economic losses in agriculture, including a severe decrease in the quality of stored vegetables and fruits. Therefore, it is important to detect these pathogenic bacteria at the early stages of infection, to control their spread and consequently reduce the economic losses. In addition, early detection is vital for producing non-infected propagative material for future generations. The molecular techniques currently used for the identification of these bacteria at the strain level are expensive and laborious, and other techniques require a long time, ~48 h, for detection. Thus, there is a clear need for rapid, inexpensive, accurate and reliable techniques for the early detection of these bacteria. In this study, infrared spectroscopy, a well-established technique with all its attendant advantages, was used for the rapid detection of Pectobacterium and Dickeya spp. at the strain level. The bacteria were isolated from potato plants and tubers with soft rot symptoms and measured by infrared spectroscopy. The obtained spectra were analyzed using different machine learning algorithms, and the performance of our approach for taxonomic classification among the bacterial samples was evaluated in terms of success rates. The success rates for correct classification at the genus, species and strain levels were ~100%, 95.2% and 92.6%, respectively.
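
A typical pipeline for this kind of spectral classification can be sketched with scikit-learn (synthetic "spectra" replace the measured FTIR data, and the scaler-PCA-SVM chain is a common choice, not necessarily the authors' exact algorithm):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_per_class, n_points = 60, 400

# Synthetic "absorbance spectra" for three classes (e.g. Pectobacterium,
# Dickeya, control): a scaled template plus noise.
base = np.sin(np.linspace(0, 12, n_points))
X = np.vstack([base * (1 + 0.1 * k) + rng.normal(0, 0.2, (n_per_class, n_points))
               for k in range(3)])
y = np.repeat([0, 1, 2], n_per_class)

# Scale, reduce dimensionality, classify; report the cross-validated success rate.
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)
print(f"Success rate: {scores.mean():.1%} +/- {scores.std():.1%}")
```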

Keywords: soft rot enterobacteriaceae (SRE), pectobacterium, dickeya, plant infections, potato, solanum tuberosum, infrared spectroscopy, machine learning

Procedia PDF Downloads 86