Search results for: security measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5254


514 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is a significant source of drinking and irrigation water in Miryang city, owing to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, together with the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research applied multidisciplinary geostatistical approaches, including multivariate statistics, factor analysis, cluster analysis and kriging, to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, and to develop risk maps from chemical analyses of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Chemical maps produced with a 2-D Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that contamination rates were higher in the central and southern areas and relatively lower in the northern and southwestern parts, which could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, scatter diagrams indicated that the groundwater of the study area was mainly influenced by saline water and NO3. pH measurements revealed mildly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water was the major contributor to the higher values of TDS and EC.
Based on the cluster analysis results, the groundwater was categorized into three groups: a CaHCO3 type of fresh water, a NaHCO3 type slightly influenced by sea water, and Ca-Cl and Na-Cl types heavily affected by saline water. CaHCO3 was the most predominant water type in the study area. Contamination sources and chemical characteristics were identified from the interrelationship of factor analysis and cluster analysis. The chemical elements loading on factor 1 were related to the effect of sea water, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution and location of groundwater contamination were mapped using kriging. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality, and GIS proved an effective tool for visualizing and analyzing the issues affecting water quality in Miryang city.
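As an illustration of the cluster-analysis step described above, the sketch below groups hypothetical water-chemistry measurements into three clusters with Ward-linkage hierarchical clustering. The sample values and the three variables are invented for illustration; the study's own data and full variable set are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical measurements for nine wells: TDS (mg/L), Cl (mg/L), NO3 (mg/L)
samples = np.array([
    [180, 20, 5], [200, 25, 8], [190, 22, 6],              # fresh CaHCO3-type water
    [900, 300, 15], [950, 320, 18], [880, 310, 12],        # slightly saline NaHCO3 type
    [4000, 1800, 30], [4200, 1900, 28], [3900, 1750, 25],  # Ca-Cl / Na-Cl saline water
])

# Standardize each variable so TDS does not dominate the Euclidean distances
z = (samples - samples.mean(axis=0)) / samples.std(axis=0)

# Ward-linkage dendrogram cut into three clusters, mirroring the study's grouping
labels = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")
print(labels)
```

Kriging would then interpolate concentrations (or cluster-derived indicators) onto a grid to produce the GIS risk maps.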

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 150
513 Examination of Corrosion Durability Related to Installed Environments of Steel Bridges

Authors: Jin-Hee Ahn, Seok-Hyeon Jeon, Young-Bin Lee, Min-Gyun Ha, Yu-Chan Hong

Abstract:

The corrosion durability of steel bridges is generally affected by the atmospheric environment at the installation site, since corrosion is related to environmental factors such as humidity, temperature, airborne salt, and chemical components such as SO₂ and chlorides. Thus, atmospheric environmental conditions should be measured to estimate the corrosion condition of steel bridges, in addition to measuring actual corrosion damage of structural members. Even in the same atmospheric environment, the corrosion environment may differ depending on the orientation of the structural members. In this study, therefore, atmospheric corrosion monitoring was conducted using atmospheric corrosion monitoring sensors, a hygrometer, a thermometer and an airborne salt collection device to examine the corrosion durability of steel bridges. A cable-stayed bridge with truss steel members was selected as the target for corrosion durability monitoring. The bridge is located on the coast and connects islands. Atmospheric corrosion monitoring was carried out for different structural orientations of the cable-stayed bridge with truss-type girders, since it consists of structural members facing various directions. For the monitoring, the daily average corrosion current was measured at each monitored member to evaluate the corrosion environment and corrosion level of members with various orientations, which experience different corrosion environments in the same installation area. To relate corrosion durability to the monitoring data, monitoring steel plates were additionally installed on the same monitored members. The monitoring plates of carbon steel were fabricated with a width of 60 mm and a thickness of 3 mm.
Their surfaces were cleaned by blasting to remove rust, and their weight was measured before installation on each structural member. After a 3-month exposure period in the actual atmospheric corrosion environment at the bridge, the surface condition of the atmospheric corrosion monitoring sensors and monitoring steel plates was inspected for corrosion damage. When severe deterioration of the sensors or corrosion damage of the plates was found, they were replaced or collected. From the 3-month exposure tests on the actual steel bridge, rust was found on the surface of the monitoring steel plates, and visual inspection revealed differences in corrosion rate depending on the orientation of the structural member. The daily average corrosion current also varied with member orientation. However, it is difficult to establish relative differences in the corrosion durability of steel structural members from short-term monitoring results alone; after longer exposure tests in this corrosion environment, the differences in corrosion durability depending on the installed conditions of steel bridges can be clearly evaluated. Acknowledgements: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03028755).
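The weighed mass-loss plates described above allow a corrosion rate to be computed with the standard ASTM G1 relation CR = K·W/(A·T·D). The numbers below (plate length, mass loss, exposure time) are illustrative assumptions, not measurements from this study:

```python
# Corrosion rate from mass loss (ASTM G1): CR [mm/year] = K * W / (A * T * D)
K = 8.76e4          # constant giving the rate in mm/year (A in cm^2, T in h, D in g/cm^3)
W = 0.05            # mass loss in g (assumed value)
A = 2 * 6.0 * 6.0   # exposed area in cm^2, assuming a 60 mm x 60 mm plate, both faces
T = 3 * 730         # exposure time in hours (~3 months)
D = 7.85            # density of carbon steel in g/cm^3

CR = K * W / (A * T * D)
print(round(CR, 5))  # corrosion rate in mm/year
```

Comparing CR values computed this way for plates of different orientations would quantify the visual differences the inspection reported.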

Keywords: corrosion, atmospheric environments, steel bridge, monitoring

Procedia PDF Downloads 335
512 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy

Authors: M. N. Baig, F. N. Khan, M. Junaid

Abstract:

Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages, such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, high melting point and good fatigue behavior. These attractive properties make titanium alloys unique, and they therefore require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) to achieve full-penetration bead-on-plate (BoP) welds. The weldments were studied using polarized optical microscopy, SEM, EDS and XRD. Microhardness distributions across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, measured using the hole-drilling strain measurement (HDSM) method, and deformation patterns of the weldments were recorded for comparison of the two welding processes. The fusion zone widths of the EBW and LBW weldments were found to be approximately equal, owing to the similarly high power densities of the two processes. Relatively lower oxide content, and consequently higher joint quality, was achieved in the EBW weldment compared with LBW due to the vacuum environment and the absence of any shielding gas. However, an increased heat-affected zone width and only partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed, because of the lower cooling rates of EBW compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was lower than that of the LBW weldment due to these microstructural differences.
The notch tensile specimen of the LBW weldment exhibited higher load capacity, ductility, and absorbed energy than the EBW specimen due to the presence of the high-strength α′ martensitic phase. Sheet deformation and deformation angle were greater in the EBW weldment than in the LBW weldment due to greater heat retention in EBW, which led to larger thermal strains and hence greater deformation. The lowest residual stresses were found in the LBW weldments and were tensile in nature, owing to the high power density and higher cooling rates of the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which its service life is expected to improve.

Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys

Procedia PDF Downloads 200
511 A Study on the Impact of Scheduled Preventive Maintenance on Overall Service Life and Reduction of Operational Downtime of Critical Oil Field Mobile Equipment

Authors: Dipankar Deka

Abstract:

Exploration and production of oil and gas is a very challenging business on which a nation's energy security depends. The exploration and production of hydrocarbons is a precise and time-bound process. Striking hydrocarbons in a drilled well is so uncertain that the success rate was only 31% in 2021, according to Rigzone. Huge costs are involved in drilling as well as in producing hydrocarbons from a well. For this very reason, no one can afford to lose a well because of faulty machines, which increase non-productive time (NPT). Numerous activities involving manpower and machines are synchronized precisely to complete the full cycle of exploration, rig movement, drilling and production of crude oil. Several machines, both fixed and mobile, are used in the complete cycle. Most of these machines work to a tight schedule across multiple drilling sites being drilled simultaneously, leaving a very narrow window for maintenance. The shutdown of any of these machines for even a short period delays the whole project and increases the cost of hydrocarbon production manyfold. Moreover, these machines are custom designed exclusively for oil field operations, to be used only in Mining Exploration Licensed (MEL) areas earmarked by the government, and are imported and very costly. The cost of some of these mobile units, such as the well logging units, coil tubing units and nitrogen pumping units used for well stimulation and activation, exceeds 1 million USD per unit. Extending the service life of these units therefore also generates substantial revenue over the extended duration of their service. In this paper we consider very critical mobile oil field equipment, such as the well logging unit, coil tubing unit, well-killing unit, nitrogen pumping unit, MOL oil field truck and hot oil circulation unit, and their extensive preventive maintenance in our auto workshop.
This paper is the outcome of 10 years of structured automobile maintenance and minute documentation of each associated event, which allowed us to perform a comparative study of the new practice of preventive maintenance against the age-old practice of system-based corrective maintenance and its impact on the service life of the equipment.

Keywords: automobile maintenance, preventive maintenance, symptom based maintenance, workshop technologies

Procedia PDF Downloads 62
510 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection

Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi

Abstract:

During their lifetime, industrial machines are often subjected to extreme chemical, mechanical and thermal conditions. In some cases, loss of efficiency stems from degradation of the surface as a result of exposure to abrasive environments that can cause wear. This is a common problem in industries of diverse nature, such as the food, paper or concrete industries, among others. For this reason, good material selection is of high importance. In machine design, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings of titanium-tantalum, titanium-niobium, titanium-hafnium, and titanium-zirconium were developed by magnetron sputtering using PVD (Physical Vapor Deposition) technology. Their tribological behavior was measured and evaluated under different environmental conditions. Two steels were used as substrates, AISI 304 and AISI 316; a titanium alloy substrate was also employed for comparison. Regarding characterization, wear rate and friction coefficient were evaluated with a tribo-tester in a pin-on-ball configuration, with lubricants such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, to approximate real extreme conditions. In addition, topographical images of the wear tracks were obtained to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coatings.
The characterization was completed with nanoindentation measurements of hardness and elastic modulus. Concerning the results, coating thicknesses varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the Ti interlayer improved the adhesion of the coatings. Moreover, the results showed that these coatings increased the wear resistance, compared with the original substrates, in environments of different severity. Furthermore, nanoindentation results showed an improvement in elastic strain to failure and a high elastic modulus (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of commonly used machine materials.
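Tribometer wear measurements like those above are commonly reduced to a specific wear rate via Archard's relation k = V/(F·s). A minimal sketch with invented pin-on-ball test values (not this study's data):

```python
def specific_wear_rate(wear_volume_mm3, normal_load_N, sliding_distance_m):
    """Archard specific wear rate k = V / (F * s), in mm^3/(N*m)."""
    return wear_volume_mm3 / (normal_load_N * sliding_distance_m)

# Hypothetical test: 0.002 mm^3 of material lost under a 5 N load over 100 m of sliding
k = specific_wear_rate(0.002, 5.0, 100.0)
print(k)  # in mm^3/(N*m); a lower k means better wear resistance
```

Comparing k between coated and uncoated samples under each lubricant gives a single number for the improvement in wear resistance.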

Keywords: coating, stainless steel, tribology, wear

Procedia PDF Downloads 131
509 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis

Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame

Abstract:

Mean platelet volume (MPV) is the most accurate measure of platelet size and is routinely reported by most automated hematology analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV is yet to be included in routine clinical decision-making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction (MI) among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control studies of the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potential studies identified by the search for inclusion.
Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed a significant difference between the mean MPV values of those with MI and the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59 – 0.73) and 0.60 (95% CI 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those who do not. For a patient with angina presenting with an elevated MPV value, the pre-test odds of MI increase by a factor of 1.65. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
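The likelihood ratios and DOR follow directly from the pooled sensitivity and specificity; the short check below reproduces the reported values (small rounding differences arise because the paper pools each statistic separately). The 30% pre-test probability in the last step is a hypothetical example, not a figure from the study:

```python
se, sp = 0.66, 0.60  # pooled sensitivity and specificity reported above

lr_pos = se / (1 - sp)   # positive likelihood ratio: ~1.65
lr_neg = (1 - se) / sp   # negative likelihood ratio: ~0.57
dor = lr_pos / lr_neg    # diagnostic odds ratio: ~2.91

# Post-test probability for a hypothetical 30% pre-test probability of MI
pre_odds = 0.30 / (1 - 0.30)
post_odds = pre_odds * lr_pos
post_prob = post_odds / (1 + post_odds)

print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 2), round(post_prob, 2))
```

As the last two lines show, an elevated MPV shifts the odds of MI, not the probability directly, by the factor 1.65.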

Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain

Procedia PDF Downloads 63
508 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays combines a barrier-free anodic aluminum oxide (AAO) template with atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5 °C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores and generate a barrier-free AAO template. ALD using SnCl₄ and H₂O as reactants followed, growing a thin layer of SnO₂ on the template to form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50 °C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability.
Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films depended on the deposition temperature. The films grown at 350 °C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were therefore used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized with an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂ - 95% N₂ gas mixture was monitored with a Picotest M3510A 6½-digit multimeter. It was found that, at 200 °C, the 30-nm-wall SnO₂ nanotube-array sensor showed the highest responsivity to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100 °C, all the samples were insensitive to 5% H₂. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100 °C.
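The reported Hall data are internally consistent with the reported resistivity via ρ = 1/(q·n·μ). A quick check, keeping the cm-based units used above:

```python
q = 1.602e-19   # elementary charge, C
n = 1.1e20      # carrier concentration, cm^-3 (reported)
mu = 16.0       # electron mobility, cm^2/(V*s) (reported)

sigma = q * n * mu   # conductivity in S/cm
rho = 1.0 / sigma    # resistivity in Ohm*cm
print(f"{rho:.2e}")  # close to the reported 3.6e-3 Ohm*cm
```

The computed value (~3.55×10⁻³ Ω·cm) agrees with the measured 3.6×10⁻³ Ω·cm to within rounding.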

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 223
507 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (⁶⁸Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bed position) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bed position). For a β-value of 400 at 2 min/bed position, SNR increased by 43.7% and LE decreased by 62%, compared with OSEM at 5 min/bed position. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV.
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise was reduced more than SUV at the highest β-value. In comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
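For reference, the relative difference penalty that β scales is commonly written, per pair of neighbouring voxels, as (xⱼ-xₖ)²/(xⱼ+xₖ+γ|xⱼ-xₖ|). A minimal sketch of this per-pair term follows; the γ value and voxel intensities are illustrative, not the scanner's settings:

```python
def relative_difference_penalty(xj, xk, gamma=2.0):
    """Per-pair relative difference penalty: (xj-xk)^2 / (xj + xk + gamma*|xj-xk|)."""
    diff = xj - xk
    denom = xj + xk + gamma * abs(diff)
    return diff * diff / denom if denom > 0 else 0.0

# Equal neighbours incur no penalty; larger local differences are penalized,
# but the |xj-xk| term in the denominator tempers the penalty across genuine edges.
p_flat = relative_difference_penalty(10.0, 10.0)  # 0.0
p_edge = relative_difference_penalty(12.0, 8.0)   # 16 / (20 + 8)
print(p_flat, round(p_edge, 4))
```

In BSREM, the objective subtracts β times the sum of this term over all neighbouring voxel pairs from the log-likelihood, which is why larger β-values suppress noise (and SUV) more strongly.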

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 77
506 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) is a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily-coherent pixels, which are present only in certain units rather than over the whole observation period. The chain supports real-time processing of continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain measures deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring of a wide range of scientific and practical applications.
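The unit-by-unit strategy of the first chain can be sketched as a sliding window over the image stream: neighbouring units share a few images so displacement estimates can be chained through time, while memory only ever holds one unit. The unit size and overlap below are arbitrary illustrative choices, not the package's actual parameters:

```python
def sliding_units(images, unit_size=3, overlap=1):
    """Split an image sequence into overlapping units for unit-by-unit processing."""
    step = unit_size - overlap
    units = []
    for start in range(0, len(images) - overlap, step):
        units.append(images[start:start + unit_size])
    return units

# Seven acquisitions processed as three units that overlap by one image each;
# in the real chain, interferograms are formed and inverted within each unit.
units = sliding_units(list(range(7)), unit_size=3, overlap=1)
print(units)  # [[0, 1, 2], [2, 3, 4], [4, 5, 6]]
```

A generator version of the same idea would let the chain consume an unbounded real-time stream without ever materialising the full dataset.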

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 138
505 Transformation of the ectA Gene from Halomonas elongata into Tomato Plant

Authors: Narayan Moger, Divya B., Preethi Jambagi, Krishnaveni C. K., Apsana M. R., B. R. Patil, Basvaraj Bagewadi

Abstract:

Salinity is one of the major threats to world food security. Considering the requirement for salt-tolerant crop plants, the present study was undertaken to clone the salt-tolerance gene ectA from a marine organism and transfer it into an agricultural crop to impart salinity tolerance. Ectoine is a compatible solute that accumulates in the cell and is known to be involved in the salt tolerance of most halophiles. The present situation calls for the development of salt-tolerant transgenic lines to combat abiotic stress. Against this background, the investigation was conducted to develop transgenic tomato lines by cloning and transferring the ectA gene, whose enzymatic product is acetyl-diaminobutyric acid, a precursor of ectoine. The ectA gene is involved in maintaining the osmotic balance of plants. The PCR-amplified ectA gene (579 bp) was cloned into the T/A cloning vector pTZ57R/T. The construct pDBJ26 containing the ectA gene was sequenced using gene-specific forward and reverse primers. The sequence was analyzed with the BLAST algorithm to check the similarity of the ectA gene with other isolates. The highest homology, 99.66 per cent, was found with ectA gene sequences of Halomonas elongata isolates in the NCBI database. The ectA gene was further subcloned into the pRI101-AN plant expression vector and transferred into E. coli DH5α for maintenance. Further, pDNM27 was mobilized into A. tumefaciens LBA4404 through a tri-parental mating system. The recombinant Agrobacterium containing pDNM27 was transferred into tomato plants through the in planta transformation method. Out of 300 co-cultivated seedlings, only twenty-seven plants became well established under greenhouse conditions. Among the twenty-seven transformants, only twelve plants showed amplification with gene-specific primers.
Further work must evaluate the transformants at the T1 and T2 generations for ectoine accumulation, salinity tolerance, plant growth and development, and yield.

Keywords: salinity, compatible solutes, ectA, transgenic, in planta transformation

Procedia PDF Downloads 63
504 Robots for City Life: Design Guidelines and Strategy Recommendations for Introducing Robots in Cities

Authors: Akshay Rege, Lara Gomaa, Maneesh Kumar Verma, Sem Carree

Abstract:

The aim of this paper is to articulate design strategies and recommendations for introducing robots into people's city life, based on experiments conducted with robots and semi-autonomous systems in three cities in the Netherlands. The research was carried out by the Spot robotics team of Impact Lab, housed within YES!Delft, a start-up accelerator located in Delft, the Netherlands. The premise of this research is to inform the development of the 'region of the future' by the Rotterdam-The Hague Metropolitan Region (MRDH). The paper starts by reporting the desk research carried out to find and develop multiple use cases for robots supporting humans in various activities. It then reports the user research carried out by crowdsourcing responses collected in public spaces of the Rotterdam-The Hague region and on the internet. Furthermore, based on the knowledge gathered in the initial research, practical experiments were carried out using robots and semi-autonomous systems in order to test and validate the initial findings. These experiments were conducted in three cities in the Netherlands: Rotterdam, The Hague, and Delft. A custom sensor box, a drone, and Boston Dynamics' Spot robot were used to conduct the experiments. Out of thirty use cases, five were tested in experiments: skyscraper emergency evacuation, human transportation and security, bike lane delivery, mobility tracking, and robot drama. The learnings from these experiments provided insights into human-robot interaction and symbiosis in cities, which can be used to introduce robots that support human activities, ultimately enabling the transition from a human-only city life towards a blended one in which robots play a role. Based on these understandings, we formulated design guidelines and strategy recommendations for incorporating robots in the Rotterdam-The Hague region of the future.
Lastly, we discuss how our insights in the Rotterdam-Den Haag region can inspire and inform the incorporation of robots in different cities of the world.

Keywords: city life, design guidelines, human-robot interaction, robot use cases, robotic experiments, strategy recommendations, user research

Procedia PDF Downloads 75
503 The Impact of Emotional Intelligence on Organizational Performance

Authors: El Ghazi Safae, Cherkaoui Mounia

Abstract:

Within companies, emotions have long been neglected as key elements of successful management systems, seen as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. Management systems were long influenced by the Taylorist image of the worker, which made work regular and plain and considered employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, which have since attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have become fundamental and indispensable to individual performance and thus to management efficiency. The idea that a person's potential is determined by intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success, is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence. Does it impact organizational performance?
What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that emotional intelligence is difficult to conceptualize and measure. Efforts to measure it have produced three prominent models: the ability model, the mixed model, and the trait model. The ability model treats emotional intelligence as a cognitive skill; the mixed model combines emotional skills with personality-related aspects; and the trait model ties emotional intelligence to personality traits. Despite strong claims about the importance of emotional intelligence in the workplace, however, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, focusing in particular on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it affects companies' performance.

Keywords: emotions, performance, intelligence, firms

Procedia PDF Downloads 89
502 Management as a Proxy for Firm Quality

Authors: Petar Dobrev

Abstract:

There is no agreed-upon definition of firm quality. While profitability and stock performance often serve as popular proxies for quality, in this project we aim to identify quality without relying on a firm's financial statements or stock returns as selection criteria. Instead, we use firm-level data on management practices across small to medium-sized U.S. manufacturing firms from the World Management Survey (WMS) to measure firm quality. Each firm in the WMS dataset is assigned a mean management score from 0 to 5, with higher scores identifying better-managed firms. This management score serves as our proxy for firm quality and is the sole criterion we use to separate firms into portfolios of high-quality and low-quality firms. We define high-quality (low-quality) firms as those with a management score one standard deviation above (below) the mean. To study whether this proxy for firm quality can identify better-performing firms, we link this data to Compustat and the Center for Research in Security Prices (CRSP) to obtain firm-level data on financial performance and monthly stock returns, respectively. We find that from 1999 to 2019 (our sample period), firms in the high-quality portfolio are consistently more profitable, with higher operating profitability and return on equity than low-quality firms. In addition, high-quality firms exhibit a lower risk of bankruptcy, reflected in a higher Altman Z-score. Next, we test whether the stocks of firms in the high-quality portfolio earn superior risk-adjusted excess returns. We regress the monthly excess returns of each portfolio on the Fama-French 3-factor, 4-factor, and 5-factor models, the betting-against-beta factor, and the quality-minus-junk factor. We find no statistically significant differences in excess returns between the two portfolios, suggesting that stocks of high-quality (well-managed) firms do not earn superior risk-adjusted returns compared to low-quality (poorly managed) firms.
In short, our proxy for firm quality, the WMS management score, can identify firms with superior financial performance (higher profitability and reduced risk of bankruptcy). However, our management proxy cannot identify stocks that earn superior risk-adjusted returns, suggesting no statistically significant relationship between managerial quality and stock performance.
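The portfolio alpha test described above can be sketched as an ordinary least squares time-series regression. The data below are simulated for illustration (a portfolio constructed with zero true alpha), not the WMS/Compustat/CRSP data used in the study, and the three simulated factors merely stand in for the Fama-French factors.

```python
import numpy as np

def factor_alpha(excess_returns, factors):
    """OLS time-series regression of portfolio excess returns on factor
    returns (e.g. the Fama-French MKT, SMB, HML factors).
    Inputs: excess_returns (T,), factors (T, K).
    Returns (alpha, betas, t-statistic of alpha)."""
    T, K = factors.shape
    X = np.column_stack([np.ones(T), factors])        # prepend intercept column
    coef, _, _, _ = np.linalg.lstsq(X, excess_returns, rcond=None)
    resid = excess_returns - X @ coef
    sigma2 = resid @ resid / (T - K - 1)              # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)             # OLS coefficient covariance
    t_alpha = coef[0] / np.sqrt(cov[0, 0])
    return coef[0], coef[1:], t_alpha

# Illustrative data: 252 months of factor returns, portfolio with zero true alpha
rng = np.random.default_rng(0)
factors = rng.normal(0.005, 0.04, size=(252, 3))
betas_true = np.array([1.0, 0.3, -0.2])
r = factors @ betas_true + rng.normal(0.0, 0.02, 252)
alpha, betas, t_alpha = factor_alpha(r, factors)
print(alpha, t_alpha)  # expect alpha near zero: the simulated portfolio has no excess return
```

An insignificant t-statistic on the intercept (alpha), as the study reports for both portfolios, is what "no superior risk-adjusted returns" means operationally.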

Keywords: excess stock returns, management, profitability, quality

Procedia PDF Downloads 76
501 Shear Strength Envelope Characteristics of Lime-Treated Clays

Authors: Mohammad Moridzadeh, Gholamreza Mesri

Abstract:

The effectiveness of lime treatment of soils has commonly been evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of clay particles must be realized to increase drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue over time in the highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of the reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Also, fully softened and residual strength measurements are used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The effect of lime on frictional resistance is examined using samples of clays with different mineralogies and characteristics, which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties were performed to measure the fully softened shear strength. To measure the residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted. This way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide minimum frictional resistance, which is applicable to reactivated landslides and to parts of first-time landslides. The Brenna clay, the highly plastic lacustrine clay of Lake Agassiz that causes slope instability along the banks of the Red River, is one of the soil samples used in this study.
The Brenna Formation, characterized as a uniform, soft to firm, dark grey glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study, one of the main sources of slope instability in the Harris County Flood Control District (HCFCD), is the Beaumont clay. The shear strengths of the untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.

Keywords: Brenna clay, friction resistance, lime treatment, residual

Procedia PDF Downloads 140
500 Quantum Mechanics as A Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity remains a dream for many researchers, as physics offers only two paths: Einstein's, based mainly on particle mechanics, and that of Paul Dirac and others, based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and in the mathematical nature of each. Logical thinking in modern physics leads us to two problems. In quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function remain obscure. In special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite, and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light. To solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations of quantum mechanics. In this paper, the two problems of modern physics mentioned above are addressed; it can be said that this new approach to quantum mechanics may enable us to unify it with general relativity quite simply. If experiments confirm the results of this research, it may become possible in the future to transport matter at speeds close to the speed of light.
Finally, this research yielded three important results: (1) the quantum Lorentz factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations, reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: when Bohr suggested that particles that are not observed are in a probabilistic state, Einstein made his famous claim that "God does not play dice." Thus, Einstein was right, especially in not accepting the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice: when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that a science like quantum mechanics is needed to monitor and study the motion of subatomic particles; but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: Lorentz quantum factor, Planck's energy as a limiting case of Einstein's energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 60
499 A Fast Method for Graphene-Supported Pd-Co Nanostructures as Catalyst toward Ethanol Oxidation in Alkaline Media

Authors: Amir Shafiee Kisomi, Mehrdad Mofidi

Abstract:

Nowadays, fuel cells have been widely studied as a promising alternative power source owing to their safety, high energy density, low operating temperatures, renewability, and low emission of environmental pollutants. Core-shell nanoparticles can be broadly described as a combination of a shell (outer layer material) and a core (inner material), and their characteristics depend strongly on the dimensions and composition of the core and shell. In addition, changing the constituent materials or the core-to-shell ratio can create special novel characteristics. In this study, a fast technique for the fabrication of a Pd-Co/G/GCE modified electrode is offered. Thermal decomposition of cobalt (II) formate salt over the surface of a graphene/glassy carbon electrode (G/GCE) is utilized for the synthesis of Co nanoparticles. The Pd-Co nanoparticles decorated on the graphene are created in two steps: (1) thermal decomposition of cobalt (II) formate salt and (2) galvanic replacement of Co by Pd2+. The physical and electrochemical performance of the as-prepared Pd-Co/G electrocatalyst is studied by Field Emission Scanning Electron Microscopy (FESEM), Energy Dispersive X-ray Spectroscopy (EDS), Cyclic Voltammetry (CV), and Chronoamperometry (CHA). Galvanic replacement is utilized as a facile, spontaneous approach for the growth of Pd nanostructures. The Pd-Co/G is used as an anode catalyst for ethanol oxidation in alkaline media. The Pd-Co/G not only delivered a much higher current density (262.3 mA cm-2) than the Pd/C catalyst (32.1 mA cm-2) but also demonstrated a negative shift of the onset oxidation potential (-0.480 vs. -0.460 mV) in the forward sweep.
Moreover, the novel Pd-Co/G electrocatalyst exhibits a large electrochemically active surface area (ECSA), a lower apparent activation energy (Ea), and higher durability and poisoning tolerance compared to the Pd/C catalyst. The paper demonstrates that the catalytic activity and stability of the Pd-Co/G electrocatalyst toward ethanol oxidation in alkaline media are higher than those of the Pd/C electrocatalyst.

Keywords: thermal decomposition, nanostructures, galvanic replacement, electrocatalyst, ethanol oxidation, alkaline media

Procedia PDF Downloads 134
498 Identifying the Effects of the COVID-19 Pandemic on Syrian and Congolese Refugees’ Health and Economic Access in Central Pennsylvania

Authors: Mariam Shalaby, Kayla Krause, Raisha Ismail, Daniel George

Abstract:

Introduction: The Pennsylvania State College of Medicine Refugee Initiative is a student-run organization that works with eleven Syrian and Congolese refugee families. Since 2016, it has used grant funding to make weekly produce purchases at a local market, provide tutoring services, and develop trusting relationships. This case study explains how the Refugee Initiative shifted focus to face new challenges during the COVID-19 pandemic in 2020. Methodology: When refugees who had previously attained stability found themselves unable to pay their bills, the organization shifted focus from food security to direct assistance, such as helping families apply for unemployment compensation after recent job losses. When refugee families additionally struggled to access hygiene supplies, funding was redirected to purchase them. Funds were also raised from the community to provide financial relief from unpaid rent and bills. Findings: Systemic challenges were encountered in navigating federal and state unemployment and social welfare systems, and there was a conspicuous absence of affordable, language-accessible assistance for refugees. Finally, as struggling public schools failed to maintain adequate English as a Second Language (ESL) education, the group's tutoring services were hindered by social distancing and inconsistent access to distance-learning platforms. Conclusion: Ultimately, the pandemic highlighted that a charity-based arrangement is helpful but not sustainable, and challenges persist for refugee families. Based on the Refugee Initiative's experiences over the past year of the COVID-19 pandemic, several needs must be addressed to aid refugee families at this time: increased access to affordable and language-accessible social services, educational resources, and simpler options for grant-based financial assistance. Interventions to increase these resources will aid refugee families in need in Central Pennsylvania and internationally.

Keywords: COVID-19, health, pandemic, refugees

Procedia PDF Downloads 103
497 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy

Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi

Abstract:

The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropogenic activities but also to its geographical characteristics and stagnant weather conditions with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the ionic composition of the particulate matter and by the concentration of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, in summer, is biomass burning. Furthermore, three sampling points were chosen to maximize the contribution of a specific biomass source: one in a residential area (domestic cooking and domestic heating), one in an agricultural area (weed fires), and one in the tourist area (restaurant cooking). At these sites, the analyses were complemented by quantification of the carbonaceous component (organic and elemental carbon) and by measurement of the particle number concentration and aerosol size distribution (6 - 600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, along with many intense peaks attributable to episodic wood fires.
In the summer season, an appreciable signal linked to biomass combustion was also measured, although much weaker than in winter, attributable to domestic cooking activities. A further interesting result was the total absence of a sea-salt contribution in the finer particulate fraction (PM2.5), while in PM10 the contribution becomes appreciable only under particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, the summer traffic source appears even more relevant than that measured in a much larger city (Bologna), owing to tourism.

Keywords: aerosol, biomass burning, seacoast, urban area

Procedia PDF Downloads 109
496 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage

Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni

Abstract:

Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various measurement angles and heights, as well as human participants wearing correctly fitted body armour positioned in supine, prone shooting, seated, and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then confirmed against a verification board with known dimensions. Results indicated that the low-dose diagnostic X-ray can clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture.
The distances of the X-ray source and detector from the object must be standardised to control for possible magnification changes and to allow comparison. To account for this, specific scanning heights and angles were identified to allow parallel scanning of the relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used to re-evaluate the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
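The magnification standardisation mentioned above follows from simple point-source projection geometry, where magnification is the source-to-detector distance divided by the source-to-object distance. The function names and distances below are generic illustrative assumptions, not Lodox eXero-dr parameters.

```python
def magnification(sdd_mm, sod_mm):
    """Projection magnification for a point source: source-to-detector
    distance (SDD) divided by source-to-object distance (SOD)."""
    return sdd_mm / sod_mm

def true_size(measured_mm, sdd_mm, sod_mm):
    """Scale a dimension measured on the detector back to object size."""
    return measured_mm / magnification(sdd_mm, sod_mm)

# Illustrative geometry: source 1300 mm from detector, organ plane at 1100 mm
sdd, sod = 1300.0, 1100.0
print(magnification(sdd, sod))      # ~1.18: the projected image is enlarged
print(true_size(118.2, sdd, sod))   # a 118.2 mm projection corresponds to ~100 mm
```

Because the organ plane sits at a different depth for each posture, standardising (or recording) these distances per scan is what makes armour-to-organ measurements comparable across sessions.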

Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage

Procedia PDF Downloads 104
495 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model was modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m3 chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer at two air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until a steady-state condition was reached (emission period); generation then stopped, and concentration measurements continued until the background concentration was reached (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) ×10-2 m/s and (8.88 ± 0.38) ×10-2 m/s, respectively.
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved over the classic WMR model, although a difference between measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; during the decay period, however, the model significantly overestimates the experimental results. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rates given by the deposition mechanisms considered in the model were ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
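The mass balance underlying the modified WMR model described above can be sketched as follows; the notation and parameter values are illustrative assumptions, not the study's data. The room concentration obeys dC/dt = S/V - (ACH + beta)·C, where beta is the lumped first-order particle deposition loss rate, and the tracer-gas decay method recovers the air change rate from the log-linear slope of the decay.

```python
import math

def wmr_concentration(t, S, V, ach, beta, C0=0.0):
    """Concentration (mg/m3) in a well-mixed room at time t (h), for
    emission rate S (mg/h), volume V (m3), air change rate ach (1/h),
    and first-order particle deposition loss rate beta (1/h).
    Analytic solution of dC/dt = S/V - (ach + beta) * C."""
    k = ach + beta                      # total first-order loss rate (1/h)
    C_ss = S / (V * k)                  # steady-state concentration
    return C_ss + (C0 - C_ss) * math.exp(-k * t)

def ach_from_decay(C_start, C_end, dt):
    """Tracer-gas decay method: air change rate from the log-linear decay
    of tracer concentration over dt hours (no source, negligible deposition)."""
    return math.log(C_start / C_end) / dt

# Illustrative run: 0.512 m3 chamber at 1.4 ACH, assumed beta = 0.5 1/h, S = 0.1 mg/h
V, ach, beta, S = 0.512, 1.4, 0.5, 0.1
C_emit = wmr_concentration(4.0, S, V, ach, beta)                 # end of emission period
C_decay = wmr_concentration(1.0, 0.0, V, ach, beta, C0=C_emit)   # 1 h into decay
print(C_emit, C_decay)  # decay follows exp(-(ach + beta) * t)
```

Setting beta = 0 recovers the classic WMR model; since deposition adds to the loss rate, ignoring or underestimating it overestimates the concentration during the decay period, consistent with the comparison reported above.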

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 85
494 Managing Maritime Security in the Mediterranean Sea: The Roles of the EU in Tackling Irregular Migration

Authors: Shazwanis Shukri

Abstract:

The Mediterranean Sea, at the crossroads of three continents, has always been a focus of pan-European and worldwide attention. Over the past decade, the Mediterranean Sea has become a hotbed of irregular migration, particularly from the African continent toward Europe. The major transit routes include the Strait of Gibraltar, the Canary Islands, and the island of Lampedusa. In recent years, the Mediterranean Sea has witnessed significant numbers of accidents and shipwrecks involving irregular migrants and refugees trying to reach Europe by sea. The shipwrecks and the traffickers' exploitation of migrants have drawn particular attention from the European Union (EU). These incidents have been a wake-up call for the EU, and tackling irregular migration and human smuggling at sea has become a top item on the EU policy agenda. The EU has repeatedly addressed irregular migration as one of the threats that the EU and its citizens may be confronted with, making immediate measures crucial to tackle the crisis. In light of this, various initiatives have been adopted by the EU to strengthen external border control and restrict access for irregular migrants, notably through the deployment of Frontex and Eunavfor Med. This paper analyses the current development of the EU's counter-migration operations in response to the migration crisis in the Mediterranean Sea. The analysis is threefold. First, this study examines recent patterns and trends in irregular migration movements. Second, it concentrates on the evolution of the EU operations in place in the Mediterranean Sea, notably Frontex and Eunavfor Med, to curb the influx of irregular migrants into European countries, including, among others, Greece and Italy. Third, it investigates the EU's approaches to fighting the proliferation of human trafficking networks at sea.
This study is essential to determining the EU's roles in tackling the migration crisis and human trafficking in the Mediterranean Sea and the effectiveness of its counter-migration operations in reducing the number of irregular migrants travelling by sea. Elite interviews and document analysis were used as the methodology. The study finds that the EU operations have successfully contributed to reducing the number of irregular migrants arriving in Europe, and that the operations were effective in disrupting smugglers' business models, particularly from Libya. This study provides an essential understanding of the EU's roles, which are not limited to tackling the migration crisis and disrupting trafficking networks but also include preventing further loss of life at sea.

Keywords: European Union, Frontex, irregular migration, Mediterranean Sea

Procedia PDF Downloads 308
493 Global and Domestic Response to Boko Haram Terrorism on Cameroon 2014-2018

Authors: David Nchinda Keming

Abstract:

The present study focuses on the national and international collective fight against Boko Haram terrorism in Cameroon and on the role played by the Lake Chad Basin Countries (LCBCs) and the global community in suppressing the sect's activities in the region. Although the countries of the Lake Chad Basin are Cameroon, Chad, Nigeria, and Niger, others like Benin also joined the effort. The internationalisation of the fight against Boko Haram can be explained by the ecological and international climatic importance of the Lake Chad and by the danger posed by the sect not only to the Lake Chad member countries but also to armed forces, civil servants, and the international political economy. The study therefore begins with Cameroon's reaction to Boko Haram's terrorist attacks on its territory. It further expounds on Cameroon's bilateral diplomacy toward members of the UN Security Council in requesting international collective support to stem the advance of the challenging sect. The study relies on the hypothesis that Boko Haram's advancing terrorism in Cameroon overwhelmed domestic military intelligence, forcing the government to seek bilateral and multilateral international collective support to secure its territory from the powerful sect. This premise is tested internationally (multilateral cooperation, bilateral response, regional cooperation) and domestically (solidarity parades, religious discourse, political manifestations, war efforts, vigilante groups, and the way forward). To accomplish the study, we made use of mixed research methodologies to interpret the primary, secondary, and tertiary sources consulted. Our results reveal that the collective response was effective, as shown by the drastic drop in the sect's operations in Cameroon and across the LCBCs.
Although the sect has been incapacitated, terrorism remains an international malaise, and Cameroon remains fertile ground for terrorist activism. Boko Haram has only been weakened, not completely defeated, and could reappear someday, even under a different appellation. Therefore, to eradicate terrorism in general and Boko Haram in particular, the LCBCs must improve their military intelligence on terrorism and continue to collaborate with countries experienced in fighting terrorism.

Keywords: Boko Haram, terrorism, domestic, international, response

Procedia PDF Downloads 136
492 Pharmacological Mechanisms of an Indolic Compound in Chemoprevention of Colonic Acf Formation in Azoxymethane-Induced Colon Cancer Rat Model and Cell Lines

Authors: Nima Samie, Sekaran Muniandy, Zahurin Mohamed, M. S. Kanthimathi

Abstract:

Although a number of indole-containing compounds have been reported to have anticancer properties in vitro, only a few of them show potential as anticancer compounds in vivo. The current study aimed to evaluate the mechanism of cytotoxicity of a selected indolic compound in vivo and in vitro. In this context, we determined the potency of the compound in the induction of apoptosis, cell cycle arrest, and cytoskeleton rearrangement. HT-29, WiDr, CCD-18Co, human monocyte/macrophage CRL-9855, and B lymphocyte CCL-156 cell lines were used to determine the IC50 of the compound using the MTT assay. Analysis of apoptosis was carried out using immunofluorescence, acridine orange/propidium iodide double staining, the Annexin-V-FITC assay, evaluation of the translocation of NF-kB, oxygen radical antioxidant capacity, quenching of reactive oxygen species content, measurement of LDH release, caspase-3/-7, -8 and -9 assays, and western blotting. Cell cycle arrest was examined using flow cytometry, and gene expression was assessed using a qPCR array. Results displayed a potent suppressive effect on HT-29 and WiDr after 24 h of treatment, with IC50 values of 2.52±0.34 µg/ml and 2.13±0.65 µg/ml respectively. The cytotoxic effect on normal, monocyte/macrophage, and B cells was insignificant. A drop in the mitochondrial membrane potential and increased release of cytochrome c from the mitochondria indicated induction of the intrinsic apoptosis pathway by the compound. Activation of this pathway was further evidenced by significant activation of caspase-9 and -3/7. The compound was also shown to activate the extrinsic pathway of apoptosis via activation of caspase-8, which is linked to the suppression of NF-kB translocation to the nucleus. Cell cycle arrest in the G1 phase and up-regulation of glutathione reductase, driven by excessive ROS production, were also observed.
These findings prompted further investigation of the inhibitory efficiency of the compound on colonic aberrant crypt foci (ACF) in male rats. Rats were divided into 5 groups: vehicle, cancer control and positive control groups, and groups treated with 25 and 50 mg/kg of the compound for 10 weeks. Administration of the compound suppressed total colonic ACF formation by up to 73.4%. The results also showed that treatment with the compound significantly reduced the level of malondialdehyde while increasing superoxide dismutase and catalase activities. Furthermore, the down-regulation of PCNA and Bcl2 and the up-regulation of Bax were confirmed by immunohistochemical staining. The outcome of this study suggests that the indolic compound is a potent anti-cancer agent against colon cancer and can be further evaluated in animal trials.
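As a rough illustration of how an IC50 such as 2.52±0.34 µg/ml is read off MTT dose-response data, the sketch below uses simple linear interpolation between the two concentrations bracketing 50% viability; the study does not state its curve-fitting method, and the helper names and values here are hypothetical:

```python
def viability_percent(abs_treated: float, abs_control: float,
                      abs_blank: float = 0.0) -> float:
    """Percent cell viability from MTT absorbance readings."""
    return (abs_treated - abs_blank) / (abs_control - abs_blank) * 100.0

def ic50_linear(concs: list[float], viabilities: list[float]) -> float:
    """IC50 by linear interpolation between the two concentrations
    bracketing 50% viability. `concs` must be ascending and paired
    with the measured viability percentages."""
    points = list(zip(concs, viabilities))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50.0 >= v2:
            return c1 + (v1 - 50.0) * (c2 - c1) / (v1 - v2)
    raise ValueError("50% viability is not bracketed by the data")

# Hypothetical dose-response series (concentrations in ug/ml):
ic50 = ic50_linear([1.0, 2.0, 4.0], [80.0, 60.0, 20.0])  # 2.5
```

In practice a four-parameter logistic fit is more common than linear interpolation, but the bracketing idea above is the same.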

Keywords: indolic compound, chemoprevention, crypt, azoxymethane, colon cancer

Procedia PDF Downloads 333
491 Ecosystem Services and Human Well-Being: Case Study of Tiriya Village, Bastar India

Authors: S. Vaibhav Kant Sahu, Surabhi Bipin Seth

Abstract:

Human well-being has multiple constituents, including the basic material for a good life, freedom and choice, health, good social relations, and security. Poverty is also multidimensional and has been defined as the pronounced deprivation of well-being. The Dhurwa tribe of Bastar (India) have a symbiotic relationship with nature, which supplies provisioning ecosystem services such as food, fuel and fibre; regulating services such as climate regulation; and non-material benefits such as spiritual or aesthetic benefits, and they have managed their forests for generations. The demand for ecosystem services is now so great that trade-offs among services have become the rule. The aim of the study was to explore evidence of linkages between ecosystem services and the well-being of an indigenous community, how much these services help in poverty reduction, and the interactions between them. The objective of the study was to find drivers of change and evidence concerning the links between ecosystems, human development and sustainability, and evidence of decision-making that opts for multi-sectoral objectives. This means taking human well-being as the central focus of the assessment, while recognizing that biodiversity and ecosystems also have intrinsic value. Ecosystem changes that may have little impact on human well-being over days or weeks may have pronounced impacts over years or decades, so assessments need to be conducted at spatial and temporal scales across social, political and economic dimensions to yield high-resolution data. The researchers used the framework developed by the Millennium Ecosystem Assessment, since human action now directly or unknowingly alters virtually every ecosystem. An ethnographic study was used to gather primary qualitative data; secondary data were collected from the panchayat office. The responses were transcribed and translated into English, as the interviews were held in Hindi and the local indigenous language. A focus group discussion was held with a group of 10 women at Tiriya village.
The researchers concluded that gaps in ecosystem service supply not only diminish well-being but also increase vulnerability. Decisions can have consequences external to the decision framework; these consequences are called externalities because they are not part of the decision-making calculus.

Keywords: Bastar, Dhurwa tribe, ecosystem services, millennium ecosystem assessment, sustainability

Procedia PDF Downloads 281
490 Export and Import Indicators of Georgian Agri-food Products during the Pandemic: Challenges and Opportunities

Authors: Eteri Kharaishvili

Abstract:

Introduction. The paper analyzes the main indicators of export and import of Georgian agri-food products; identifies positive and negative trends under the pandemic; and, based on the revealed problems, substantiates the need for modernization of the agri-food sector. It is argued that low production and productivity rates of food products negatively impact achieving the optimal export-to-import ratio; this leads to increased dependence on other countries and reduces the level of food security. Research objectives. The objective of the research is to identify the key challenges based on the analysis of export-import indicators of Georgian food products during the pandemic period and to develop recommendations on post-pandemic development opportunities. Research methods. Various theoretical and methodological research tools are used in the paper; in particular, desk research is carried out on the research topic; endogenous and exogenous variables affecting export and import are determined through factor analysis; SWOT and PESTEL analyses are used to identify development opportunities; selection and grouping of data and identification of similarities and differences are carried out using analysis, synthesis, sampling, induction and other methods; and a qualitative study is conducted, based on a survey of agri-food experts and exporters, to clarify the factors that impede export-import flows. Contributions. The factors that impede the export of Georgian agri-food products in the short run under the COVID-19 pandemic are identified. These are: reduced income of farmers; delays in the supply of raw materials and supplies to the agri-food sector from neighboring industries, as well as in harvesting, processing, marketing, transportation, and other sectors; increased indirect costs, etc. The factors that impede export in the long run are as follows: loss of public confidence in the industry, risk of losing positions in traditional markets, etc.
Conclusions are drawn on the problems in the field of export and import of Georgian agri-food products under the pandemic, and development opportunities are evaluated based on the analysis of the agri-food sector’s potential. Recommendations on development opportunities for the export and import of Georgian agri-food products in the post-pandemic period are proposed.

Keywords: agri-food products, export and import, pandemic period, hindering factors, development potential

Procedia PDF Downloads 120
489 Nuclear Resistance Movements: Case Study of India

Authors: Shivani Yadav

Abstract:

The paper illustrates the dynamics of nuclear resistance movements in India and how people’s power rises in response to the subversion of justice and the suppression of human rights. The need for democratizing nuclear policy runs implicit through the demands of the people protesting against nuclear programmes. The paper analyses the rationale behind developing nuclear energy according to the mainstream development model adopted by the state, and discusses whether the prevalent nuclear discourse includes people’s ambitions and addresses local concerns. Primarily, the nuclear movements across India comprise two types of actors: the local population and the urban interlocutors. The first type of actor is the local population, comprising the people who reside in the vicinity of a nuclear site and are affected by its construction, presence and operation. They have immediate concerns about nuclear energy projects but also take an ideological stand against producing nuclear energy. The other type of actor is the urban interlocutors: the intellectuals and nuclear activists who have a principled stand against nuclear energy and help to aggregate the aims and goals of the movement on various platforms. The paper focuses on the nuclear resistance movements at five sites in India: Koodankulam (Tamil Nadu), Jaitapur (Maharashtra), Haripur (West Bengal), Mithivirdi (Gujarat) and Gorakhpur (Haryana). The origin, development, role of major actors and mass media coverage of all these movements are discussed in depth. Major observations from the Indian case include: first, nuclear policy discussions in India are confined to elite circles; second, concepts like national security and national interest are used to suppress dissent against mainstream policies; and third, India’s energy policies focus on economic concerns while ignoring the human implications of such policies.
In conclusion, the paper observes that the anti-nuclear movements question not just the feasibility of nuclear power but also its exclusionary nature when it comes to people’s participation in policy making, its endangering of the ecology, violations of human rights, etc. The character of these protests is non-violent, with the aim of producing more inclusive policy debates and democratic dialogues.

Keywords: anti-nuclear movements, Koodankulam nuclear power plant, non-violent resistance, nuclear resistance movements, social movements

Procedia PDF Downloads 120
488 Prolactin and Its Abnormalities: Its Implications on the Male Reproductive Tract and Male Factor Infertility

Authors: Rizvi Hasan

Abstract:

Male factor infertility due to abnormalities in prolactin levels is encountered in a significant proportion of cases. This was a case-control study carried out to determine the effects of prolactin abnormalities in males with infertility, recruiting 297 infertile male patients with informed written consent. All underwent a basic seminal fluid analysis (BSA) and endocrine profiling of FSH, LH, testosterone and prolactin (PRL) using the random access chemiluminescent immunoassay method (normal PRL range 2.5-17 ng/ml). Age-, weight-, and height-matched voluntary controls were recruited for comparison. None of the cases had anatomical, medical or surgical disorders related to infertility. Among the controls: mean age 33.2 yrs ± 5.2, BMI 21.04 ± 1.39 kg m⁻², BSA 34×10⁶, number of children fathered 2±1, PRL 6.78 ± 2.92 ng/ml. Of the 297 patients, 28 were hyperprolactinaemic while one was hypoprolactinaemic. All the hyperprolactinaemic patients had oligoasthenospermia, abnormal morphology and decreased viability. Serum testosterone levels were markedly lowered in 26 (92.86%) of the hyperprolactinaemic subjects. In the other 2 hyperprolactinaemic subjects and the single hypoprolactinaemic subject, serum testosterone levels were normal. FSH and LH were normal in all patients. The 29 male patients with abnormal serum PRL profiles were followed up for 12 months. The 28 patients suffering from hyperprolactinaemia were treated with oral bromocriptine at a dose of 2.5 mg twice daily. The hypoprolactinaemic patient defaulted on treatment. From the follow-up, it was evident that 19 (67.86%) of the treated patients responded after 3 months of therapy, while 4 (14.29%) showed improvement after approximately 6 months of bromocriptine therapy. One patient responded after 1 year of therapy, while 2 patients showed improvement, although not to normal levels, within the same period. Response to treatment was assessed by improvement in BSA parameters.
Prolactin abnormalities affect the male reproductive system and semen parameters, necessitating further studies to ascertain the exact role of prolactin in the male reproductive tract. A parallel study was carried out incorporating 200 male white rats that were grouped and subjected to variations in their serum PRL levels. At the end of 100 days of treatment, the rats were subjected to morphological studies of their reproductive tracts. Varying morphological changes, depending on the induced PRL levels, were evident. Notable changes were arrest of spermatogenesis at the spermatid stage, reduced testicular cellularity, and a reduction in the microvilli of the pseudostratified epithelial lining of the epididymis, while measurement of the tubular diameter showed a 30% reduction compared to normal tissue. There were no changes in the vas deferens, seminal vesicles, or prostate. It is evident that both hyperprolactinaemia and hypoprolactinaemia have a direct effect on the morphology and function of the male reproductive tract. The morphological studies carried out on the groups of rats subjected to variations in their PRL levels could explain the basis of infertility in human males.

Keywords: male factor infertility, morphological studies, prolactin, seminal fluid analysis

Procedia PDF Downloads 330
487 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

The mixing process of two liquid layers in a cylindrical container includes the upper liquid, with higher density, rushing into the lower liquid, with lower density; the lower liquid rising into the upper liquid; and the two liquid layers interacting with each other: forming vortices, spreading or dispersing into each other, and entraining or mixing with each other. It is a complex process constituted of flow instability, turbulent mixing and other multiscale physical phenomena, and it evolves rapidly. In order to explore the mechanism of the process and make further investigations, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out, applying planar laser induced fluorescence (PLIF) and high speed camera (HSC) techniques. According to the results, the evolution of interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than the RTI play key roles in the mixing process of the two liquid layers. The results show that the invading velocity of the upper liquid into the lower liquid does not depend on the upper liquid’s volume (height). Compared to the cases in which the upper and lower containers are of identical diameter, when the lower liquid volume increases to a larger geometric space, the upper liquid spreads and expands into the lower liquid more quickly during the evolution of interfacial instability, indicating that the container wall has an important influence on the mixing process.
In the experiments on miscible liquid layers’ mixing, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid’s volume, and when the lower liquid volume increases to a larger geometric space, the action of the bounding wall on the falling and rising flow decreases, and the interfacial mixing effects also attenuate. Therefore, it is concluded that the volume weight of the upper, heavier liquid is not the reason for the fast interfacial instability evolution between the two liquid layers, and the bounding wall’s action on the unstable and mixing flow is limited. Numerical simulations of the immiscible liquid layers’ interfacial instability flow using the VOF method show typical flow patterns that agree with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids’ mixing, which applies Fick’s law of diffusion in the component transport equations, shows a much faster mixing rate at the liquids’ interface than the experiments at the initial stage. It can be presumed that interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
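For reference, the classical RTI linear growth rate the experiments are compared against can be sketched numerically as below; the densities and perturbation wavelength are hypothetical illustration values, not the study’s fluids:

```python
import math

def atwood_number(rho_heavy: float, rho_light: float) -> float:
    """Atwood number A = (rho_h - rho_l) / (rho_h + rho_l)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rti_growth_rate(rho_heavy: float, rho_light: float,
                    wavenumber: float, g: float = 9.81) -> float:
    """Classical linear RTI growth rate sigma = sqrt(A * k * g),
    neglecting viscosity and surface tension -- the idealised
    theoretical rate the observed evolution is measured against."""
    return math.sqrt(atwood_number(rho_heavy, rho_light) * wavenumber * g)

# Hypothetical example: a 1000 kg/m^3 layer over a 900 kg/m^3 layer,
# perturbation wavelength 1 cm (k = 2*pi/lambda).
k = 2.0 * math.pi / 0.01
sigma = rti_growth_rate(1000.0, 900.0, k)  # perturbation growth rate, 1/s
```

The abstract’s conjecture is precisely that the measured growth exceeds this sqrt(A k g) estimate, pointing to mechanisms (wall confinement, interfacial tension) outside the idealised model.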

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 223
486 Biodiesel Production from Edible Oil Wastewater Sludge with Bioethanol Using Nano-Magnetic Catalysis

Authors: Wighens Ngoie Ilunga, Pamela J. Welz, Olewaseun O. Oyekola, Daniel Ikhu-Omoregbe

Abstract:

Currently, most sludge from the wastewater treatment plants of edible oil factories is disposed of in landfills, but landfill sites are finite and potential sources of environmental pollution. Production of biodiesel from wastewater sludge can contribute to energy production and waste minimization. However, conventional biodiesel production is energy and waste intensive. Generally, biodiesel is produced from the transesterification reaction of oils with an alcohol (i.e., methanol or ethanol) in the presence of a catalyst. Homogeneously catalysed transesterification is the conventional approach for large-scale production of biodiesel, as reaction times are relatively short. Nevertheless, homogeneous catalysis presents several challenges, such as a high probability of soap formation. The current study aimed to reuse wastewater sludge from the edible oil industry as a novel feedstock for both monounsaturated fats and bioethanol for the production of biodiesel. Preliminary results have shown that the fatty acid profile of the oilseed wastewater sludge is favourable for biodiesel production, with 48% (w/w) monounsaturated fats, and that the residue left after the extraction of fats from the sludge contains sufficient fermentable sugars, after steam explosion followed by enzymatic hydrolysis, for the successful production of bioethanol [29% (w/w)] using a commercial strain of Saccharomyces cerevisiae. A novel nano-magnetic catalyst was synthesised, using a modified sol-gel method, from alkaline mineral processing tailings mainly containing dolomite originating from cupriferous ores. The catalyst’s elemental composition and structural properties were characterised by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and BET analysis, which gave a surface area of 14.3 m²/g and an average pore diameter of 34.1 nm. The mass magnetization of the nano-magnetic catalyst was 170 emu/g. Both the catalytic properties and the reusability of the catalyst were investigated.
A maximum biodiesel yield of 78% was obtained, which dropped to 52% after the fourth transesterification reaction cycle. The proposed approach has the potential to reduce material costs, energy consumption and water usage associated with conventional biodiesel production technologies. It may also mitigate the impact of conventional biodiesel production on food and land security, while simultaneously reducing waste.

Keywords: biodiesel, bioethanol, edible oil wastewater sludge, nano-magnetism

Procedia PDF Downloads 127
485 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of performed Computed Tomography (CT) examinations raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that provides multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named 'VirtualDose'. Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.); (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurement. To perform the comparison, data from 270 adult patients and 150 pediatric patients were used, corresponding to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis).
The comparison sample of pediatric patients includes the exams of thirty patients in each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the 'Head' protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing computed tomography exams.
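The 80% significance criterion and the percentage dose difference described above can be sketched as follows; this is a minimal illustration with hypothetical organ doses, and the relative-to-one-package definition of the percentage difference is an assumption, since the abstract does not state the exact formula used:

```python
def percent_dose_difference(dose_ref: float, dose_other: float) -> float:
    """Percentage difference of dose_other relative to dose_ref."""
    return abs(dose_ref - dose_other) / dose_ref * 100.0

def significant_organs(organ_doses: dict[str, float],
                       threshold: float = 0.8) -> dict[str, float]:
    """Keep only organs receiving at least `threshold` of the maximal
    organ dose for the protocol (the 80% criterion)."""
    d_max = max(organ_doses.values())
    return {organ: dose for organ, dose in organ_doses.items()
            if dose >= threshold * d_max}

# Hypothetical organ doses (mGy) for one acquisition protocol:
doses = {"brain": 50.0, "eye lens": 45.0, "thyroid": 10.0}
selected = significant_organs(doses)        # brain and eye lens only
diff = percent_dose_difference(10.0, 12.7)  # 27% difference
```

Only organs passing the threshold enter the comparison, which is why edge-of-scan organs with partial coverage are discussed separately.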

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 139