Search results for: higher order
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22742

2822 Talent Management in Small and Medium Sized Companies: A Multilevel Approach Contextualized in France

Authors: Kousay Abid

Abstract:

The aim of this paper is to better understand talent and talent management (TM) in small and medium-sized French companies (SMEs). While previous empirical investigations have largely focused on multinationals and big companies in the Anglo-Saxon context, we address the pressing need to implement TM strategies and practices both on the new ground of SMEs and within a new European context, that of France. This study also aims at understanding the strategies these firms adopt to attract, retain, maintain and develop talents. We contribute to TM research by adopting a multilevel approach, with the goal of reaching a holistic vision of the interactions between the various levels at which TM is applied. A qualitative research methodology based on a multiple-case study design, grounded first in a qualitative survey and second in two in-depth case studies, both built on interviews, is used to develop the analysis of TM strategies and practices. The findings are based on data collected from more than 15 French SMEs. Our theoretical contributions are the fruit of contextual considerations and the dynamics of the multilevel approach. Theoretically, we first attempt to clarify how talents and TM are seen and defined in French SMEs, and consequently to enrich the literature on TM in SMEs outside the Anglo-Saxon context. Moreover, we seek to understand how SMEs jointly manage their talents and their TM strategies by setting up this contextualized pilot study, and we examine the systematic TM model emerging from French SMEs. Our primary managerial goal is to shed light on the need for TM in achieving better management of these organizations by directing leaders to better identify the talented people they hold at all levels. In addition, our systematic TM model strengthens our analysis grid with recommendations for CEOs and Human Resource Development (HRD) professionals, prompting them to rethink their companies' HR business strategies. Our outputs therefore present multiple levers of action to be taken into consideration when reviewing HR strategies and systems, as well as their impact beyond organizational boundaries.

Keywords: French context, multilevel approach, small and medium-sized enterprises, talent management

2821 Comparative Analysis of the Expansion Rate and Soil Erodibility Factor (K) of Some Gullies in Nnewi and Nnobi, Anambra State, Southeastern Nigeria

Authors: Nzereogu Stella Kosi, Igwe Ogbonnaya, Emeh Chukwuebuka Odinaka

Abstract:

A comparative analysis of the expansion rate and soil erodibility of some gullies in Nnewi and Nnobi, both on the Nanka Formation, was carried out. The study integrated field observations, geotechnical analysis, slope stability analysis, multivariate statistical analysis, gully expansion rate analysis, and determination of the soil erodibility factor (K) from the Revised Universal Soil Loss Equation (RUSLE). Fifteen representative gullies were studied extensively, and the results reveal that the geotechnical properties of the soil, topography, vegetation cover, rainfall intensity, and anthropogenic activities in the study area were the major factors propagating and influencing the erodibility of the soils. The specific gravity of the soils ranged from 2.45-2.66 and 2.54-2.78 for Nnewi and Nnobi, respectively. Grain size distribution analysis revealed that the soils are composed of gravel (5.77-17.67%), sand (79.90-91.01%), and fines (2.36-4.05%) for Nnewi, and gravel (7.01-13.65%), sand (82.47-88.67%), and fines (3.78-5.02%) for Nnobi. The soils are moderately permeable, with values ranging from 2.92 × 10⁻⁵ - 6.80 × 10⁻⁴ m/s and 2.35 × 10⁻⁶ - 3.84 × 10⁻⁴ m/s for Nnewi and Nnobi, respectively. All have low cohesion values, ranging from 1-5 kPa and 2-5 kPa, and internal friction angles ranging from 29-38° and 30-34° for Nnewi and Nnobi, respectively, which suggests that the soils have low shear strength and are susceptible to shear failure. Furthermore, the compaction test revealed that the soils are loose and easily erodible, with maximum dry density (MDD) and optimum moisture content (OMC) values ranging from 1.82-2.11 g/cm³ and 8.20-17.81% for Nnewi, and 1.98-2.13 g/cm³ and 6.00-17.80% for Nnobi, respectively. The plasticity index (PI) of the fines showed that they are nonplastic to low-plasticity soils and highly liquefiable, with values ranging from 0-10% and 0-9% for Nnewi and Nnobi, respectively. Multivariate statistical analyses were used to establish relationships among the determined parameters. Slope stability analysis gave factor of safety (FoS) values in the ranges of 0.50-0.76 and 0.82-0.95 for the saturated condition, and 0.73-0.98 and 0.87-1.04 for the unsaturated condition, for Nnewi and Nnobi, respectively, indicating that the slopes are generally unstable to critically stable. The erosion expansion rate analysis for the fifteen-year period 2005-2020 revealed average longitudinal expansion rates of 36.05 m/yr, 10.76 m/yr, and 183 m/yr for Nnewi, Nnobi, and Nanka-type gullies, respectively. The soil erodibility factor (K) values are 8.57 × 10⁻² and 1.62 × 10⁻⁴ for Nnewi and Nnobi, respectively, indicating that the soils in Nnewi have a higher erodibility potential than those of Nnobi. From the study, both the Nnewi and Nnobi areas are highly prone to erosion. However, based on the relatively lower fine content of the soil, relatively lower topography, steeper slope angles, and sparsely vegetated terrain in Nnewi, soil erodibility and gully intensity are more pronounced in Nnewi than in Nnobi.
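The reported FoS values can be illustrated with the classical infinite-slope limit-equilibrium expression, FoS = c / (γ h sin β cos β) + tan φ / tan β. The sketch below is illustrative only: the cohesion and friction angle are taken from the reported ranges, but the unit weight, failure-plane depth and slope angle are assumed values, not measurements from the study.

```python
import math

def infinite_slope_fos(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg):
    """Factor of safety for a dry infinite slope (limit equilibrium):
    FoS = c / (gamma * h * sin(beta) * cos(beta)) + tan(phi) / tan(beta)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    cohesion_term = c_kpa / (gamma_kn_m3 * depth_m * math.sin(beta) * math.cos(beta))
    friction_term = math.tan(phi) / math.tan(beta)
    return cohesion_term + friction_term

# c and phi from the reported ranges; gamma, depth and slope angle assumed.
fos = infinite_slope_fos(c_kpa=5.0, phi_deg=30.0, gamma_kn_m3=18.0,
                         depth_m=2.0, beta_deg=40.0)
print(f"FoS ≈ {fos:.2f}")  # prints "FoS ≈ 0.97", i.e. critically stable
```

With these inputs the result falls inside the unsaturated-condition range (0.73-0.98) reported for Nnewi, consistent with slopes that are close to the limiting FoS of 1.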

Keywords: soil erodibility, gully expansion, Nnewi-Nnobi, slope stability, factor of safety

2820 Evaluation of Rhizobia for Nodulation, Shoot and Root Biomass from Host Range Studies Using Soybean, Common Bean, Bambara Groundnut and Mung Bean

Authors: Sharon K. Mahlangu, Mustapha Mohammed, Felix D. Dakora

Abstract:

Rural households in Africa depend largely on legumes as a source of high-protein food due to N₂ fixation by rhizobia when they infect plant roots. However, the legume/rhizobia symbiosis can exhibit some level of specificity, such that some legumes may be selectively nodulated by only a particular group of rhizobia; in contrast, other legumes are highly promiscuous and are nodulated by a wide range of rhizobia. Little is known about the nodulation promiscuity of bacterial symbionts from wild legumes such as Aspalathus linearis, especially whether they can nodulate cultivated grain legumes such as cowpea and Kersting's groundnut. Determining the host range of the symbionts of wild legumes can potentially reveal novel rhizobial strains that can be used to increase nitrogen fixation in cultivated legumes. In this study, bacteria were isolated and tested for their ability to induce root nodules on their homologous hosts. Seeds were surface-sterilized with alcohol and sodium hypochlorite and planted in sterile sand contained in plastic pots. The pot surface was covered with sterile non-absorbent cotton wool to avoid contamination. The plants were watered alternately with nitrogen-free nutrient solution and sterile water. Three replicate pots were used per isolate. The plants were grown for 90 days in a naturally lit glasshouse and assessed for nodulation (nodule number and nodule biomass) and shoot biomass. Seven isolates from each of Kersting's groundnut and cowpea, and two from Rooibos tea plants, were tested for their ability to nodulate soybean, mung bean, common bean and Bambara groundnut. The results showed that, of the isolates from cowpea, VUSA55 and VUSA42 nodulated all test host plants, followed by VUSA48, which nodulated cowpea, Bambara groundnut and soybean. The two isolates from Rooibos tea plants nodulated Bambara groundnut, soybean and common bean; isolate L1R3.3.1 also nodulated mung bean.
There was a greater accumulation of shoot biomass when cowpea isolate VUSA55 nodulated common bean. Isolate VUSA55 produced the highest shoot biomass, followed by VUSA42 and VUSA48. The two Kersting’s groundnut isolates, MGSA131 and MGSA110, accumulated average shoot biomass. In contrast, the two Rooibos tea isolates induced a higher accumulation of biomass in Bambara groundnut, followed by common bean. The results suggest that inoculating these agriculturally important grain legumes with cowpea isolates can contribute to improved soil fertility, especially soil nitrogen levels.

Keywords: legumes, nitrogen fixation, nodulation, rhizobia

2819 Application of Nanoparticles on Surface of Commercial Carbon-Based Adsorbent for Removal of Contaminants from Water

Authors: Ahmad Kayvani Fard, Gordon Mckay, Muataz Hussien

Abstract:

Adsorption is believed to be one of the optimal processes for the removal of heavy metals from water due to its low operational and capital cost as well as its high removal efficiency. Different materials have been reported in the literature as adsorbents for heavy metal removal from wastewater, such as natural sorbents, synthetic organic polymers and inorganic mineral materials. The selection of adsorbents and the development of new functional materials that achieve good removal of heavy metals from water is an important task and depends on many factors, such as the availability, cost and safety of the material. In this study, we report the synthesis of activated carbon (AC) and carbon nanotubes (CNTs) doped with different loadings of metal oxide nanoparticles, such as Fe₂O₃, Fe₃O₄, Al₂O₃, TiO₂, SiO₂ and Ag nanoparticles, and their application in the removal of heavy metals, hydrocarbons and organics from wastewater. Commercial AC and CNTs with different loadings of the above nanoparticles were prepared; the effects of pH, adsorbent dosage, sorption kinetics and concentration were studied, and the optimum conditions for removal of heavy metals from water are reported. The prepared composite sorbents were characterized using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), thermogravimetric analysis (TGA), X-ray diffraction (XRD), the Brunauer-Emmett-Teller (BET) nitrogen adsorption technique, and zeta potential measurements. The composite materials showed higher removal efficiency and superior adsorption capacity compared to commercially available carbon-based adsorbents. The specific surface area of the AC increased by 50%, reaching up to 2000 m²/g, while that of the CNTs increased more than eightfold, reaching 890 m²/g. The increased surface area is, along with the surface charge of the material, one of the key parameters determining the removal efficiency. Moreover, the surface charge density of the impregnated CNTs and AC was enhanced significantly, which benefits the adsorption process. The nanoparticles also enhance the catalytic activity of the material and reduce its agglomeration and aggregation, providing more active sites for adsorbing contaminants from water. Results for treating wastewater include 100% removal of BTEX, arsenic, strontium, barium, phenolic compounds, and oil from water. These results are promising for the use of AC and CNTs loaded with metal oxide nanoparticles in the treatment and pretreatment of wastewater and produced water before the desalination process. Adsorption can be very efficient, with low energy consumption and economic feasibility.
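The abstract reports adsorption capacities and sorption kinetics but does not name the isotherm model used; a common way to quantify capacity for carbon-based sorbents is the Langmuir isotherm, fitted in its linearized form Ce/qe = Ce/qmax + 1/(KL·qmax). The sketch below uses entirely synthetic data (qmax and KL are illustrative, not values from the study):

```python
import numpy as np

# Synthetic equilibrium data generated from a known Langmuir isotherm
# (q_max and K_L are illustrative, not values measured in the study).
q_max_true, k_l_true = 200.0, 0.05               # mg/g, L/mg
c_e = np.array([5.0, 10.0, 20.0, 50.0, 100.0])   # equilibrium conc., mg/L
q_e = q_max_true * k_l_true * c_e / (1.0 + k_l_true * c_e)

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax), so a straight-line
# fit of Ce/qe against Ce yields slope = 1/qmax, intercept = 1/(KL*qmax).
slope, intercept = np.polyfit(c_e, c_e / q_e, 1)
q_max_fit = 1.0 / slope
k_l_fit = slope / intercept

print(q_max_fit, k_l_fit)  # recovers q_max ≈ 200 and K_L ≈ 0.05
```

In practice the fitted qmax (mg/g) is the number quoted when comparing the composite sorbents against the commercial adsorbents.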

Keywords: carbon nanotube, activated carbon, adsorption, heavy metal, water treatment

2818 A Joinpoint Regression Analysis of Trends in Tuberculosis Notifications in Two Urban Regions in Namibia

Authors: Anna M. N. Shifotoka, Richard Walker, Katie Haighton, Richard McNally

Abstract:

An analysis of trends in case notification rates (CNR) can be used to monitor the impact of tuberculosis (TB) control interventions over time in order to inform the implementation of current and future TB interventions. A retrospective analysis of trends in TB CNR was conducted for two urban regions in Namibia, namely the Khomas and Erongo regions. TB case notification data were obtained from the annual TB reports of the national TB programme, Ministry of Health and Social Services, covering the period from 1997 to 2015. Joinpoint regression was used to analyse trends in CNR for the different TB groups. A trend was considered statistically significant when the p-value was less than 0.05. During the period under review, the crude CNR for all forms of TB declined from 808 to 400 per 100 000 population in Khomas, and from 1051 to 611 per 100 000 population in Erongo. In both regions, significant change points in trends were observed for all TB groups examined. In the Khomas region, the trend for new smear-positive pulmonary TB increased significantly at an annual rate of 4.1% (95% confidence interval (CI): 0.3% to 8.2%) during the period 1997 to 2004, and thereafter declined significantly by -6.2% (95% CI: -7.7% to -4.3%) per year until 2015. Similarly, the trend for smear-negative pulmonary TB increased significantly by 23.7% (95% CI: 9.7% to 39.5%) per year from 1997 to 2004, and thereafter declined significantly at an annual rate of -26.4% (95% CI: -33.1% to -19.8%). The trend for all forms of TB CNR in the Khomas region increased significantly by 8.1% (95% CI: 3.7% to 12.7%) per year from 1997 to 2004, and thereafter declined significantly at a rate of -8.7% (95% CI: -10.6% to -6.8%). In the Erongo region, the trend for smear-positive pulmonary TB increased at a rate of 1.2% (95% CI: -1.2% to 3.6%) annually during the earlier years (1997 to 2008), and thereafter declined significantly by -9.3% (95% CI: -13.3% to -5.0%) per year from 2008 to 2015. Also in Erongo, the trend for all forms of TB CNR increased significantly at an annual rate of 4.0% (95% CI: 1.4% to 6.6%) from 1997 to 2006, and thereafter declined significantly by -10.4% (95% CI: -12.7% to -8.0%) per year from 2006 to 2015. The trend for extra-pulmonary TB CNR declined but did not reach statistical significance in either region. In conclusion, CNRs declined for all TB groups examined in both regions. Further research is needed to study trends in other TB dimensions, such as treatment outcomes and notification of drug-resistant TB cases.
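Within each joinpoint segment, the annual percent change is obtained from a log-linear regression of the rate on calendar year: APC = 100·(e^b − 1), where b is the fitted slope of log(rate). A minimal sketch of that calculation (the rates below are synthetic, not the study's notification data):

```python
import numpy as np

def annual_percent_change(years, rates):
    """APC from a log-linear fit: log(rate) = a + b*year, APC = 100*(e^b - 1)."""
    slope, _ = np.polyfit(years, np.log(rates), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Synthetic segment: a rate declining by exactly 8.7% per year,
# mimicking the magnitude of the Khomas all-forms decline.
years = np.arange(2004, 2016)
rates = 800.0 * 0.913 ** (years - years[0])

print(round(annual_percent_change(years, rates), 2))  # -8.7
```

Joinpoint software additionally searches for the change points themselves and tests each segment's slope against zero, which is where the reported confidence intervals and p-values come from.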

Keywords: epidemiology, Namibia, temporal trends, tuberculosis

2817 Effect of Plant Growth Regulators on in vitro Biosynthesis of Antioxidative Compounds in Callus Culture and Regenerated Plantlets Derived from Taraxacum officinale

Authors: Neha Sahu, Awantika Singh, Brijesh Kumar, K. R. Arya

Abstract:

Taraxacum officinale Weber, or dandelion (Asteraceae), is an important Indian traditional herb used in liver detoxification and in treating digestive problems and spleen, hepatic and kidney disorders, etc. The plant is well known to possess important phenolics and flavonoids and to serve as a potential source of antioxidative and chemoprotective agents. Biosynthesis of bioactive compounds through in vitro cultures is a requisite for natural resource conservation and provides an alternative source for pharmaceutical applications. Thus, an efficient and reproducible protocol was developed for in vitro biosynthesis of bioactive antioxidative compounds from leaf-derived callus and in vitro regenerated cultures of Taraxacum officinale using MS media fortified with various combinations of auxins and cytokinins. MS medium containing 0.25 mg/l 2,4-D (2,4-dichlorophenoxyacetic acid) with 0.05 mg/l 2-iP [N⁶-(2-isopentenyl)adenine] was found to be an effective combination for the establishment of callus, with a 92% callus induction frequency. Moreover, 2.5 mg/l NAA (α-naphthalene acetic acid) with 0.5 mg/l BAP (6-benzylaminopurine), and 1.5 mg/l NAA, showed the optimal responses for in vitro plant regeneration (80% regeneration frequency) and rooting, respectively. In vitro regenerated plantlets were then transferred to soil and acclimatized. The quantitative variability of the accumulated bioactive compounds in the cultures (in vitro callus, plantlets and acclimatized plants) was determined through UPLC-MS/MS (ultra-performance liquid chromatography-triple quadrupole-linear ion trap mass spectrometry) and compared with wild plants. The phytochemical determination of the in vitro and wild-grown samples showed the accumulation of six compounds. In in vitro callus cultures and regenerated plantlets, two major antioxidative compounds, chlorogenic acid (14950.0 µg/g and 4086.67 µg/g) and umbelliferone (10400.00 µg/g and 2541.67 µg/g), were found, respectively. Scopoletin was highest in in vitro regenerated plants (83.11 µg/g) compared to wild plants (52.75 µg/g). Notably, scopoletin was not detected in callus or acclimatized plants, whereas quinic acid (6433.33 µg/g) and protocatechuic acid (92.33 µg/g) accumulated at the highest levels in acclimatized plants compared to the other samples. Wild-grown plants contained the highest content (948.33 µg/g) of the flavonoid glycoside luteolin-7-O-glucoside. Our data suggest that in vitro callus and regenerated plants biosynthesize higher contents of antioxidative compounds under controlled conditions than wild-grown plants. These standardized culture conditions may be explored as a sustainable source of plant material for enhanced production and adequate supply of antioxidative polyphenols.

Keywords: anti-oxidative compounds, in vitro cultures, Taraxacum officinale, UPLC-MS/MS

2816 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle

Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha

Abstract:

An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded-hull boat is deployed, which is rugged, tough, quick to deploy and fast-moving; it is suitable for environmental monitoring and designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of speeds up to 7 km/h. The motor is integrated with an open source controller based on a Cortex-M4F for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One channel of a 2.4 GHz 8-channel radio-link transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system is fitted to the electric outboard motor to determine range and GPS position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux-based open source processor, facilitating real-time capture of thermal images; the results are stored on a micro SD card, the data storage device of the system. The thermal camera is interfaced to the processor through the SPI protocol. The thermal images are used for finding oil spills and for locating people who are drowning in low visibility at night. A real-time clock (RTC) module attached to the battery provides the date and time of the captured thermal images. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, achieving a range of up to 40 miles at higher output power. A multi-parameter probe is used to measure conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH, temperature, water level and absolute pressure. It can withstand a maximum pressure of 160 psi, corresponding to depths of up to 100 m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle.

Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe

2815 An Assessment of Involuntary Migration in India: Understanding Issues and Challenges

Authors: Rajni Singh, Rakesh Mishra, Mukunda Upadhyay

Abstract:

India is among the nations born out of a partition that led to one of the greatest forced migrations of the past century. The Indian subcontinent was partitioned into two nation-states, namely India and Pakistan, leading to an unprecedented mass displacement of about 20 million people in the subcontinent as a whole. This exemplifies the socio-political form of displacement, but there are other identified causes of human displacement, viz. natural calamities, development projects, and people trafficking and smuggling. Although forced migrations are rare in incidence, they are mostly region-specific, and a very small percentage of the population appears to be affected; however, when this percentage is translated into volume, the real impact of such migration can be appreciated. Forced migration is thus an issue affecting the lives of many people and needs to be addressed with proper interventions. Forced or involuntary migration decimates people's assets, takes from them their most basic resources, and makes them migrate without planning or intention, which in most cases proves to be a burden on the resources of the destination. Thus, questions arise concerning the protection and safeguarding of these migrants, who need help at the place of destination; this brings the human security dimension of forced migration into the picture. The present study is an analysis of a sample of 1501 persons surveyed by the National Sample Survey Organisation (NSSO) in India, which identifies three reasons for forced migration: natural disaster, social/political problems and displacement by development projects. It was observed that, of the total forced migrants, about four-fifths were internally displaced persons. However, there was also a huge inflow of such migrants to the country from across its borders, the major contributing countries being Bangladesh, Pakistan, Sri Lanka, the Gulf countries and Nepal. Among the three reasons for involuntary migration, social and political problems are the most prominent in displacing huge masses of population; they are also the reason for which the share of international migrants relative to the internally displaced is higher compared to the other two factors. Second to political and social problems, natural calamities displaced a large portion of the involuntary migrants. The present paper examines the factors that increase people's vulnerability to forced migration. Perusal of the background characteristics of the migrants shows that those who are economically weak and socially fragile are more susceptible to migration. Insight into this fragile group of society is therefore required so that government policies can benefit them in the most efficient and targeted manner.

Keywords: involuntary migration, displacement, natural disaster, social and political problem

2814 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging ("topography") entered into practical use when Lang designed an "easy" technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of monochromatic topography known as "Rocking Curve Imaging" (RCI) was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data consist of a number (~300) of images recorded along the diffraction ("rocking") curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the voxel's integrated intensity (the only data provided by the previous techniques) but also its mosaic spread (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of crystals and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al₂O₃, GaN and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of defects within deposited layers, or their effect on the substrate. We could also observe (thanks to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of a µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
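The per-pixel quantities described above (integrated intensity, peak position and FWHM of each rocking curve) can be sketched with a few array reductions over the image stack. The example below uses a synthetic stack of Gaussian rocking curves (the angular range, detector size and mosaic widths are made up for illustration) and estimates the FWHM from the curve's angular variance, which is exact for a Gaussian profile:

```python
import numpy as np

def rocking_curve_maps(angles, stack):
    """Per-pixel integrated intensity, peak position (centroid) and FWHM.

    angles: (n,) rocking angles; stack: (n, H, W) intensity images.
    The FWHM is estimated as 2*sqrt(2*ln 2 * variance), which is exact
    for Gaussian-shaped rocking curves.
    """
    theta = angles[:, None, None]
    total = stack.sum(axis=0)                      # integrated intensity map
    peak = (stack * theta).sum(axis=0) / total     # centroid (peak position) map
    var = (stack * (theta - peak) ** 2).sum(axis=0) / total
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0) * var)  # mosaic spread map
    return total, peak, fwhm

# Synthetic stack: 301 images of a 4x4-pixel detector, one Gaussian
# rocking curve per pixel with known centre and width (illustrative values).
angles = np.linspace(-5.0, 5.0, 301)               # rocking angle, mrad
centres = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
sigma = 0.5
stack = np.exp(-(angles[:, None, None] - centres) ** 2 / (2 * sigma ** 2))

total, peak, fwhm = rocking_curve_maps(angles, stack)
# peak ≈ centres; fwhm ≈ 2.355 * sigma ≈ 1.18 mrad for every pixel
```

In the real experiment the same reductions are simply applied to the ~300 detector frames, producing the integrated-intensity, peak-position and FWHM maps discussed in the text.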

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

2813 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions

Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella

Abstract:

Energy usage has increased throughout the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton exchange membrane fuel cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of a PEMFC is the gas diffusion layer (GDL), which manages water and gas transport and directly influences fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as the pore former, to improve gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of PUREBLACK carbon powder, sodium dodecyl sulfate as a surfactant, and 34 wt.% PTFE; the gradient porosity was created by applying one layer using 30 wt.% PEG on the carbon substrate, followed by a second layer without any pore former. The total carbon loading of the microporous layer is ~3 mg cm⁻². For the assembly of the catalyst layer, Nafion membrane (Ion Power, Nafion Membrane NR211) and Pt/C electrocatalyst (46.1 wt.%) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~0.4 mg cm⁻², and the active area is 5 cm². The sample was characterized ex-situ via wetting angle measurement, scanning electron microscopy (SEM), and pore size distribution (PSD) analysis to evaluate its characteristics. Furthermore, for the performance evaluation, in-situ characterization via fuel cell testing using H₂/O₂ and H₂/air as reactants, under 50, 60, 80, and 100% relative humidity (RH), was carried out. The results were compared to a single-layer GDL fabricated with the same carbon powder and loading as the dual-layer GDL, and to a commercially available GDL with MPL (AvCarb2120). The findings reveal highly hydrophobic properties of the microporous layer for both PUREBLACK-based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all RH conditions, whereas the single-layer GDL shows a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to have improved gas diffusion and water management in the fuel cell system. Because its porosity increases from the catalyst layer to the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer and more efficient water removal from the catalyst layer, leading to higher performance and stability.

Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity

2812 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of the research: Axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units, defining the absolute and relative performance indicators of the engine as a whole. Compressor failure often leads to drastic consequences, so safe (stable) operation must be maintained when using an axial compressor. Currently, we can observe a tendency toward increased power, productivity, circumferential velocity and compression ratio in the axial compressors of aircraft and ground-based gas turbine engines, whereas the metal consumption of their structures tends to fall. This causes increased dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements due to transient processes. In the operating practice of aeronautical engineering and of ground units with gas turbine drives, loss of operational stability of gas turbine engines is one of the relatively frequent failure causes and can lead to emergency situations. Surge is considered an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for preventing surge before the fact remains relevant. This is why research on transient processes in axial compressors is necessary in order to provide efficient, stable and secure operation. The paper addresses the problem of automatic control system improvement by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, results of numerical simulation of the airflow over the airfoil at design and stalled modes, and experimental research to form the criteria that identify the compressor state at pre-surge mode detection. The authors formulated basic approaches for developing surge prevention systems, i.e., algorithms that allow detecting surge origination and systems that implement the proposed algorithms.
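As a toy illustration of the last point, detecting surge origination from a sensor signal, one common generic scheme flags a pre-surge state when the short-window RMS of pressure fluctuations grows beyond a threshold. The sketch below is an illustration with synthetic data and assumed amplitudes, not the algorithm developed in the paper:

```python
import numpy as np

def detect_presurge(signal, window=50, threshold=0.1):
    """Return the sample index at which the rolling RMS of the (zero-mean)
    signal first exceeds the threshold, or -1 if it never does."""
    x = signal - signal.mean()
    rms = np.sqrt(np.convolve(x ** 2, np.ones(window) / window, mode="valid"))
    hits = np.nonzero(rms > threshold)[0]
    # report the last sample of the triggering window
    return int(hits[0]) + window - 1 if hits.size else -1

# Synthetic pressure trace: sensor noise, then a growing oscillation from
# sample 600 onward (a stylized pre-surge signature; values are made up).
rng = np.random.default_rng(0)
t = np.arange(1000)
p = rng.normal(0.0, 0.01, t.size)
p[600:] += 0.5 * np.sin(2 * np.pi * 0.05 * t[600:])

alarm = detect_presurge(p)  # fires shortly after the oscillation onset at 600
```

A practical anti-surge controller would combine several such criteria (pressure, flow, rotor vibration) and act on bleed valves or guide vanes once the pre-surge state is flagged.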

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

2811 Perceptions and Expectations by Participants of Monitoring and Evaluation Short Course Training Programmes in Africa

Authors: Mokgophana Ramasobana

Abstract:

Background: At the core of the demand to utilize evidence-based approaches in the policy-making cycle, the prioritization of limited financial resources and results-driven initiatives is the urgency to develop a cohort of competent Monitoring and Evaluation (M&E) practitioners and public servants. The ongoing strides in evaluation capacity building (ECB) initiatives are a direct response to produce the highly sought-after M&E skills. Notwithstanding the rapid growth of M&E short courses, participants' perceived value and expectations of M&E short courses as a panacea for ECB have not been empirically quantified or measured. The objective of this article is to illustrate the importance of measuring ECB interventions and understanding what works in ECB and why. Objectives: This article illustrates the importance of establishing empirical ECB measurement tools to evaluate ECB interventions in order to ascertain their contribution to the broader evaluation practice. Method: The study was primarily a desktop review of existing literature, complemented by a survey of participants across the African continent based on the 43 M&E short courses hosted by the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA) in collaboration with the Department of Planning, Monitoring and Evaluation (DPME). Results: The article established that participants perceive short course training as a panacea to improve the practical M&E skills critical to executing their organizational duties. In tandem, participants are likely to demand customized training as opposed to general topics in evaluation. However, organizational environments constrain the application of newly acquired skills. Conclusion: This article aims to contribute to the discourse on how to measure ECB interventions and towards improving the evaluation of ECB interventions. The study finds that participants prefer training courses with a longer duration that cover more topics.
At the same time, whilst organizations call for customization of programmes, the study found that individual participants demand knowledge of generic and popular evaluation topics.

Keywords: evaluation capacity building, effectiveness and training, monitoring and evaluation (M&E) short course training, perceptions and expectations

Procedia PDF Downloads 119
2810 Monetary Policy and Asset Prices in Nigeria: Testing for the Direction of Relationship

Authors: Jameelah Omolara Yaqub

Abstract:

One of the main reasons for the existence of central banks is the belief that they have some influence on private sector decisions, which enables a central bank to achieve some of its objectives, especially those of stable prices and economic growth. Under the New Keynesian assumption that prices are not fully flexible in the short run, the central bank can temporarily influence the real interest rate and, therefore, have an effect on real output in addition to nominal prices. There is, therefore, the need for the central bank to monitor, respond to, and influence private sector decisions appropriately. This shows that the central bank and the private sector will both affect and be affected by each other, implying considerable interdependence between the sectors. The interdependence may be simultaneous or not, depending on the level of information readily available and on how sensitive prices are to agents’ expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous or not and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies ignored the possibility of simultaneous interdependence, while those that allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy where information might not be readily available to influence agents’ expectations. In this study, the direction of the relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested.
Asset prices will be represented by the NSE index as well as real estate prices, while monetary policy will be represented by the money supply and the monetary policy rate (MPR), respectively. The VAR model will be used to analyse the relationship between the variables in order to take account of potential simultaneity of interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to effectively influence private sector decisions and thereby achieve its objectives of price stability and economic growth.
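
To make the direction-of-relationship test concrete, here is a minimal sketch (with simulated rather than Nigerian data) of the regression comparison underlying a one-lag Granger causality test; the function name and lag choice are illustrative assumptions, and a real study would use standard econometric software:

```python
import numpy as np

def granger_f_stat(y, x, lag=1):
    """F-statistic (one restriction) testing whether x lagged by `lag`
    helps predict y beyond y's own lag -- a minimal Granger-style check."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    Y = y[lag:]
    ones = np.ones_like(Y)
    X_restricted = np.column_stack([ones, y[:-lag]])              # own lag only
    X_unrestricted = np.column_stack([ones, y[:-lag], x[:-lag]])  # add lagged x

    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        resid = Y - X @ beta
        return resid @ resid

    rss_r, rss_u = rss(X_restricted), rss(X_unrestricted)
    df = len(Y) - X_unrestricted.shape[1]
    return (rss_r - rss_u) / (rss_u / df)   # one linear restriction
```

A large F-statistic indicates that the lagged series adds predictive power, i.e., Granger-causes the dependent variable; comparing the statistic in both directions reveals the direction of the relationship.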

Keywords: asset prices, granger causality, monetary policy rate, Nigeria

Procedia PDF Downloads 205
2809 Sustainable Hydrogel Nanocomposites Based on Grafted Chitosan and Clay for Effective Adsorption of Cationic Dye

Authors: H. Ferfera-Harrar, T. Benhalima, D. Lerari

Abstract:

Contamination of water due to the discharge of untreated industrial wastewater into ecosystems has become a serious problem for many countries. In this study, bioadsorbents based on chitosan-g-poly(acrylamide) and montmorillonite (MMt) clay (CTS-g-PAAm/MMt) hydrogel nanocomposites were prepared via free-radical graft copolymerization and crosslinking of acrylamide monomer (AAm) onto the natural polysaccharide chitosan (CTS) as a backbone, in the presence of various contents of MMt clay as nanofiller. They were then hydrolyzed to obtain highly functionalized pH-sensitive nanomaterials with superior swelling properties. Their structure was characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The adsorption performance of the developed nanohybrids was examined for the removal of methylene blue (MB) cationic dye from aqueous solutions. The factors affecting the removal of MB, such as clay content, pH, adsorbent dose, initial dye concentration and temperature, were explored. The adsorption process was found to be highly pH dependent. The adsorption kinetics showed that the prepared adsorbents have remarkable adsorption capacity and a fast adsorption rate: more than 88% MB removal efficiency was reached after 50 min in a 200 mg L-1 dye solution. In addition, incorporating clay enhanced the adsorption capacity of the CTS-g-PAAm matrix from 1685 mg g-1 to a highest value of 1749 mg g-1 for the optimized nanocomposite containing 2 wt.% MMt. The experimental kinetic data were well described by the pseudo-second-order model, while the equilibrium data were represented perfectly by the Langmuir isotherm model. The maximum Langmuir equilibrium adsorption capacity (qm) was found to increase from 2173 mg g−1 to 2221 mg g−1 upon adding 2 wt.% of clay nanofiller. Thermodynamic parameters revealed the spontaneous and endothermic nature of the process.
In addition, the reusability study revealed that these bioadsorbents can be regenerated well, with desorption efficiencies above 87%, and without any obvious decrease in removal efficiency compared to the starting materials: removal still exceeded 64% even after four consecutive adsorption/desorption cycles. These results suggest that the optimized nanocomposites are promising low-cost bioadsorbents.
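
For readers unfamiliar with the two models named above, the following sketch evaluates the Langmuir isotherm and the integrated pseudo-second-order kinetic form; the parameter values used below are illustrative assumptions, not the fitted values from this study:

```python
def langmuir_qe(ce, qm, kl):
    """Langmuir isotherm: equilibrium uptake qe = qm*KL*Ce / (1 + KL*Ce),
    saturating toward the monolayer capacity qm at high Ce (mg/L)."""
    return qm * kl * ce / (1.0 + kl * ce)

def pseudo_second_order_qt(t, qe, k2):
    """Integrated pseudo-second-order kinetics:
    qt = k2*qe^2*t / (1 + k2*qe*t), approaching qe as t grows."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)
```

Both forms are monotonically increasing and saturating, which is why fitted qm (isotherm) and qe (kinetics) bound the uptake values reported in the abstract.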

Keywords: chitosan, clay, dye adsorption, hydrogel nanocomposites

Procedia PDF Downloads 113
2808 New Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The 1934 book Inequalities by Hardy, Littlewood and Pólya was the first significant systematic treatment of the subject; it presents fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These results were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases, using concepts from time-scale calculus, which allows one to unify and extend many problems from the theories of differential and difference equations, together with the chain rule, properties of multiple integrals on time scales, Fubini-type theorems and Hölder's inequality.
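
For reference, the classical one-dimensional Hardy inequality that the study builds on can be stated as follows, for p > 1 and a nonnegative measurable f:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, dx ,
\qquad p > 1,\; f \ge 0,
```

where the constant (p/(p-1))^p is sharp; the time-scale versions replace the integrals with delta-integrals over an arbitrary nonempty closed subset of the reals.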

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 79
2807 Tuberculosis in Humans and Animals in the Eastern Part of the Sudan

Authors: Yassir Adam Shuaib, Stefan Niemann, Eltahir Awad Khalil, Ulrich Schaible, Lothar Heinz Wieler, Mohammed Ahmed Bakhiet, Abbashar Osman Mohammed, Mohamed Abdelsalam Abdalla, Elvira Richter

Abstract:

Tuberculosis (TB) is a chronic bacterial disease of humans and animals characterized by the progressive development of specific granulomatous tubercle lesions in affected tissues. In a six-month study, from June to November 2014, a total of 2,304 carcasses of cattle, camels, sheep, and goats slaughtered at the East and West Gaash slaughterhouses, Kassala, were investigated post mortem; in parallel, 101 sputum samples were collected from TB-suspected patients at the Kassala and El-Gadarif Teaching Hospitals, in order to investigate tuberculosis in animals and humans. Only 0.1% of carcasses were found with suspected TB lesions, in the liver, lung and peritoneal cavity of two sheep; no tuberculous lesions were found in the carcasses of cattle, goats or camels. All samples, tissue lesions and sputum, were decontaminated by the NALC-NaOH method and cultured for mycobacterial growth at the NRZ for Mycobacteria, Research Center Borstel, Germany. Genotyping and molecular characterization of the grown strains were done by line probe assay (GenoType CM and MTBC) and 16S rDNA, rpoB gene, and ITS sequencing, spoligotyping, MIRU-VNTR typing and next generation sequencing (NGS). Culture of the specimens revealed growth of organisms from 81.6% of all samples. Mycobacterium tuberculosis (76.2%), M. intracellulare (14.2%), mixed infection with M. tuberculosis and M. intracellulare (6.0%), and mixed infection with M. tuberculosis and M. fortuitum and with M. intracellulare and an unknown species (1.2%) were detected in the sputum samples, and an unknown species (1.2%) was detected in the tissue samples of one of the animals. Of the 69 M. tuberculosis strains, 25 (36.2%) were either mono-drug-resistant, poly-drug-resistant or multi-drug-resistant, but none was extensively drug-resistant. In conclusion, the prevalence of TB in animals was very low, while in humans the M. tuberculosis Delhi/CAS lineage was responsible for most cases and there was evidence of MDR transmission and acquisition.

Keywords: animal, human, slaughterhouse, Sudan, tuberculosis

Procedia PDF Downloads 352
2806 Cytotoxic Effect of Biologically Transformed Propolis on HCT-116 Human Colon Cancer Cells

Authors: N. Selvi Gunel, L. M. Oktay, H. Memmedov, B. Durmaz, H. Kalkan Yildirim, E. Yildirim Sozmen

Abstract:

Objective: Propolis, which consists of compounds accepted as antioxidant, antimicrobial, antiseptic, antibacterial, anti-inflammatory, anti-mutagenic, immune-modulating and cytotoxic, is frequently used in current therapeutic applications. However, some preparations cause allergic side effects, restricting their consumption. Our group has previously succeeded in producing a new, less allergenic biotechnological product. In this study, we aim to optimize the production conditions of this biologically transformed propolis and determine the cytotoxic effects of the obtained products on a colon cancer cell line (HCT-116). Method: Solid propolis samples were dissolved in water after weighing, grinding and sizing (35-mesh sieve), and 40 kHz/10 min ultrasonication was applied. Samples were prepared by inoculation with Lactobacillus plantarum in two different proportions (2.5% and 3.5%). Chromatographic analyses of propolis were performed on a UPLC-MS/MS (Waters, Milford, MA) system, and the results were analysed with MassLynx™ 4.1 software. HCT-116 cells were treated with the propolis samples at concentrations of 25-1000 µg/ml, and cytotoxicity was measured using the WST-8 assay at 24, 48, and 72 hours. Biologically transformed samples were compared with non-transformed control samples. The experimental groups were as follows: untreated (group 1); propolis dissolved in water, ultrasonicated at 40 kHz/10 min (group 2); propolis dissolved in water, ultrasonicated at 40 kHz/10 min and inoculated with 2.5% L. plantarum strain L1 (group 3); propolis dissolved in water, ultrasonicated at 40 kHz/10 min and inoculated with 3.5% L. plantarum strain L3 (group 4). The data were processed with GraphPad software V5 and analyzed by two-way ANOVA followed by the Bonferroni test. Results: The cytotoxic effect of the propolis samples on HCT-116 cells was evaluated.
At 24 hours, at a concentration of 1000 µg/ml, there was a 7.21-fold increase in group 3 compared to group 2, and a 6.66-fold increase in group 3 compared to group 1. At 48 hours, at a concentration of 500 µg/ml, a 4.7-fold increase was determined in group 4 compared to group 3; at 750 µg/ml, a 2.01-fold increase in group 4 compared to group 3 and a 3.1-fold increase in group 4 compared to group 2. At 72 hours, at 750 µg/ml, a 2.42-fold increase was determined in group 3 compared to group 2, and at 1000 µg/ml, a 2.13-fold increase in group 4 compared to group 2. According to the cytotoxicity results, the group ultrasonicated at 40 kHz/10 min and inoculated with 3.5% L. plantarum strain L3 had the highest cytotoxic effect. Conclusion: It is known that the bioavailability of propolis is halved within six months. Our data indicated that biologically transformed propolis had a greater cytotoxic effect on colon cancer cells than the non-transformed group. Consequently, we suggest that L. plantarum transformation both reduces allergenicity and extends the bioavailability period by enhancing healthful polyphenols.

Keywords: bio-transformation, propolis, colon cancer, cytotoxicity

Procedia PDF Downloads 125
2805 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation-based X-ray imaging, especially for soft tissues in the medical imaging energy range, which can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies used a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. This setup, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT.
A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the issue that neural networks require large amounts of training data to produce high-quality reconstructions.
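
As a sketch of what a speckle tracking step can look like, the following illustrative integer-pixel version (not the project's actual algorithm) estimates the displacement of a speckle pattern between a reference and a sample image by maximizing a zero-mean cross-correlation over a small search window; the window size is an assumption:

```python
import numpy as np

def track_shift(ref, sample, max_shift=5):
    """Estimate the integer (dy, dx) displacement of `sample` relative
    to `ref` by exhaustively maximizing zero-mean cross-correlation
    over a (2*max_shift+1)^2 search window. Illustrative only; real
    speckle tracking uses subpixel refinement and local windows."""
    ref0 = ref - ref.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(sample, (dy, dx), axis=(0, 1))
            score = np.sum(ref0 * (shifted - shifted.mean()))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Applied per local window rather than globally, such a displacement map is what the phase retrieval step converts into a differential phase image.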

Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast

Procedia PDF Downloads 245
2804 Turning Points in the Development of Translator Training in the West from the 1980s to the Present

Authors: B. Sayaheen

Abstract:

The translator’s competence is one of the topics that has received a great deal of attention in translation studies, because such competencies are still debated and not yet agreed upon, and scholars tackle the topic from different points of view. Approaches to teaching these competencies have gone through several developments. This paper aims to investigate these developments, exploring the major turning points and shifts in teaching methods in translator training. The significance of these turning points and their external or internal causes are also discussed. Based on the past and present status of teaching approaches in translator training, this paper tries to predict the future of these approaches. The paper is mainly concerned with developments in teaching approaches in the West from the 1980s to the present. This specific period was chosen not because translator training started in the 1980s, but because most criticism of the teacher-centered approach started at that time. The implications of this research stem from the fact that it identifies the turning points and the causes that led teachers to adopt student-centered rather than teacher-centered approaches and then to incorporate technology and the Internet in translator training; these causes were classified as external or internal. Translation programs in the West and in other cultures can benefit from this study. Programs in the West can note that teaching translation is geared toward incorporating more technologies; if these programs already use technology and the Internet to teach translation, they might benefit from the assumed future direction of teaching translation. On the other hand, some non-Western countries, and to be specific some professors, are still applying the teacher-centered approach.
Moreover, these programs should include technology and the Internet in their teaching approaches to meet the drastic changes in the translation process, which increasingly relies on software and technologies to accomplish the translator’s tasks. Finally, translator training has borrowed many of its approaches from other disciplines, mainly language teaching. Teaching approaches in translator training have developed from teacher-centered to student-centered and then toward the integration of technologies and the Internet, with both internal and external causes playing a crucial role. These borrowed approaches should be comprehensively evaluated to see whether they achieve the goals of translator training; such evaluation may lead to new teaching approaches developed specifically for translator training. While considering these methods and designing new approaches, we need to keep an eye on the future needs of the market.

Keywords: turning points, developments, translator training, market, the West

Procedia PDF Downloads 104
2803 Studies on Biojetfuel Obtained from Vegetable Oil: Process Characteristics, Engine Performance and Their Comparison with Mineral Jetfuel

Authors: F. Murilo T. Luna, Vanessa F. Oliveira, Alysson Rocha, Expedito J. S. Parente, Andre V. Bueno, Matheus C. M. Farias, Celio L. Cavalcante Jr.

Abstract:

Aviation jetfuel used in aircraft gas-turbine engines is customarily obtained from the kerosene distillation fraction of petroleum (150-275°C). Mineral jetfuel consists of a hydrocarbon mixture containing paraffins, naphthenes and aromatics, with low olefin content. In order to ensure safety, jetfuels must meet several stringent requirements, such as high energy density, low risk of explosion, physicochemical stability and a low pour point. In this context, aviation fuels obtained from biofeedstocks (which have been coined ‘biojetfuels’) must be used as ‘drop-in’ fuels, since adaptations to aircraft engines are undesirable, to avoid problems with operational reliability. Thus, potential aviation biofuels must present the same composition and physicochemical properties as conventional jetfuel. Among the potential feedstocks for aviation biofuel, babaçu oil, extracted from a palm tree found extensively in some regions of Brazil, contains expressive quantities of short-chain saturated fatty acids and may be an interesting choice for biojetfuel production. In this study, biojetfuel was synthesized through homogeneous transesterification of babaçu oil with methanol, and its properties were compared with those of petroleum-based jetfuel through measurements of oxidative stability, physicochemical properties and low-temperature properties. After the transesterification reactions and decantation/washing procedures, the methyl esters were purified by molecular distillation under high vacuum at different temperatures. The results indicate a significant improvement in the oxidative stability and pour point of the products compared to the fresh oil. After optimization of the operational conditions, potential biojetfuel samples were obtained, consisting mainly of C8 esters and showing a low pour point and high oxidative stability.
Jet engine tests are being conducted in an automated test bed equipped with pollutant emission analysers to study the operational performance of the obtained biojetfuel and compare it with a commercial mineral jetfuel.

Keywords: biojetfuel, babaçu oil, oxidative stability, engine tests

Procedia PDF Downloads 249
2802 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, price differences between competing products can be such that it becomes difficult to compensate for them by differences in added value, and the conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, price setting obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs. Production costs fall into two categories: fixed costs, and variable costs that depend on the quantities produced. The company cannot consider selling below what the product costs. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, giving the full unit cost of the product. The company then adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies serving mass markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to ensure their profit margins. As a result, faced with volatile market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce overheads and eliminate waste, so that the customer pays only for added value.
Based on this principle, we decided to create a project that deals with the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of a Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the product family, 2) drawing of the current state, 3) drawing of the future state, 4) action plan and implementation.
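
The cost build-up described above can be sketched numerically; the figures in the usage note are purely illustrative assumptions, not the company's data:

```python
def unit_selling_price(fixed_costs, variable_cost, distribution_cost,
                       quantity, margin_rate):
    """Selling price as described in the text: unit production cost
    (fixed costs spread over the quantity produced, plus unit variable
    cost), plus unit distribution cost, plus the company's margin."""
    unit_cost = fixed_costs / quantity + variable_cost + distribution_cost
    return unit_cost * (1.0 + margin_rate)
```

For example, with fixed costs of 100,000, a unit variable cost of 2, a unit distribution cost of 0.5 and a 20% margin, producing 10,000 units instead of 1,000 drops the price from 123.0 to 15.0, which is the economies-of-scale effect the abstract describes.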

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 136
2801 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory

Authors: Roya Yumul, Chse, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra

Abstract:

Background: Preoperative anxiety has been shown to be caused by fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the extensive 40-item questionnaire. Our primary aim in this study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and the Numerical Verbal Rating Scale (NVRS) with the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale for the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria comprised patients undergoing elective surgeries, while exclusion criteria comprised patients with anesthesia contraindications, inability to comprehend instructions, impaired judgement, a history of substance abuse, and those pregnant or lactating. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, using Spearman coefficients, chi-squared analysis, and Fisher’s exact test for comparisons. Results: Statistical analysis showed that VFAS had a higher correlation with STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds that of VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and the Categorization scale (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and the Categorization scale (14.0%).
Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), lack of information about the surgery (40.1%), and lack of information about anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation with the gold standard, STAI. Both the VFAS and Categorization tests also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation with the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited.
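
The Spearman coefficient used in the correlation analysis above can be sketched as follows (a tie-free version for illustration; the study presumably used standard statistical software):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Simple version assuming no tied values (no tie correction)."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks 0..n-1
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))
```

Because it correlates ranks rather than raw scores, it is well suited to ordinal instruments like the anxiety scales compared here: any strictly increasing relationship between two scales yields rho = 1.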

Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale

Procedia PDF Downloads 126
2800 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus

Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi

Abstract:

In an energy-intensive world, minimizing energy consumption is paramount to saving costs and reducing the carbon footprint. Improving mixture procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA)-modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed at a maximum loading of 2.5%. The rationale of the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase-change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of crumb rubber. Loadings of FT wax below 1% proved effective in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to mixture design is implemented across the different loadings of FT wax and EVA, for a fixed amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams are developed and used to predict age-induced cracking of the different long-term-aged mixtures. Modified binder rheology reveals that the strain response is not linear and that there is substantial rearrangement of polymer chains as stress is increased, depending on the age state of the mixture and the FT wax and EVA loadings. Individual effects dominate over synergistic effects in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability.
Findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is first and second order in all cases involving individual contributions and co-interaction amongst the components of the mixture.
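The second-order mixture modelling described above can be illustrated with a minimal response-surface sketch. The design points and responses below are purely hypothetical placeholders; only the model form, a full quadratic (first- and second-order terms plus the co-interaction term) in the FT wax and EVA loadings, follows the abstract:

```python
import numpy as np

# Hypothetical design points: FT wax loading (%, capped at 2.5) and EVA
# loading (%), with a synthetic rutting-parameter response for illustration.
ftwax = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 1.25, 0.5, 2.0])
eva   = np.array([2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 3.0,  4.0, 2.0])
resp  = np.array([1.10, 1.25, 1.42, 1.18, 1.33, 1.50, 1.30, 1.45, 1.20])

# Second-order response-surface model:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([
    np.ones_like(ftwax), ftwax, eva,
    ftwax ** 2, eva ** 2, ftwax * eva,
])
coeffs, *_ = np.linalg.lstsq(X, resp, rcond=None)

def predict(x1, x2):
    """Predicted response for a candidate mixture (x1 = FT wax %, x2 = EVA %)."""
    return coeffs @ np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

print(round(float(predict(1.25, 3.0)), 3))
```

In a real RSM workflow the fitted surface would then be searched for an optimum loading combination subject to the mixture constraints.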

Keywords: bitumen, crumb rubber, ethylene vinyl acetate, FT wax

Procedia PDF Downloads 163
2799 Advancing Customer Service Management Platform: Case Study of Social Media Applications

Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal

Abstract:

Social media has completely revolutionized the way communication takes place compared with even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information. Social media may be defined as the production, consumption and exchange of information across platforms for social interaction. Social media has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be termed as a process of ensuring customers' satisfaction by meeting and exceeding their wants. In delivering excellent customer service, knowing customers' expectations and where they are reaching out is important in meeting and exceeding those expectations. Facebook is one of the most used social media platforms, among others which also include Twitter, Instagram, Whatsapp and LinkedIn. This indicates customers are spending more time on social media platforms, which calls for improvement in customer service delivery over social media pages. Millions of people channel their issues, complaints, compliments and inquiries through social media. This study has been able to identify what social media customers want, their expectations and how they want to be responded to by brands and companies. The applied research methodology used in this paper was a mixed methods approach. The authors used qualitative methods, gathering critical views of experts on social media and customer relationship management through interviews, to analyse the impacts of social media on customer satisfaction. The authors also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e. customers' and companies' perceptions of the effects of social media.
The study thereby explores and gains a better understanding of how brands make use of social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies need to create good customer support using social media in order to improve customer service delivery, customer retention and referrals. Many companies have come to prefer social media platforms as a medium for handling customers' queries and ensuring their satisfaction, because social media tools are considered more transparent and effective in their operations when dealing with customer relationship management.

Keywords: brands, customer service, information, social media

Procedia PDF Downloads 252
2798 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment

Authors: Nabil Al-Zaqri

Abstract:

Semiconductor photocatalysts with surface defects display incredible light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity, as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by prolonging photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production, showing a quantum efficiency of 1.125% and efficient organic degradation. Our promising findings demonstrated that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollution degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and smaller particle size were observed.
The ultrasound treatment causes particle aggregation and surface defects leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where ¹O₂ was found to be the major active species. Hence, the ultrasound treatment influences efficient photocatalysts for superior hydrogen evolution and excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures.

Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production

Procedia PDF Downloads 127
2797 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that; one petaFLOPS is one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those calculations each second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the actual explosion, and it’s all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists that fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we receive and read our book.
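As a quick sanity check on the petaFLOPS arithmetic above (purely illustrative; only the 50 petaFLOPS figure is taken from the abstract):

```python
# peta = 10^15, so 50 petaFLOPS is fifty quadrillion
# floating-point operations per second.
PETA = 10 ** 15
flops_50_pflops = 50 * PETA

print(f"{flops_50_pflops:.1e} floating-point operations per second")
```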

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 97
2796 Considering International/Local Peacebuilding Partnerships: The Stoplights Analysis System

Authors: Charles Davidson

Abstract:

This paper presents the Stoplight Analysis System of Partnering Organizations Readiness, offering a structured framework to evaluate conflict resolution collaboration feasibility, especially crucial in conflict areas, employing a colour-coded approach and specific assessment points, with implications for more informed decision-making and improved outcomes in peacebuilding initiatives. Derived from a total of 40 years of practical peacebuilding experience between the project’s two researchers, as well as interviews with various other peacebuilding actors, this paper introduces the Stoplight Analysis System of Partnering Organizations Readiness, a comprehensive framework designed to facilitate effective collaboration in international/local peacebuilding partnerships by evaluating the readiness of both potential partner organisations and the location of the proposed project. The system employs a colour-coded approach, categorising potential partnerships into three distinct indicators: Red (no-go), Yellow (requires further research), and Green (promising, go ahead). Within each category, specific points are identified for assessment, guiding decision-makers in evaluating the feasibility and potential success of collaboration. The Red category signals significant barriers, prompting an immediate halt to consideration of the partnership. The Yellow category encourages deeper investigation to determine whether potential issues can be mitigated, while the Green category signifies organisations deemed ready for collaboration. This systematic and structured approach empowers decision-makers to make informed choices, enhancing the likelihood of successful and mutually beneficial partnerships. Methodologically, this paper utilised interviews with peacebuilders from around the globe, scholarly research on extant strategies, and a collaborative review of programming by the project’s two authors from their own time in the field.
This method has been employed as a formalised model for the past two years across a range of partnership considerations and has been adjusted in light of its field experimentation. This research holds significant importance in the field of conflict resolution as it provides a systematic and structured approach to evaluating peacebuilding partnerships. In conflict-affected regions, where the dynamics are complex and challenging, the Stoplight Analysis System offers decision-makers a practical tool to assess the readiness of partnering organisations. This approach can enhance the efficiency of conflict resolution efforts by ensuring that resources are directed towards partnerships with a higher likelihood of success, ultimately contributing to more effective and sustainable peacebuilding outcomes.
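The colour-coded decision rule described above can be sketched as a minimal program. The assessment-point names and the aggregation rule (any hard barrier yields Red, any unresolved point yields Yellow) are hypothetical illustrations, not the authors' actual criteria:

```python
from enum import Enum

class Light(Enum):
    RED = "no-go"
    YELLOW = "requires further research"
    GREEN = "promising, go ahead"

def assess(points: dict) -> Light:
    """Map assessment points to a stoplight rating.

    Each point is True (satisfied), False (significant barrier),
    or None (unknown, needs further research). Illustrative rule:
    any barrier -> RED; otherwise any unknown -> YELLOW; else GREEN.
    """
    values = list(points.values())
    if any(v is False for v in values):
        return Light.RED
    if any(v is None for v in values):
        return Light.YELLOW
    return Light.GREEN

# Hypothetical partnership assessment: one point still unresolved.
rating = assess({
    "local_partner_capacity": True,
    "security_conditions": None,  # needs further research
    "funding_alignment": True,
})
print(rating)  # Light.YELLOW
```

A Yellow rating here routes the candidate partnership back into deeper investigation rather than an immediate go/no-go decision, mirroring the framework's intent.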

Keywords: collaboration, conflict resolution, partnerships, peacebuilding

Procedia PDF Downloads 54
2795 Climate Change, Women's Labour Markets and Domestic Work in Mexico

Authors: Luis Enrique Escalante Ochoa

Abstract:

This paper attempts to assess the impacts of climate change (CC) on inequalities in the labour market. CC will have the most serious effects on some vulnerable economic sectors, such as agriculture, livestock or tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market and particularly on Mexican women. Influential documents, such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014, revived a global effort to counteract the effects of CC, called for an analysis of the impacts on vulnerable socio-economic groups and on economic activities, and called for the development of decision-making tools to enable policy and other decisions based on the complexity of the world in relation to climate change, taking into account socio-economic attributes. We follow up on this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes (gender and workers' level of qualification). Most studies have focused on the effects of CC on the agricultural sector, as it is considered an economic sector highly vulnerable to the effects of climate variability. This research seeks to contribute to the existing literature by taking into account, in addition to the agricultural sector, other sectors such as tourism, water availability, and energy that are of vital importance to the Mexican economy. Likewise, the effects of climate change will be extended to the labour market and specifically to women, who in some cases have been left out. Some studies are sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which are too often omitted from analyses. This work will contribute to the literature by integrating domestic work, which in the case of Mexico is much higher among women than among men (80.9% vs.
19.1%), according to the 2009 time use survey. This study is relevant since it will allow us to analyse the impacts of climate change not only in the labour market of the formal economy but also in the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy, as it is a country with high degrees of gender inequality in the labour market. The OECD economic study for Mexico (2017) highlights the low labour participation of Mexican women. Although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared to the OECD average, where women's labour market participation is around 70%. According to Mexico's 2009 time use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them, is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.

Keywords: climate change, labour market, domestic work, rural sector

Procedia PDF Downloads 120
2794 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to make a choice between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was made while maintaining in working memory a list of five random letters. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choice of the gamble. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gamble. However, when under cognitive load, participants with high financial literacy were as likely as their less financially literate counterparts to choose the gamble.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-averse reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gamble in both experimental conditions. These results are discussed in light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some instances of expertise (such as numeracy) are prone to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed, with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 166
2793 Highlighting Strategies Implemented by Migrant Parents to Support Their Child's Educational and Academic Success in the Host Society

Authors: Josee Charette

Abstract:

The academic and educational success of migrant students is a current issue in education, especially in western societies such as the province of Quebec, in Canada. For people who immigrate with school-age children, the success of the family’s migratory project is often measured by the benefits their children draw from the educational institutions of the host society. In order to support the academic achievement of their children, migrant parents try to develop practices that derive from their representations of school and related challenges, inspired by the socio-cultural context of their country of origin. These findings lead us to the following question: How do the strategies implemented by migrant parents to manage the representational distance between the school of their country of origin and the school of their host society support, or not, the academic and educational success of their child? In the context of a qualitative exploratory approach, we conducted interviews in French, English and Spanish with 32 newly immigrated parents and 10 of their children. Parents were invited to complete a network of free associations about «School in Quebec» as a premise for the interview. The objective of this paper is to present the strategies implemented by migrant parents to manage the distance between their representations of schools in their country of origin and in the host society, and to explore the influence of this management on their child’s academic and educational trajectories. Data analysis led us to identify various types of strategies, such as continuity, adaptation, resource mobilization, compensation and "return to basics" strategies. These strategies seem to be part of a continuum from an oppositional-conflict scenario, in which parental strategies act as a risk factor, to a conciliator-integrator scenario, in which parental strategies act as a protective factor for migrant students’ academic and educational success.
In conclusion, we believe that our research helps highlight the strategies implemented by migrant parents to support their child’s academic and educational success in the host society, helps provide more efficient support to migrant parents, and contributes to developing a wider portrait of migrant students’ academic achievement.

Keywords: academic and educational achievement of immigrant students, family’s migratory project, immigrants parental strategies, representational distance between school of origin and school of host society

Procedia PDF Downloads 438