Search results for: spectroscopic line imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4041

441 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction

Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari

Abstract:

A cylindrical alumina microfiltration membrane (GMITM Corporation; inside diameter = 9 mm, outside diameter = 13 mm, length = 50 mm) with an average pore size of 0.5 micrometer and a porosity of about 0.35 was used as the support for the membrane reactor. This support was soaked in boehmite sols whose mean particle size was adjusted in the range of 50 to 500 nm by carefully controlling the hydrolysis time, and then calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin, dense layer of silica by a counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% nickel, prepared by a standard procedure, was used to make the catalytic layer. BET, SEM, and XRD analyses were used to characterize this layer. The catalytic membrane reactor was placed in an experimental setup to evaluate its permeation and hydrogen separation performance in a steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, with the reforming reaction occurring on the inner side of the membrane. A methane stream diluted with nitrogen, together with deionized water at a steam-to-carbon (S/C) ratio of 3.0, entered the reactor after the reactor had been heated to 500 °C at a rate of 2 °C/min and the catalytic layer had been reduced in the presence of hydrogen for 2.5 hours. A nitrogen flow was used as the sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the produced gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure total CH4 conversion and H2 permeation. BET analysis indicated a uniform pore size distribution for the catalyst, with an average pore size of 280 nm and an average surface area of 275 m²·g⁻¹. 
Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide over a temperature range of 500-800 °C, and the results showed almost the same hydrogen permeance and selectivity values as for the composite membrane without the catalytic layer. The performance of the catalytic membrane was evaluated by applying the membranes as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h⁻¹ and 2 bar. CH4 conversion increased from 50% to 85% as the reaction temperature rose from 600 °C to 750 °C, which is well above the equilibrium curve at the reaction conditions, but slightly lower than that of a membrane reactor with a packed nickel catalyst bed, whose surface area is higher than that of the catalytic layer.

Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance

Procedia PDF Downloads 247
440 Pooled Analysis of Three School-Based Obesity Interventions in a Metropolitan Area of Brazil

Authors: Rosely Sichieri, Bruna K. Hassan, Michele Sgambato, Barbara S. N. Souza, Rosangela A. Pereira, Edna M. Yokoo, Diana B. Cunha

Abstract:

Obesity is increasing at a fast rate in low- and middle-income countries, where few school-based obesity interventions have been conducted. Results of obesity prevention studies are still inconclusive, mainly due to underestimation of sample size in cluster-randomized trials and overestimation of changes in body mass index (BMI). The pooled analysis in the present study overcomes these design problems by analyzing 4,448 students (mean age 11.7 years) from three randomized behavioral school-based interventions conducted in public schools of the metropolitan area of Rio de Janeiro, Brazil. The three studies focused on encouraging students to change their drinking and eating habits over one school year, with monthly 1-h sessions in the classroom. Folders explaining the intervention program and suggesting family participation, such as reducing the purchase of sodas, were sent home. Classroom activities were delivered by research assistants in the first two interventions and by the regular teachers in the third one, except for a culinary class aimed at developing cooking skills to increase healthy eating choices. The first intervention was conducted in 2005 with 1,140 fourth graders from 22 public schools; the second, with 644 fifth graders from 20 public schools in 2010; and the last one, with 2,743 fifth and sixth graders from 18 public schools in 2016. The result was a non-significant change in BMI after one school year, despite positive changes in dietary behaviors associated with obesity. Pooled intention-to-treat analysis using linear mixed models was used for the overall analysis and for subgroup analyses by BMI status, sex, and race. The estimated mean BMI changed from 18.93 to 19.22 in the control group and from 18.89 to 19.19 in the intervention group, with a p-value for change over time of 0.94. Control and intervention groups were balanced at baseline. 
Subgroup analyses were statistically and clinically non-significant, except for the non-overweight/obese group, which showed a 0.05 kg/m² reduction in BMI in the intervention arm compared with control. In conclusion, this large pooled analysis showed a very small effect on BMI, and only in normal-weight students. The results are in line with many school-based initiatives that have been promising in modifying behaviors associated with obesity but have had no impact on excessive weight gain. Changes in BMI may require large changes in energy balance that are hard to achieve in primary prevention at the school level.
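The near-zero between-group contrast reported in this abstract can be illustrated with a simple difference-in-changes calculation on the published group means (a back-of-the-envelope sketch only; the study itself fitted linear mixed models to individual student data):

```python
# Difference-in-changes on the reported group means (illustrative only;
# the actual analysis used pooled linear mixed models).
control_baseline, control_followup = 18.93, 19.22
interv_baseline, interv_followup = 18.89, 19.19

control_change = control_followup - control_baseline  # 0.29 kg/m^2
interv_change = interv_followup - interv_baseline     # 0.30 kg/m^2

# Net intervention effect on mean BMI over one school year
net_effect = interv_change - control_change
print(round(net_effect, 2))  # 0.01 kg/m^2, consistent with p = 0.94
```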

Keywords: adolescents, obesity prevention, randomized controlled trials, school-based study

Procedia PDF Downloads 147
439 A Comparison between TM: TM Co Doped and TM: RE Co Doped ZnO Based Advanced Materials for Spintronics Applications; Structural, Optical and Magnetic Property Analysis

Authors: V. V. Srinivasu, Jayashree Das

Abstract:

Owing to its industrial and technological importance, transition metal (TM) doped ZnO has been widely chosen for many practical applications in electronics and optoelectronics. Moreover, though still a controversial issue, the reported room-temperature ferromagnetism in transition metal doped ZnO has added to its importance in current semiconductor research for prospective applications in spintronics. Anticipating non-controversial and improved optical and magnetic properties, we adopted a co-doping method to synthesize polycrystalline Mn:TM (Fe, Ni) and Mn:RE (Gd, Sm) co-doped ZnO samples by a solid-state sintering route, with compositions Zn1-x(Mn:Fe/Ni)xO and Zn1-x(Mn:Gd/Sm)xO, sintered at two different temperatures. The structural, compositional, and optical changes induced in ZnO by co-doping and sintering were investigated by XRD, FTIR, UV, PL, and ESR studies. X-ray peak profile analysis (XPPA) and Williamson-Hall analysis show changes in the values of stress, strain, FWHM, and crystallite size in both co-doped systems. FTIR spectra also show the effect of both types of co-doping on the stretching and bending bonds of the ZnO compound. The UV-Vis study demonstrates changes in the absorption band edge as well as a significant change in the optical band gap due to exchange interactions inside the system after co-doping. PL studies reveal the effect of co-doping on the UV and visible emission bands in the co-doped systems at two different sintering temperatures, indicating the existence of defects in the form of oxygen vacancies. While the TM:TM co-doped samples of ZnO exhibit ferromagnetism at room temperature, the TM:RE co-doped samples show paramagnetic behaviour. The magnetic behaviours observed are supported by results from the electron spin resonance (ESR) study, which shows sharp resonance peaks with considerable line width (∆H) and g values greater than 2. 
Such values are usually attributed to an internal field inside the system, which shifts the resonance field towards lower fields. The g values in this range are assigned to unpaired electrons trapped in oxygen vacancies. TM:TM co-doped ZnO samples exhibit low-field absorption peaks in their ESR spectra, which is a new and interesting observation. We emphasize that the observations reported in this paper may be considered for improved future applications of ZnO-based materials.

Keywords: co-doping, electron spin resonance, microwave absorption, spintronics

Procedia PDF Downloads 324
438 Photocatalytic Disintegration of Naphthalene and Naphthalene-Like Compounds in Indoor Air

Authors: Tobias Schnabel

Abstract:

Naphthalene and naphthalene-like compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roofing felt was often laid under the concrete floor to prevent humidity from coming through the floor. This tar-containing roofing felt has high concentrations of PAHs (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene easily evaporates and contaminates the indoor air. Especially after renovations and energy-efficiency modernization of the buildings, the naphthalene concentration rises because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (materials research and testing facility), in cooperation with LEJ GmbH and Reichmann Gebäudetechnik GmbH, developed a technical solution for the disintegration of naphthalene and naphthalene-like compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors with light at the wavelength of their band gap. The light energy separates charges in the semiconductor, producing free electrons in the conduction band and electron holes. The holes can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are a strong oxidizing agent and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows the production of a very porous titanium dioxide layer on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the reaction of the heterogeneous photocatalysis. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. 
Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy of 6 kWh is needed. The benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage to use in real buildings.

Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis

Procedia PDF Downloads 134
437 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where the use of interior-point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required by iterative algorithms for solving a system of linear equations. 
This extends the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be used efficiently in different solution schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, corresponding to choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
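For readers unfamiliar with the baseline methods named above, a minimal cyclic Kaczmarz iteration (one of the classical techniques the authors compare against) can be sketched as follows; the test matrix and identifiers are illustrative, not taken from the paper:

```python
import numpy as np

def kaczmarz(A, b, sweeps=500):
    """Cyclic Kaczmarz iteration: project the current iterate onto the
    hyperplane a_i . x = b_i defined by each row of A in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

# Small, well-conditioned symmetric positive-definite test system.
# On ill-conditioned matrices (e.g. Hilbert matrices) the same iteration
# stalls, which is what motivates preconditioning with an approximate
# generalized inverse as proposed in the abstract.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = kaczmarz(A, A @ x_true)
print(np.allclose(x, x_true, atol=1e-8))  # True
```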

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 121
436 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories

Authors: Mojtaba Taheri, Saied Reza Ameli

Abstract:

In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compare this effect with the pollutant-reduction process described by the Environmental Kuznets Curve (EKC) theory. For this purpose, data on real GDP per capita at constant 2010 prices are taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and the data are retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. The Energy Consumption Structure (ECS) is counted as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in one country at a specific time; it is a comprehensive indicator of the country's energy production capacity, and its data, like the Energy Consumption Structure, are obtained from the EIA (2021 version). Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent is based on stock market value, also as a ratio to GDP; it is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, from the WDI (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021 version). 
Descriptive statistics for all the investigated variables are presented in the results section. Among the theories of sustainable development considered, the Environmental Kuznets Curve (EKC) is the most significant over the study period. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each approach and examine the results.
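The EKC hypothesis behind these regressions posits an inverted-U relationship between income and environmental pressure. A minimal illustration of the standard quadratic specification and its turning point (synthetic data and hypothetical coefficients; the paper's actual specification and panel estimators are not reproduced here) is:

```python
import numpy as np

# Synthetic inverted-U (EKC-style) relation: environmental pressure rises
# with income, peaks, then falls. Coefficients chosen for illustration only.
gdp = np.linspace(1.0, 50.0, 200)             # income per capita (arbitrary units)
footprint = -0.02 * gdp**2 + 1.2 * gdp + 3.0  # quadratic EKC shape, no noise

# Fit footprint = b2*gdp^2 + b1*gdp + b0 by least squares
b2, b1, b0 = np.polyfit(gdp, footprint, 2)

# Turning point of the inverted U: income level where d(footprint)/d(gdp) = 0
turning_point = -b1 / (2.0 * b2)
print(round(turning_point, 1))  # 30.0 (= -1.2 / (2 * -0.02))
```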

Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty

Procedia PDF Downloads 60
435 A Study on the Current State and Policy Implications of Engineer Operated National Research Facility and Equipment in Korea

Authors: Chang-Yong Kim, Dong-Woo Kim, Whon-Hyun Lee, Yong-Joo Kim, Tae-Won Chung, Kyung-Mi Lee, Han-Sol Kim, Eun-Joo Lee, Euh Duck Jeong

Abstract:

In the past, together with the annual increase in investment in national R&D projects, the government’s budget investment in research facilities and equipment (FE) has been steadily maintained. In major developed countries, R&D and its supporting work are distinguished and professionalized in their own right, with training systems for facility and equipment operation and maintenance personnel. In Korea, however, research personnel conduct both research and equipment operation, leading to quantitative shortages of operational manpower and to qualitative problems, such as maintenance issues or loss of equipment effectiveness, stemming from insecure employment. Therefore, the purpose of this study was to identify the current status of engineer-operated national research FE in Korea, based on the results of a 2017 survey of domestic facilities, and to suggest policy implications. A total of 395 research institutes that carried out national R&D projects and had registered more than two items of FE since 2005 were surveyed on-line for two months. The survey showed that the 395 non-profit research facilities were operating 45,155 pieces of equipment with 2,211 operating engineers, meaning that each engineer had to manage about 21 items of FE. Among these engineers, 43.9% were employed in temporary positions, including indefinite-term contracts. Furthermore, the salary and treatment of the engineering personnel were relatively low compared to researchers. In short, engineers who focus exclusively on managing and maintaining FE play a very important role in increasing research immersion and obtaining highly reliable research results. However, institutional efforts and government support for securing operators are severely lacking, as domestic national R&D policies are mostly focused on researchers. The 2017 survey on FE also showed that 48.1% of all research facilities did not employ engineers at all. 
To address the shortage of engineering personnel, the government started a pilot project in 2012, followed by the 'research equipment engineer training project' from 2013. Considering the above, a national long-term manpower training plan that addresses the quantitative and qualitative shortage of operators needs to be established through a study of the current situation. In conclusion, the findings indicate that this should include not only a plan that connects training to employment, but also measures for the creation of additional jobs by re-defining and re-establishing operator roles and improving working conditions.

Keywords: engineer, Korea, maintenance, operation, research facilities and equipment

Procedia PDF Downloads 176
434 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly

Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni

Abstract:

The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and its consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy), and (c) chronotype. All study participants were moderately morning type according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. The speed of motor sequence performance in both groups improved to a similar extent during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels, with no significant gains or losses relative to the 24h test. Changes in the tapping patterns at 24h and 1 week post-training were assessed based on Pearson correlation coefficients, normalized using Fisher’s z-transformation, in reference to the tapping pattern attained at the end of the training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. 
However, time of training affected the time course of skill evolution in terms of performance speed, as well as the re-organization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may be beneficial for long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
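The normalization step mentioned above (Fisher's z-transformation, which maps Pearson correlations onto a scale where they are approximately normally distributed and can be compared with parametric statistics) can be sketched as follows; the correlation values are illustrative, not the study's data:

```python
import math

def fisher_z(r):
    """Fisher's z-transformation: z = arctanh(r) = 0.5 * ln((1+r)/(1-r))."""
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

# Illustrative correlations of a participant's tapping pattern at 24 h and
# at 1 week with the end-of-training pattern (made-up numbers).
r_24h, r_1week = 0.85, 0.70
z_24h, z_1week = fisher_z(r_24h), fisher_z(r_1week)

# On the z scale, differences between time points can be compared across
# participants and groups.
print(round(z_24h - z_1week, 3))  # 0.389
```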

Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype

Procedia PDF Downloads 124
433 Building up Regional Innovation Systems (RIS) for Development: The Case Study of the State of Mexico, México

Authors: Jose Luis Solleiro, Rosario Castanon, Laura Elena Martinez

Abstract:

The State of Mexico is an administrative entity of Mexico and one of its most important territories in terms of economic and social impact for the whole country, contributing more than eight percent of the national Gross Domestic Product (GDP). The State of Mexico has a population of over seventeen million people and hosts very important business and productive industries such as automotive, chemicals, pharmaceuticals, and agri-food. In 2017, the State Development Plan (Plan Estatal de Desarrollo in Spanish), a policy document that governs the State's economic actions and lays the foundations for sectoral and regional programs to achieve regional development, raised innovation as a key aspect to boost the competitiveness and productivity of the State of Mexico. In line with this proposal, in 2018 the Mexican Council for Science and Technology (COMECYT, for its acronym in Spanish), the institution in charge of promoting public science and technology policies in the State of Mexico, took actions towards building up the State's innovation system. Hence, the main objective of this paper is to review and analyze the process of creating a RIS in the State of Mexico. We focus on the key elements of the process, the diverse actors involved in it, the activities that were carried out, and the identification of the challenges, findings, successes, and failures of the exercise. The methodology used to analyze the structure of the innovation system of the State of Mexico is based on two elements: a case study and a research-action approach. The case study was based on semi-structured interviews with key actors who participated in the process of launching the RIS of the State of Mexico. Additionally, we analyzed the reports and other documents produced during the process of shaping the State's innovation system. 
Finally, the results obtained in the process were also examined. The relevance of this investigation rests fundamentally on two elements: 1) keeping a documentary record of the process of building a RIS in Mexico; and 2) analyzing this case study while recognizing the importance of knowledge extraction and dissemination, so that lessons on this matter may be useful for similar experiences in the future. We conclude that in Mexico, documentation and analysis efforts related to the formation of RIS and the interaction processes between innovation ecosystem actors are scarce, so documents like this one are of great importance, especially since they generate a series of findings and recommendations for the building of RIS.

Keywords: regional innovation systems, innovation, development, competitiveness

Procedia PDF Downloads 107
432 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology

Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody

Abstract:

Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, to serve as an insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. 
The stable nanoparticle delivery system offered by BenPol technology can protect dsRNA molecules and increase their inherent stability at higher temperatures in a complex biological fluid such as serum, showing promise for future use in enhancing animal protection.

Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina

Procedia PDF Downloads 78
431 Students' Experience Enhancement through Simulation: A Process Flow in the Logistics and Transportation Field

Authors: Nizamuddin Zainuddin, Adam Mohd Saifudin, Ahmad Yusni Bahaudin, Mohd Hanizan Zalazilah, Roslan Jamaluddin

Abstract:

Students’ enhanced experience through simulation is a crucial factor that brings reality into the classroom. The enhanced experience is about developing, enriching, and applying a generic process flow in the field of logistics and transportation. As educational technology has improved, the effective use of simulations has greatly increased, to the point where simulations should be considered a valuable, mainstream pedagogical tool. Additionally, in this era of ongoing (some say never-ending) assessment, simulations offer a rich resource for objective measurement and comparison. Simulation is not just another in the long line of passing fads (or short-term opportunities) in educational technology; rather, it is a real key to helping our students understand the world. It is a way for students to acquire experience of how things and systems in the world behave and react, without actually touching them. In short, it is about interactive pretending. Simulation is all about representing the real world, which includes grasping complex issues and solving intricate problems. Therefore, before simulating the real process of inbound and outbound logistics and transportation, a generic process flow must first be developed. This paper focuses on the validation of the process flow by examining the inputs gained from the sample. The sampling of the study covers multinational and local manufacturing companies, third-party logistics companies (3PL), and a government agency, selected in Peninsular Malaysia. A simulation flow chart is proposed in the study as the generic flow in logistics and transportation. A mainly qualitative approach was used to gather data. The study found that the systems used in the outbound and inbound processes are SAP (Systems, Applications and Products) and Material Requirements Planning (MRP). 
Furthermore, some companies used Enterprise Resource Planning (ERP) and Electronic Data Interchange (EDI) as part of Suppliers' Own Inventories (SOI) networking, a result of globalized business between countries. Computerized documentation and transactions were mandatory requirements of the Royal Customs and Excise Department. The generic process flow will be the basis for developing a simulation program to be used in the classroom, with the objective of further enhancing the students' learning experience. It thus contributes to the body of knowledge on enriching students' employability and is also one way to train new workers in the logistics and transportation field.

Keywords: enhancement, simulation, process flow, logistics, transportation

Procedia PDF Downloads 320
430 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information about small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once the algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. 
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset and then generating new architectural information from historical references. We find that the ability to creatively understand and manipulate historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.
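The staged procedure described above (line profile to front view, front view to isometric, isometric to top view) amounts to chaining image-to-image models. The sketch below illustrates only that chaining; the three stage functions are hypothetical stand-ins, not the authors' trained GANs.

```python
# Sketch of the staged generation pipeline described in the abstract.
# Each stage would in practice be a trained image-to-image model (e.g. a
# pix2pix-style GAN); here they are stubs that tag the representation so
# the chaining is explicit.

def front_view_model(line_profile):
    # Stage 1: synthesize a front elevation from a line profile.
    return {"view": "front", "source": line_profile}

def isometric_model(front_view):
    # Stage 2: synthesize an isometric view from the front elevation.
    return {"view": "isometric", "source": front_view}

def top_view_model(isometric_view):
    # Stage 3: synthesize a plan (top) view from the isometric view.
    return {"view": "top", "source": isometric_view}

def generate_pavilion(line_profile):
    """Run the full chain and return all three synthetic views."""
    front = front_view_model(line_profile)
    iso = isometric_model(front)
    top = top_view_model(iso)
    return front, iso, top

front, iso, top = generate_pavilion("line-profile-01")
```

Each stage consumes the previous stage's output, which is what allows the final top view to be traced back to the original profile.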

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 130
429 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, poor or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes more and more important as transistor sizes shrink, and it mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for Non-Destructive Testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the wafer frontside. In that case, the acoustic signal is reflected at both the bottom and the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. 
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo obtained through the reflection at Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water / ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in DTI). The impalement of the liquid occurs for a specific surface tension but it is still partial for pure ethanol. DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. This high-frequency acoustic method sensitivity coupled with a FDTD propagative model thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
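The physical basis for detecting wetting at the trench bottom can be illustrated with the normal-incidence pressure reflection coefficient R = (Z2 - Z1)/(Z2 + Z1), where Z is the acoustic impedance of each medium: a dry (air-filled) trench reflects almost totally, while a water-filled one reflects measurably less. The impedance values below are textbook approximations, not the authors' measured data.

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence pressure reflection coefficient between two media."""
    return (z2 - z1) / (z2 + z1)

# Approximate acoustic impedances in MRayl (density x longitudinal velocity).
Z_SILICON = 19.8   # ~2330 kg/m^3 * ~8433 m/s
Z_WATER = 1.48
Z_AIR = 0.0004

# Dry (non-wetted) trench bottom: silicon/air interface -> near-total reflection.
r_dry = reflection_coefficient(Z_SILICON, Z_AIR)
# Wetted trench bottom: silicon/water interface -> noticeably weaker reflection.
r_wet = reflection_coefficient(Z_SILICON, Z_WATER)

# Wavelength in silicon at the 5 GHz operating frequency (~1.7 um).
wavelength_um = 8433.0 / 5e9 * 1e6
```

The contrast between `r_dry` and `r_wet` is what makes the reflected echo amplitude a probe of the liquid's presence at the bottom of the DTI.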

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 321
428 Nursing Professionals’ Perception of the Work Environment, Safety Climate and Job Satisfaction in the Brazilian Hospitals during the COVID-19 Pandemic

Authors: Ana Claudia de Souza Costa, Beatriz de Cássia Pinheiro Goulart, Karine de Cássia Cavalari, Henrique Ceretta Oliveira, Edineis de Brito Guirardello

Abstract:

Background: During the COVID-19 pandemic, nursing represented the largest category of health professionals on the front line. Investigating the practice environment and the job satisfaction of nursing professionals during the pandemic is therefore fundamental, since both reflect on the quality of care and the safety climate. The aim of this study was to evaluate and compare nursing professionals' perception of the work environment, job satisfaction, and safety climate across different hospitals and work shifts during the COVID-19 pandemic. Method: This is a cross-sectional survey of 130 nursing professionals from public, private and mixed hospitals in Brazil. For data collection, an electronic form was used containing personal and occupational variables, work environment, job satisfaction, and safety climate. The data were analyzed using descriptive statistics and ANOVA or Kruskal-Wallis tests according to the data distribution, which was evaluated by means of the Shapiro-Wilk test. The analysis was done in SPSS 23, with a significance level of 5%. Results: The mean age of the participants was 35 years (±9.8), with a mean of 6.4 years (±6.7) of working experience in the institution. Overall, the nursing professionals evaluated the work environment as favorable; they were dissatisfied with their job in terms of pay, promotion, benefits, contingent rewards, and operating procedures; satisfied with coworkers, nature of work, supervision, and communication; and had a negative perception of the safety climate. When comparing the hospitals, it was found that they did not differ in their perception of the work environment and safety climate. 
However, they differed with regard to job satisfaction: nursing professionals from public hospitals were more dissatisfied with promotion than professionals from private (p=0.02) and mixed hospitals (p<0.01), and nursing professionals from mixed hospitals were more satisfied than those from private hospitals (p=0.04) with regard to supervision. Participants working night shifts had the worst perception of the work environment in terms of nurse participation in hospital affairs (p=0.02), nursing foundations for quality care (p=0.01), nurse manager ability, leadership and support (p=0.02), and safety climate (p<0.01), as well as of job satisfaction related to contingent rewards (p=0.04), nature of work (p=0.03) and supervision (p<0.01). Conclusion: The nursing professionals had a favorable perception of the environment and safety climate but differed among hospitals regarding job satisfaction in the promotion and supervision domains. There was also a difference between work shifts, with night shifts scoring lowest, except for satisfaction with operational conditions.
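The authors ran their tests in SPSS 23. Purely as an illustration of the Kruskal-Wallis test used for non-normal data above, the following is a minimal pure-Python computation of the H statistic (without the tie correction a full implementation would apply, and on made-up numbers rather than the survey data):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction; for illustration)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign each distinct value the mean of the ranks it occupies
    # (midranks handle tied observations).
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    h = 0.0
    for g in groups:
        r_sum = sum(ranks[x] for x in g)
        h += r_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Clearly separated groups give a large H; identical groups give H near 0.
h_sep = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
h_same = kruskal_wallis_h([1, 2, 3], [1, 2, 3])
```

A large H (relative to the chi-squared reference distribution) indicates that at least one group's distribution differs, which is how the hospital and shift comparisons above are read.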

Keywords: health facility environment, job satisfaction, patient safety, nursing

Procedia PDF Downloads 142
427 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have sought to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes automatic quantification of mammographic breast density difficult. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty mediolateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images, using image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by an active contour method whose seed was placed at the boundary found by the Hough transform. 
An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, reinforcing the agreement of the automatic segmentation with the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
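The Jaccard index used to compare the manual and automatic segmentations is the ratio of the intersection to the union of the two binary masks. A minimal sketch on small hypothetical masks (not the study's mammograms):

```python
def jaccard_index(mask_a, mask_b):
    """Jaccard similarity between two binary masks (lists of 0/1 rows)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += 1 if (a and b) else 0
            union += 1 if (a or b) else 0
    # Two empty masks are conventionally treated as identical.
    return inter / union if union else 1.0

# Hypothetical 3x3 masks: the "automatic" mask misses one manual pixel.
manual    = [[1, 1, 0],
             [1, 1, 0],
             [0, 0, 0]]
automatic = [[1, 1, 0],
             [1, 0, 0],
             [0, 0, 0]]
score = jaccard_index(manual, automatic)
```

Here the intersection is 3 pixels and the union is 4, giving a score of 0.75; the study's criterion of >90% corresponds to scores above 0.90 on full-resolution masks.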

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 338
426 Deregulation of Thorium for Room Temperature Superconductivity

Authors: Dong Zhao

Abstract:

Extensive research on obtaining applicable room temperature superconductors has met a major barrier: the record Tc of 135 K achieved with cuprates has been idling for decades. Higher Tc than the cuprates has been accomplished by pressurizing certain compounds composed of light elements, such as LaH10 and metallic hydrogen; nevertheless, room temperature superconductivity under ambient pressure is still the preferred approach and is believed to be the ultimate solution for many applications. While racing to find a breakthrough method to achieve this room temperature Tc milestone in superconductivity research, a report stated the discovery of a possible high-temperature superconductor, the thorium sulfide ThS. Apparently, ThS's Tc can be at room temperature or even higher, because ThS revealed an unusual property: the coexistence of high electrical conductivity and diamagnetism. Note that this coexistence is in line with superconductors, meaning ThS is also in its superconducting state. Surprisingly, ThS appears to possess superconductivity at least at room temperature and under atmospheric pressure. Further study of ThS's electrical and magnetic properties, in comparison with thorium di-iodide ThI2, concluded its molecular configuration to be [Th4+(e-)2]S. This means the ThS cation is composed of a [Th4+(e-)2]2+ cation core, built from a +4 oxidation state of the thorium atom plus an electron pair on this thorium atom, resulting in a +2 oxidation state of the [Th4+(e-)2]2+ cation core. This special construction of the [Th4+(e-)2]2+ cation core may lead to ThS's room temperature superconductivity because of the characteristic electron lone pair residing on the thorium atom. Since the study of thorium chemistry was largely carried out before the 1970s, the exploration of ThS's possible room temperature superconductivity would require resynthesizing ThS. This re-preparation of ThS will provide the sample and enable professionals to verify ThS's room temperature superconductivity. Regrettably, current regulation prevents almost everyone from getting access to thorium metal or thorium compounds due to the radioactive nature of thorium-232 (Th-232), even though the radioactivity of Th-232 is extremely low, with its half-life of 14.05 billion years. Consequently, further experimental confirmation of ThS's high-temperature superconductivity will be impossible unless the use of thorium metal and related thorium compounds is deregulated. This deregulation would allow researchers to obtain the necessary starting materials for the study of ThS. Hopefully, confirmation of ThS's room temperature superconductivity can not only establish a method for obtaining applicable superconductors but also pave the way to fully understanding the mechanism of superconductivity.

Keywords: co-existence of high electrical conductivity and diamagnetism, electron pairing and electron lone pair, room temperature superconductivity, the special molecular configuration of thorium sulfide ThS

Procedia PDF Downloads 36
425 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). Application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of Fe and Fe/Ni line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots on the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. However, the flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, which is very much expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities presented in this work are well observable in both the SXR and HXR channels. 
These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered important for the formation and evolution of life on Earth, and therefore have astrobiological significance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP, affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper consists of material from a pilot project and the research part of the M.Tech program, carried out during the Space and Atmospheric Science Course.
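For readers unfamiliar with the Lomb-Scargle periodogram used above, the sketch below implements the classic formulation for unevenly sampled data and recovers a 27-day period from a synthetic sinusoid, reminiscent of the solar rotation signal. The data are illustrative only, not SOXS observations.

```python
import math

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data."""
    y_mean = sum(y) / len(y)
    yc = [v - y_mean for v in y]
    powers = []
    for f in freqs:
        w = 2 * math.pi * f
        # The time offset tau makes the periodogram phase-invariant.
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        c_num = sum(v * c for v, c in zip(yc, cs)) ** 2
        s_num = sum(v * s for v, s in zip(yc, sn)) ** 2
        powers.append(0.5 * (c_num / sum(c * c for c in cs)
                             + s_num / sum(s * s for s in sn)))
    return powers

# Irregularly sampled 27-day sinusoid over ~200 days.
t = [i + 0.4 * math.sin(i) for i in range(200)]
y = [math.sin(2 * math.pi * ti / 27.0) for ti in t]
periods = list(range(10, 51))
power = lomb_scargle(t, y, [1.0 / p for p in periods])
best_period = periods[power.index(max(power))]
```

The periodogram peaks at the true 27-day period despite the uneven sampling, which is why the technique suits daily indices with gaps.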

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 335
424 Substitutional Inference in Poetry: Word Choice Substitutions Craft Multiple Meanings by Inference

Authors: J. Marie Hicks

Abstract:

The art of the poetic conjoins meaning and symbolism with imagery and rhythm. Perhaps the reader might read this opening sentence as 'The art of the poetic combines meaning and symbolism with imagery and rhythm,' which holds a similar message, but is not quite the same. The reader understands that these factors are combined in this literary form, but to gain a sense of the conjoining of these factors, the reader is forced to consider that these aspects of poetry are not simply combined, but actually adjoin, abut, skirt, or touch in the poetic form. This alternative word choice is an example of substitutional inference. Poetry is, ostensibly, a literary form where language is used precisely or creatively to evoke specific images or emotions for the reader. Often, the reader can predict a coming rhyme or descriptive word choice in a poem, based on previous rhyming pattern or earlier imagery in the poem. However, there are instances when the poet uses an unexpected word choice to create multiple meanings and connections. In these cases, the reader is presented with an unusual phrase or image, requiring that they think about what that image is meant to suggest, and their mind also suggests the word they expected, creating a second, overlying image or meaning. This is what is meant by the term 'substitutional inference.' This is different than simply using a double entendre, a word or phrase that has two meanings, often one complementary and the other disparaging, or one that is innocuous and the other suggestive. In substitutional inference, the poet utilizes an unanticipated word that is either visually or phonetically similar to the expected word, provoking the reader to work to understand the poetic phrase as written, while unconsciously incorporating the meaning of the line as anticipated. 
In other words, by virtue of a word substitution, an inference of the logical word choice is imparted to the reader, while they are seeking to rationalize the word that was actually used. There is a substitutional inference of meaning created by the alternate word choice. For example, Louise Bogan, 4th Poet Laureate of the United States, used substitutional inference in the form of homonyms, malapropisms, and other unusual word choices in a number of her poems, lending depth and greater complexity, while actively engaging her readers intellectually with her poetry. Substitutional inference not only adds complexity to the potential interpretations of Bogan’s poetry, as well as the poetry of others, but provided a method for writers to infuse additional meanings into their work, thus expressing more information in a compact format. Additionally, this nuancing enriches the poetic experience for the reader, who can enjoy the poem superficially as written, or on a deeper level exploring gradations of meaning.

Keywords: poetic inference, poetic word play, substitutional inference, word substitution

Procedia PDF Downloads 220
423 Room Temperature Electron Spin Resonance and Raman Study of Nanocrystalline Zn(1-x)Cu(x)O (0.005 < x < 0.05) Synthesized by Pyrophoric Method

Authors: Jayashree Das, V. V. Srinivasu, D. K. Mishra, A. Maity

Abstract:

Owing to their important potential applications, transition metal (TM: Mn, Fe, Ni, Cu, Cr, V, etc.) doped ZnO-based diluted magnetic semiconductors (DMS) have attracted research attention for decades. One of the interesting aspects of these materials is studying and properly understanding their magnetic properties at room temperature, which is crucial when selecting a material for any related application. In this regard, electron spin resonance (ESR) has proven to be a powerful technique for investigating the spin dynamics of electrons inside the system, which are responsible for the magnetic behaviour of any system. ESR, together with Raman and photoluminescence spectroscopy, is also helpful for studying the defects present or created inside the system in the form of oxygen vacancies or clusters, which are instrumental in determining the room temperature ferromagnetism of transition metal doped ZnO systems and can be controlled through the dopant concentration, an appropriate synthesis technique and the sintering of the samples. For our investigation, we synthesised Cu-doped ZnO nanocrystalline samples with composition Zn(1-x)Cu(x)O (0.005 < x < 0.05) by the pyrophoric method and sintered them at a low temperature of 650 °C. The microwave absorption was studied by X-band (9.46 GHz) electron spin resonance at room temperature. Systematic analysis of the obtained ESR spectra reveals that all compositions of the Cu-doped ZnO samples exhibit resonance signals of appreciable line width and a g value of ~2.2, a typical characteristic of ferromagnetism in the samples. Raman scattering and photoluminescence measurements performed on the samples clearly indicated the presence of pronounced defect-related peaks in the respective spectra. Cu doping in ZnO with varying concentration was also observed to affect the optical band gap and the respective absorption edges in the UV-Vis spectra. 
FTIR spectroscopy reveals the effect of Cu doping on the stretching bonds of ZnO. To probe the structural and morphological changes incurred by Cu doping, we performed XRD, SEM and EDX studies, which confirm adequate Cu substitution without any significant impurity phase formation or lattice disorder. With proper explanation, we attempt to correlate the results observed for the structural, optical and magnetic behaviour of the Cu-doped ZnO samples. We also suggest that our results can be instrumental for appropriate applications of transition metal doped ZnO-based DMS in the fields of optoelectronics and spintronics.
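The reported g value of ~2.2 follows from the ESR resonance condition h·nu = g·muB·B. A quick sketch with standard physical constants; the resonance field shown is implied by the condition at the stated X-band frequency, not a value quoted by the authors.

```python
# ESR resonance condition: h * nu = g * mu_B * B.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
MU_B = 9.2740100783e-24     # Bohr magneton, J / T

def g_factor(freq_hz, field_tesla):
    """Lande g-factor from the ESR resonance condition."""
    return H_PLANCK * freq_hz / (MU_B * field_tesla)

def resonance_field(freq_hz, g):
    """Resonance field (tesla) for a given g-factor and microwave frequency."""
    return H_PLANCK * freq_hz / (g * MU_B)

# At the X-band frequency used in the study (9.46 GHz), g ~ 2.2 implies a
# resonance field of roughly 0.31 T.
b_res = resonance_field(9.46e9, 2.2)
```

Shifts of the resonance field away from the free-electron value (g ~ 2.0023) toward g ~ 2.2 are the signature read off the ESR spectra above.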

Keywords: diluted magnetic semiconductors, electron spin resonance, raman scattering, spintronics

Procedia PDF Downloads 301
422 Other Cancers in Patients With Head and Neck Cancer

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether there is an increased risk of smoking- and alcohol-related cancers, and of other cancers, in HNC patients, and whether rates of non-HNC primaries differ between Aboriginal and non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single centre in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, there were 86 patients (26.9%) with a metachronous or synchronous non-HNC primary. Non-HNC primaries were actually significantly more common in the non-Aboriginal population than in the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a similar rate to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal patients than in Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p=NS). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%) and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age Standardised Incidence Rates (ASIR) in the general community. 
Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), far in excess of ASIRs, which estimate a lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear to be significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, in line with ASIRs, which estimate a lifetime risk of 5% (by age 85). The rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma against an Australian average lifetime risk of 0.6% in the general population; as these are small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared with an ASIR of 17 cases per 100,000 persons, corresponding to an estimated lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers.

Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal

Procedia PDF Downloads 57
421 Farmers Willingness to Pay for Irrigated Maize Production in Rural Kenya

Authors: Dennis Otieno, Lilian Kirimi, Nicholas Odhiambo, Hillary Bii

Abstract:

Kenya is considered a middle-income country yet often does not meet household food security needs, especially in its northern and south-eastern parts; approximately half of the population lives under the poverty line (CIA, 2012). Agriculture is the largest sector in the country, employing 80% of the population, who are thereby directly dependent on the sufficiency of its outputs. This makes efficient, easily accessible and cheap agricultural practices an important matter for improving food security. Maize is the prime staple food commodity in Kenya and represents a substantial share of people's nutritional intake. This study is the result of questionnaire-based interviews, key informant interviews and focus group discussions involving 220 small-scale Kenyan maize farmers. The study sites were Lower Kuja, Bunyala, Nandi, Lower Nzoia, Perkerra, Mwea, Bura, Hola and Galana Kulalu in Kenya. The questionnaire captured the farmers' use and perceived importance of irrigation services and irrigated maize production. Viability was evaluated using four indices, which were all positive, with the NPV giving positive cash flows in less than 21 years at most for one season's output. The mean willingness to pay was found to be KES 3,082, and willingness to pay increased with increases in irrigation premiums. The economic value of water was found to be greater than the willingness to pay, implying that irrigated maize production is sustainable. Farmers stated that viability was influenced by high output levels, good produce quality, crop of choice, availability of sufficient water, and enforcement; the last two factors had a positive influence, while the others had a negative effect on the viability of irrigated maize. A regression was run on the correlation between the willingness to pay for irrigated maize production and scheme- and plot-level factors. 
Farmers that already use other inputs, such as animal manure, hired labour and chemical fertilizer, should also have a demand for improved seeds according to Liebig's law of the minimum and expansion path theory. The regression showed that premiums and high yields have a positive effect on willingness to pay, while produce quality, efficient fertilizer use, and crop season have a negative effect.
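The NPV criterion used in the viability assessment discounts a stream of cash flows at a chosen rate; the project is viable when the discounted sum is positive. A minimal sketch with hypothetical per-farm figures (the flows and discount rate below are illustrative, not the survey's data):

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (year-0) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical per-hectare flows in KES: an up-front irrigation investment,
# then twelve years of net seasonal returns. Illustrative figures only.
flows = [-50000] + [9000] * 12
value = npv(0.10, flows)
```

With these made-up numbers the discounted returns exceed the up-front cost, so the NPV is positive, which is the same sign test the study applies to each season's output.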

Keywords: maize, food security, profits, sustainability, willingness to pay

Procedia PDF Downloads 208
420 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate is valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added to provide smoothness over the time of day and day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies; however, this task is complicated by the presence of interval-censored data, in which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. 
In our case, we allow agencies to deploy their resources over a relatively short period of time to achieve the maximum level of crime reduction. For a particular area within a city where data are available, the proposed approach provides not only an accurate intensity estimate for each time unit considered but also a time-varying crime incidence pattern. Both are helpful in allocating limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by supporting requests for additional resources.
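The penalized EM scheme described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: events are censored to hour-of-day bins, the E-step allocates each event across its interval in proportion to the current intensity, and the roughness penalty is approximated by a simple circular smoothing pass.

```python
import numpy as np

def estimate_intensity(intervals, n_bins=24, n_iter=50, smooth=1.0):
    """EM sketch for interval-censored event counts over hour-of-day bins.

    intervals: list of (start_bin, end_bin) pairs, end inclusive --
    each event is only known to have occurred somewhere in its interval.
    """
    lam = np.ones(n_bins)  # start from a uniform intensity
    for _ in range(n_iter):
        # E-step: allocate each censored event across its interval
        # in proportion to the current intensity estimate.
        expected = np.zeros(n_bins)
        for s, e in intervals:
            bins = np.arange(s, e + 1) % n_bins
            w = lam[bins] / lam[bins].sum()
            expected[bins] += w
        # M-step: Poisson rate update, then a circular smoothing pass
        # standing in for the time-of-day roughness penalty.
        lam = expected
        lam = (smooth * np.roll(lam, 1) + lam + smooth * np.roll(lam, -1)) / (1 + 2 * smooth)
        lam = np.maximum(lam, 1e-9)  # keep rates strictly positive
    return lam
```

The smoothing kernel conserves the total expected event count, so the fitted intensities still integrate to the number of observed events.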

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 58
419 The Impact of Climate Change on Sustainable Aquaculture Production

Authors: Peyman Mosberian-Tanha, Mona Rezaei

Abstract:

The aquaculture sector is the fastest-growing food sector, with an annual growth rate of about 10%. The sustainability of aquaculture production, however, has been debated mainly in relation to the feed ingredients used for farmed fish. In line with policies for more sustainable production, the industry has been able to decrease its dependency on marine-based ingredients. As a result, plant-based ingredients have increasingly been incorporated into aquaculture feeds, especially feeds for salmonids, a group of popular carnivorous species. The effect of these ingredients on salmonid health and performance has been widely studied. In most cases, plant-based diets are associated with varying degrees of health and performance issues across salmonids, depending partly on the inclusion level of plant ingredients and the species in question. The sector, however, also faces environmental challenges associated with climate change. Data from trials in which salmonids were subjected to environmental challenges of various types show adverse physiological responses, partly related to stress. To date, only a limited number of studies have reported the interactive effects of adverse environmental conditions and dietary regimens on salmonids. These studies have shown that adverse environmental conditions exacerbate the detrimental effect of plant-based diets on digestive function and health in salmonids, indicating an additional obstacle to sustainable growth of the sector. The adverse environmental conditions most often studied in farmed fish are changes in water quality parameters such as oxygen and temperature, which are typically altered by climate change and, more specifically, global warming. 
In a challenge study, we observed that in fish fed a plant-based diet, the ability to absorb dietary energy was further reduced when the fish were reared under a low oxygen level. In addition, gut health in these fish was severely impaired. Other studies also confirm the adverse effect of environmental challenges on gut health in fish. These effects on digestive function and gut health may leave salmonids less resistant to disease and weaker in performance, with significant economic and ethical implications. Overall, these findings indicate the multidimensional negative effects of climate change, as a major environmental issue, on different sectors, including aquaculture production. A comprehensive evaluation of ways to cope with climate change is therefore essential for planning more sustainable strategies in the aquaculture sector.

Keywords: aquaculture, climate change, sustainability, salmonids

Procedia PDF Downloads 174
418 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Among the major processes commercially available, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although widely used, these three processes are highly expensive owing to high energy input requirements and high operating costs associated with maintenance and the stress induced on the systems by harsh alkaline media. Beyond cost, the environmental footprint of these desalination techniques is significant: damage to marine ecosystems, extensive land use, and substantial greenhouse gas emissions. A less energy-intensive technique based on membrane separation, pursued to reduce both the carbon footprint and operating costs, is membrane distillation (MD). First developed in the 1960s, MD is an alternative desalination technology that has attracted growing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of the brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD over other commercially available technologies (MSF and MED), and especially RO, are reduced membrane and module stress due to the absence of trans-membrane pressure; less impact of contaminant fouling on the distillate, since only water vapor is transferred; the possibility of using low-grade or waste heat from the oil and gas industries to bring the feed to the required temperature difference across the membrane; superior water quality; and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. 
The objective of this study is to analyze the characteristics and morphology of a membrane suitable for DCMD, through SEM imaging and contact angle measurement, and to study the quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and the laboratory data are used to compare DCMD distillate quality with that of other desalination techniques and with relevant standards. SEM analysis showed that the PTFE membrane used in this study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore polypropylene (PP) backing. ICP and IC analyses of the distillate, for feeds of any salinity and at feed temperatures up to 70 °C, showed an electrical conductivity below 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from feeds of very high salinity (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
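The driving force in DCMD is the trans-membrane water vapor pressure difference mentioned above. A rough sketch of how it can be estimated, using the Antoine equation for pure water and Raoult's law as a crude salinity correction; the constants and simplifications here are standard textbook values and our own assumptions, not parameters from this study:

```python
def water_vapor_pressure_pa(temp_c):
    """Antoine equation for water (valid roughly 1-100 degC), returns Pa."""
    a, b, c = 8.07131, 1730.63, 233.426  # Antoine constants, P in mmHg, T in degC
    p_mmhg = 10 ** (a - b / (c + temp_c))
    return p_mmhg * 133.322  # mmHg -> Pa

def dcmd_driving_force_pa(feed_c, permeate_c, feed_mole_frac_water=1.0):
    """Trans-membrane vapor pressure difference driving DCMD flux.

    Raoult's law approximates the salinity effect on the feed side;
    activity coefficients are ignored in this sketch.
    """
    p_feed = feed_mole_frac_water * water_vapor_pressure_pa(feed_c)
    p_perm = water_vapor_pressure_pa(permeate_c)
    return p_feed - p_perm
```

At a 70 °C feed against a cool permeate the driving force is strongly positive, and increasing feed salinity (lower water mole fraction) reduces it slightly, consistent with MD's weak sensitivity to feed salinity.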

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 217
417 Climate Change and Food Security in Nigeria: The World Bank Assisted Third National Fadama Development Programme (NFDP III) Approach in Rivers State, Niger Delta, Nigeria

Authors: Temple Probyne Abali

Abstract:

Port Harcourt, Rivers State, in the Niger Delta region of Nigeria, is bedeviled by climate change, which threatens food security and livelihoods. This study examined a four-decade (1980-2020) trend of climate change as well as its socio-economic impact on food security in the region. To achieve sustainable food security and livelihoods amidst this phenomenon, the study adopted the approach of the World Bank Assisted Third National Fadama Development Programme. Climate data were obtained as secondary data from the Nigeria Meteorological Agency (NIMET), and the results for climate change over the four-decade period were displayed in tables, charts, and maps. Data on the socio-economic impact on food security and livelihoods were acquired through questionnaire design. A purposive random sampling technique was used to select 5 coastal communities in the region known for viable economic potential for agricultural development, and the results were analyzed using Analysis of Variance (ANOVA). The Participatory Rural Appraisal (PRA) technique of the World Bank for needs assessment was adopted to select 5 agricultural sub-project proposals/activities based on each group's common economic interest, from a total of 1,000 farmers drawn from the 5 communities across different age groups, including men, women, youths, and the vulnerable. Based on the farmers' sub-project interests, each group's Strengths, Weaknesses, Opportunities and Threats (SWOT), Problem Listing Matrix, Skill Gap Analysis, and Environmental Impact Assessments (EIAs) of their sub-project proposals/activities were analyzed, with substantial Monitoring and Evaluation (M&E) using the Specific, Measurable, Attainable, Reliable and Time-bound (SMART) approach. The PRA findings showed that the farmers recorded a considerable increase in income of over 200% within the 5-year project plan (2008-2013). The study recommends capacity building and advisory services on this PRA innovation. 
In doing so, agricultural production would increase sustainably and food security would be assured in an environmentally friendly manner, in line with the United Nations' Sustainable Development Goals (SDGs).

Keywords: climate change, food security, fadama, World Bank, agriculture, SDGs

Procedia PDF Downloads 80
416 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs, because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target, and a thorough customer survey in Surabaya is needed to update customer building data. However, surveys have so far been carried out by deploying officers to visit each PDAM customer one by one, which requires considerable effort and cost. For this reason, this research offers mobile mapping, a mapping method that is more efficient in terms of time and cost. The device is simply installed on a car so that it records the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors together with GNSS, but lidar is costly. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors combined with GNSS and IMU sensors. The camera used is a 3 MP unit with 720p resolution and a 78° diagonal field of view. The principle of the system is to integrate four webcam camera sensors with GNSS and an IMU to acquire photo data tagged with location (latitude, longitude) and orientation (roll, pitch, yaw). The device is mounted on a tripod with a vacuum suction base attached to the car's roof so that it does not fall off while driving. The output data are then analyzed with artificial intelligence: near-duplicate images are removed using cosine similarity, and building types are then classified. Data reduction eliminates similar frames while retaining the image that shows the complete house, so that it can be processed for subsequent building classification. 
The AI method used is transfer learning with the pre-trained VGG-16 model. Similarity analysis reduced the data by 50%. Georeferencing is then performed using the Google Maps API to obtain address information for the coordinates in the data, and a geographic join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
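The cosine-similarity reduction step can be illustrated with a minimal greedy sketch over image feature vectors (e.g., embeddings from a pre-trained VGG-16). The threshold value and the compare-against-last-kept-frame strategy are our assumptions for illustration, not details reported by the paper:

```python
import numpy as np

def deduplicate(features, threshold=0.95):
    """Greedy cosine-similarity dedup for sequential image feature vectors.

    features: (n, d) array of embeddings in capture order.
    A frame is kept only if it is not too similar to the last kept
    frame, mirroring the sequential capture of a moving survey car.
    Returns the indices of the kept frames.
    """
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = [0]  # always keep the first frame
    for i in range(1, len(normed)):
        cos = float(normed[i] @ normed[kept[-1]])  # cosine similarity
        if cos < threshold:
            kept.append(i)
    return kept
```

With near-duplicate consecutive frames, roughly half the images drop out, matching the order of reduction the study reports.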

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 69
415 Fatty Acid Translocase (Cd36), Energy Substrate Utilization, and Insulin Signaling in Brown Adipose Tissue in Spontaneously Hypertensive Rats

Authors: Michal Pravenec, Miroslava Simakova, Jan Silhavy

Abstract:

Brown adipose tissue (BAT) plays an important role in lipid and glucose metabolism in rodents and possibly also in humans. Recently, using a systems genetics approach in BAT from BXH/HXB recombinant inbred strains, derived from the SHR (spontaneously hypertensive rat) and BN (Brown Norway) progenitors, we identified Cd36 (fatty acid translocase) as the hub gene of a co-expression module associated with BAT relative weight and function. An important aspect of BAT biology is to better understand the mechanisms regulating the uptake and utilization of fatty acids and glucose. Accordingly, BAT function in the SHR, which harbors a mutant, nonfunctional Cd36 variant (hereafter SHR-Cd36⁻/⁻), was compared with an SHR transgenic line expressing wild-type Cd36 under the control of a universal promoter (hereafter SHR-Cd36⁺/⁺). BAT was incubated in media containing insulin and 14C-U-glucose alone or 14C-U-glucose together with palmitate. Incorporation of glucose into BAT lipids was significantly higher in SHR-Cd36⁺/⁺ than in SHR-Cd36⁻/⁻ rats when the incubation media contained glucose alone (SHR-Cd36⁻/⁻ 591 ± 75 vs. SHR-Cd36⁺/⁺ 1036 ± 135 nmol/gl./2h; P < 0.005). Adding palmitate to the incubation media had no effect in SHR-Cd36⁻/⁻ rats but significantly reduced glucose incorporation into BAT lipids in SHR-Cd36⁺/⁺ rats (SHR-Cd36⁻/⁻ 543 ± 55 vs. SHR-Cd36⁺/⁺ 766 ± 75 nmol/gl./2h; P < 0.05 denotes a significant Cd36 × palmitate interaction determined by two-way ANOVA). This Cd36-dependent reduction of glucose uptake in SHR-Cd36⁺/⁺ BAT was likely secondary to increased palmitate incorporation and utilization due to the presence of wild-type Cd36 fatty acid translocase in the transgenic rats. This possibility is supported by the increased incorporation of 14C-U-palmitate into BAT lipids when both palmitate and glucose were present in the incubation media (palmitate alone: SHR-Cd36⁻/⁻ 870 ± 21 vs. SHR-Cd36⁺/⁺ 899 ± 42; glucose+palmitate: SHR-Cd36⁻/⁻ 899 ± 47 vs. 
SHR-Cd36⁺/⁺ 1460 ± 111 nmol/palm./2h; P < 0.05 denotes a significant Cd36 × glucose interaction determined by two-way ANOVA). It is possible that the addition of glucose to the incubation media increased palmitate incorporation into BAT lipids in SHR-Cd36⁺/⁺ rats because glucose became available for glycerol phosphate production and increased triglyceride synthesis. These changes in glucose and palmitate incorporation into BAT lipids were associated with significant differential expression of the Irs1, Irs2, Slc2a4, and Foxo1 genes, involved in insulin signaling and glucose metabolism, in SHR-Cd36⁺/⁺ rats only, which suggests Cd36-dependent effects on insulin action. In conclusion, these results provide compelling evidence that Cd36 plays an important role in BAT insulin signaling and energy substrate utilization.
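The Cd36 × palmitate interaction tested by two-way ANOVA corresponds to a difference-in-differences contrast of the four cell means. A minimal illustration using the glucose-incorporation means reported above; a full ANOVA also requires the within-cell variances and sample sizes, which this sketch omits:

```python
def interaction_effect(a1b1, a1b2, a2b1, a2b2):
    """Difference-in-differences interaction in a 2x2 factorial design:
    how much the effect of factor B (e.g., palmitate) differs between
    the two levels of factor A (e.g., genotype). Zero means the two
    factors act additively."""
    return (a1b1 - a1b2) - (a2b1 - a2b2)

# Cell means from the abstract (glucose incorporation, nmol/gl./2h):
# SHR-Cd36-/-: glucose alone 591, glucose+palmitate 543
# SHR-Cd36+/+: glucose alone 1036, glucose+palmitate 766
effect = interaction_effect(591, 543, 1036, 766)
```

The nonzero contrast (palmitate costs the transgenic line far more glucose incorporation than the mutant line) is the pattern the reported significant interaction formalizes.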

Keywords: brown adipose tissue, Cd36, energy substrate utilization, insulin signaling, spontaneously hypertensive rat

Procedia PDF Downloads 129
414 A Study on the Personality Traits of Students Who Have Chosen Medicine as Their Career

Authors: Khairani Omar, Shalinawati Ramli, Nurul Azmawati Mohamed, Zarini Ismail, Nur Syahrina Rahim, Nurul Hayati Chamhuri

Abstract:

Choosing a career that matches a student's personality traits is one of the key factors for future work satisfaction, because career satisfaction is highest when the career is in line with one's personality strengths, values, and attitudes. Personality traits play a major role in determining a student's success in the medical course. In the pre-clinical years, medical theory is emphasized; thus, conscientious students perform better than those with a lower level of this trait. As the emphasis shifts in the clinical years, during which patient interaction is important, personality traits involving interpersonal values become more essential for success. The aim of this study was to determine the personality traits of students who had chosen medicine as their career. It was a cross-sectional study conducted at the Islamic Science University of Malaysia. The respondents consisted of 81 students aged between 20 and 21 years. A personality assessment inventory validated for the local context was used to determine the students' personality traits. The instrument assessed 15 traits: aggressive, analytical, autonomy, creativity, extrovert, intellectual, motivation, diversity, resiliency, self-criticism, control, helpful, support, structured, and achievement. Scores ranged between 1-100% and were categorized into low (1-30%), moderate (40-60%), and high (70-100%). The respondents were Year 3 pre-clinical medical students, with more female (69%) than male (31%) students. The majority came from middle-income families, and approximately 70% of both parents of the respondents had tertiary education. The majority of students had high scores in autonomy, creativity, diversity, helpful, structured, and achievement; in other words, more than 50% of them scored high (70-100%) in these traits, which is beneficial for the medical course. 
For the aggressive trait, 54% had moderate scores, which is compatible with medicine as it indicates an inclination toward assertiveness. In the analytical and intellectual components, only 40% and 25%, respectively, had high scores. These results contradict the usual expectation that medical students are highly analytical and intellectual. High scores in extroversion would have been an added value, as this trait reflects good interpersonal skills; however, the students scored roughly evenly across all categories of this trait. Resilience is important in medical school, as the course is difficult and demanding; the students scored well in this component, with 46% scoring high and 39% moderate. In conclusion, by understanding their personality traits, strengths, and weaknesses, students have the opportunity to improve in the areas where they are lacking. This will help them become better doctors in the future.

Keywords: career, medical students, medicine, personality traits

Procedia PDF Downloads 281
413 Revolutions and Cyclic Patterns in Chinese Town Planning: The Case-Study of Shenzhen

Authors: Domenica Bona

Abstract:

Colin Chant and David Goodman argue that historians of Chinese pre-industrial cities tend to underestimate revolutions and overestimate cyclic patterns: periods of peace and prosperity in the early part of each dynasty, followed by peasants' rebellions and upheavals. Boyd described these cyclic patterns as part of the background of Chinese town planning and architecture. Thus the old ideals of city planning - square plan, southward orientation, and a palace along the central axis - are revived again and again in the ascendant phases of several dynastic cycles (e.g., Chang'an, Kaifeng, and Beijing). Along this line of thought, my paper questions the relationship between the "magic square rule" and modern Chinese urban planning. As a matter of fact, the classical theme of "cosmic Taoist urbanism" is still a reference for planning cities and new urban developments whenever there is an intention to express nationalist ideals and "cultural straightforwardness." Besides, some case studies can be related to "modern dynasties": the first Republic under the Kuomintang, the red People's Republic, and the post-Maoist open country of Deng Xiaoping. Considering the project for the new capital at Nanjing in the 1930s, Beijing's Tiananmen area in the 1950s, and Shenzhen's Futian CBD in the late 20th century, I argue that cyclic patterns are still in place, though with deformations related to westernization, private interests, and a lack of spirituality. How far are new Chinese cities westernized - or do they simply seem to be? Symbolism, invisible frameworks, repeating features, and behavioural patterns make urban China only superficially western. This can be noticed in cities previously occupied by foreigners, like Hong Kong, or in newly founded ones, like Shenzhen, where both Asian and non-Asian people can feel the shift from New-York-like landscapes to something else. 
Current planning in the main metropolitan areas shows a blurred relationship between public policies and private investments: two levels of decision and action, one addressing the larger scale and infrastructure, the other concerning the micro scale and the development of single plots. While zoning is instrumental in this process, master plans are often laid out over very poor cartography, so much so that any relation between the formal character of new cities and the centuries-old structure of the surrounding territory gets lost.

Keywords: China, contemporary cities, cultural heritage, shenzhen, urban planning

Procedia PDF Downloads 349
412 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e., the urban toll, is the outcome of a political process shaped by threefold objectives: effectiveness, equity, and social acceptability. This produces economic interest groups with incongruent preferences. The plausibility of this claim goes hand in hand with the fact that these interest groups are also taxpayers, who undeniably perceive the urban toll as an additional charge. This wariness is coupled with questions about the conditions of use and the redistribution of the collected revenue; the idea of the Leviathan state completes the picture. In a nutshell, although research on road congestion proliferates, no de facto legitimacy can be claimed. Nonetheless, the theory of urban tolls leads economists to question how the negative external effects of congestion can be reduced, and it is here that the urban toll appears to offer an answer. Undeniably, the urban toll raises inherent conflicts, due both to the apparent free-of-charge principle of a public asset and to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting such a measure, one has to recognize the factors that affect the acceptability of a congestion toll, a topic that has generated a copious number of articles and reports, most of which lack solid theoretical content. Nowadays, uncertainties remain over the exact nature of the acceptability process: acceptance of a congestion tariff can differ from one era to another, from one region to another, and from one population to another. 
Notably, this article attempts to bring into focus a link between the social acceptability of the urban congestion toll and the value of time, through a survey method rarely employed in Tunisia: the stated preference method. How can the urban toll, as a tax, be defined, justified, and made acceptable? How can an equitable and effective congestion tariff be reached? How can the costs of the urban toll be covered? In what way can the redistribution of the toll revenue be made visible and economically equitable? How can the redistribution of that revenue compensate those disadvantaged by the introduction of such a tariff measure? This paper offers answers to these research questions, following the line of contribution of Jules Dupuit in 1844.

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 272