Search results for: non uniform utility computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2513

683 Insight into the Electrocatalytic Activities of Nitrogen-Doped Graphyne and Graphdiyne Families: A First-Principles Study

Authors: Bikram K. Das, Kalyan K. Chattopadhyay

Abstract:

The advent of 2-D materials in the last decade has induced a fresh spur of growth in fuel cell technology, as these materials have some highly promising traits that can be exploited to facilitate the Oxygen Reduction Reaction (ORR) in an efficient way. Among the various 2-D carbon materials, graphyne (Gy) and graphdiyne (Gdy), with their intrinsic non-uniform charge distribution, hold promise for this purpose, and it is expected that substitutional nitrogen (N) doping could further enhance their efficiency. In this regard, dispersive force corrected density functional theory is used to map the oxygen reduction reaction (ORR) kinetics of five different kinds of N doped graphyne and graphdiyne systems (namely αGy, βGy, γGy, RGy, 6,6,12Gy and Gdy) in alkaline medium. The best doping site for each Gy/Gdy system is determined by comparing the formation energies of the possible doping configurations. Similarly, the best di-oxygen (O₂) adsorption sites for the doped systems are identified by comparing the adsorption energies. O₂ adsorption on all N doped Gy/Gdy systems is found to be energetically favorable. ORR on a catalyst surface may occur either via the Eley-Rideal (ER) or the Langmuir-Hinshelwood (LH) pathway. Systematic studies performed on the considered systems reveal that all of them favor the ER pathway. Further, depending on the nature of di-oxygen adsorption, ORR can follow either an associative or a dissociative mechanism; the possibility of occurrence of both mechanisms is tested thoroughly for each N doped Gy/Gdy. For the ORR process, all the Gy/Gdy systems are observed to prefer the efficient four-electron pathway, but the expected monotonically exothermic reaction pathway is found only for N doped 6,6,12Gy and RGy following the associative mechanism and for N doped βGy, γGy and Gdy following the dissociative mechanism. Further computation performed for these systems reveals that for N doped 6,6,12Gy, RGy, βGy, γGy and Gdy the overpotentials are 1.08 V, 0.94 V, 1.17 V, 1.21 V and 1.04 V respectively, indicating that N doped RGy is the most promising material among the considered ones for carrying out ORR in alkaline medium. The stability of the ORR intermediate states with variation of pH and electrode potential is further explored with Pourbaix diagrams, and the activities of these systems in alkaline medium are compared with previously reported B/N doped identical systems for ORR in acidic medium in terms of a common descriptor.
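
An illustrative back-of-the-envelope sketch of how such overpotentials can be obtained from computed step energetics is given below, assuming the common computational hydrogen electrode convention; the step free-energy values are placeholders, not results from this study.

```python
# Illustrative sketch only (computational hydrogen electrode convention assumed;
# the step free energies below are placeholders, not values from this study).
E_EQ = 1.23  # equilibrium potential of the ORR in volts (vs. RHE)

def orr_overpotential(dG_steps_eV):
    """Overpotential from the free-energy changes (eV) of the four 1e- steps at U = 0 V."""
    # The least exothermic step limits the potential at which every step stays downhill.
    limiting_potential = -max(dG_steps_eV)   # one electron per step, so eV -> V directly
    return E_EQ - limiting_potential

# Hypothetical step energies (eV): the weakest step (-0.6 eV) gives eta = 1.23 - 0.6 = 0.63 V
print(orr_overpotential([-1.9, -1.3, -0.6, -1.1]))
```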

Keywords: graphdiyne, graphyne, nitrogen-doped, ORR

Procedia PDF Downloads 107
682 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. The ICESAT-2 track generates multiple VSs as it crosses the different water bodies. The difficulties are particularly pronounced in large river basins where there are many tributaries and meanders often adjacent to each other. One challenge is to split photon segments along a beam to accurately partition them to extract only the true representative water height for individual elements. As far as we can establish, there is no automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory solutions where the number of intersections is large, and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods on the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers compared to previously reported studies. Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
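
To make the post-segmentation workflow concrete, here is a minimal sketch (an assumption for illustration, not the authors' code) of computing the mean WSE at a VS with a simple 3-sigma outlier filter and evaluating it against gauge observations with R², RMSE and MAE:

```python
# Minimal sketch (assumed, not the authors' code): mean WSE at a virtual station
# after a simple 3-sigma outlier filter, plus the R2 / RMSE / MAE comparison
# against gauge observations reported above.
import numpy as np

def mean_wse(photon_heights, n_sigma=3.0):
    h = np.asarray(photon_heights, dtype=float)
    kept = h[np.abs(h - h.mean()) <= n_sigma * h.std()]  # drop photons far from the bulk
    return kept.mean()

def error_metrics(predicted, observed):
    p, o = np.asarray(predicted, dtype=float), np.asarray(observed, dtype=float)
    resid = p - o
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((o - o.mean()) ** 2)
    return {"R2": r2, "RMSE": np.sqrt(np.mean(resid ** 2)), "MAE": np.mean(np.abs(resid))}
```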

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 45
681 Studies on Structural and Electrical Properties of Lanthanum Doped Sr₂CoMoO₆₋δ System

Authors: Pravin Kumar, Rajendra K. Singh, Prabhakar Singh

Abstract:

Extensive research on Mo-based double perovskite systems has been reported for potential application as electrode materials of solid oxide fuel cells. Mo-based double perovskites, studied in the form of B-site ordered double perovskite materials with the general formula A₂B′B″O₆, built from an alkaline earth element (A = Sr, Ca, Ba) and heterovalent transition metals (B′ = Fe, Co, Ni, Cr, etc. and B″ = Mo, W, etc.), are raising significant interest as potential mixed ionic-electronic conductors in the temperature range of 500-800 °C. Such systems reveal higher electrical conductivity, particularly those of the form Sr₂MMoO₆₋δ (M = Mg, Mn, Fe, Co, Ni, Zn, etc.), which were studied in different environments (air/H₂/H₂-Ar/CH₄) at intermediate temperatures. Among them, the Sr₂CoMoO₆₋δ system is a potential candidate as an anode material for solid oxide fuel cells (SOFCs) due to its better electrical conductivity. Therefore, the Sr₂CoMoO₆₋δ (SCM) system with La doped on the Sr site has been studied to investigate its structural and electrical properties. The double perovskite system Sr₂CoMoO₆₋δ (SCM) and the doped system Sr₂₋ₓLaₓCoMoO₆₋δ (SLCM, x = 0.04) were synthesized by the citrate-nitrate combustion synthesis route. Thermal studies were carried out by thermo-gravimetric analysis. Phase formation was confirmed by powder X-ray diffraction (XRD) as a tetragonal structure with space group I4/m. A minor phase of SrMoO₄ (s.g. I41/a) was identified as a secondary phase using JCPDS card no. 85-0586. Micro-structural investigations revealed the formation of uniform grains. The average grain size of the undoped (SCM) and doped (SLCM) compositions was calculated by the linear intercept method and found to be ~3.8 μm and 2.7 μm, respectively. The electrical conductivity of SLCM is found to be higher than that of SCM in air within the temperature range of 200-600 °C. The SLCM system was also measured in a reducing atmosphere (pure H₂) in the temperature range 300-600 °C. SLCM showed higher conductivity in the reducing atmosphere (H₂) than in air and could therefore be a promising anode material for SOFCs.
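
A minimal sketch of the linear intercept calculation mentioned above is shown below; this is the textbook form of the method, assumed here for illustration, not necessarily the authors' exact procedure.

```python
# Minimal sketch of the linear intercept estimate (textbook form, assumed here;
# not necessarily the authors' exact procedure).
def mean_grain_size_um(test_line_length_um, n_intercepts):
    """Average grain size = test line length / number of grain-boundary intercepts."""
    return test_line_length_um / n_intercepts

# e.g. a 380 um test line crossing 100 grain boundaries gives ~3.8 um
print(mean_grain_size_um(380.0, 100))
```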

Keywords: double perovskite, electrical conductivity, SEM, XRD

Procedia PDF Downloads 114
680 A 3-Dimensional Memory-Based Model for Planning Working Postures Reaching Specific Area with Postural Constraints

Authors: Minho Lee, Donghyun Back, Jaemoon Jung, Woojin Park

Abstract:

The current 3-dimensional (3D) posture prediction models commonly provide only a few optimal postures to achieve a specific objective. The problem with such models is that they are incapable of rapidly providing several optimal posture candidates according to various situations. In order to solve this problem, this paper presents a 3D memory-based posture planning (3D MBPP) model, which is a new digital human model that can analyze the feasible postures in 3D space for reaching tasks that have postural constraints and a specific reaching space. The 3D MBPP model can be applied to types of work that are done in constrained working postures and have a specific reaching space. Examples of such work include driving an excavator, driving automobiles, painting buildings, working at an office, pitching/batting, and boxing. For these types of work, a limited amount of space is required to store all of the feasible postures, as the hand reach boundary can be determined prior to performing the task. This prevents computation time from increasing exponentially, which has been one of the major drawbacks of memory-based posture planning models in 3D space. This paper validates the utility of the 3D MBPP model using a practical example of analyzing baseball batting posture. In baseball, batters swing with both feet fixed to the ground. This motion is appropriate for use with the 3D MBPP model since the player must try to hit the ball when the ball is located inside the strike zone (a limited area) in a constrained posture. The results from the analysis showed that the stored and the optimal postures vary depending on the ball’s flying path, the hitting location, the batter’s body size, and the batting objective. These results can be used to establish the optimal postural strategies for achieving the batting objective and performing effective hitting. The 3D MBPP model can also be applied to various domains to determine optimal postural strategies and improve worker comfort.
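
The memory-based idea above, storing feasible postures for a bounded reach space and querying them on demand, could look like the following illustrative sketch; the posture generator, feasibility test and comfort score are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: storing feasible postures for a bounded reach space
# and querying them on demand. The posture generator, feasibility test and
# comfort score are hypothetical placeholders, not the authors' implementation.
def build_posture_memory(reach_points, generate_postures, is_feasible, comfort):
    """Map each discretized reach point to its feasible postures, ranked by comfort."""
    memory = {}
    for point in reach_points:
        candidates = [p for p in generate_postures(point) if is_feasible(p, point)]
        memory[point] = sorted(candidates, key=comfort, reverse=True)
    return memory

def query_postures(memory, reach_point, k=5):
    """Return the k most comfortable stored postures for a requested reach point."""
    return memory.get(reach_point, [])[:k]
```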

Keywords: baseball, memory-based, posture prediction, reaching area, 3D digital human models

Procedia PDF Downloads 202
679 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies

Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs

Abstract:

Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow for cost-effective or more precise analyses of relevant sequences, although many such techniques require a known genome, and it can be costly to purchase probes from a company. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts from a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon may be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, then compared to previously generated phylogenies across this group of gobies. This will provide an assessment of the utility of the technique in cross-species studies and for analyzing low genetic variation within species, as is the case for E. kristinae. This method has potential applications to provide economical ways to expand population genetic and evolutionary biology studies for non-model organisms.

Keywords: coastal lagoons, endangered species, non-model organism, target capture method

Procedia PDF Downloads 171
678 Modeling of Surge Corona Using Type94 in Overhead Power Lines

Authors: Zahira Anane, Abdelhafid Bayadi

Abstract:

Corona in HV overhead transmission lines is an important source of attenuation and distortion of overvoltage surges. This distortion, which is superimposed on the distortion caused by the skin effect, is due to the dissipation of energy by injection of space charges around the conductor; this process takes place as soon as the instantaneous voltage exceeds the corona inception threshold of the conductors. This paper presents a mathematical model to determine the corona inception voltage, the critical electric field and the corona radius, in order to predict the capacitive changes at the conductor of a transmission line due to corona. This model has been incorporated into the Alternative Transients Program version of the Electromagnetic Transients Program (ATP/EMTP) as a user-defined component, using the MODELS interface with the NORTON Type94 component of this program and a foreign subroutine. To obtain the displacement of the corona charge shell, the dichotomy (bisection) method is used for this computation. The present corona model can be used for computing the distortion and attenuation of transient overvoltage waves propagating along a very high voltage transmission line.
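
The abstract does not reproduce its equations, so the snippet below is a hedged, purely illustrative sketch using one classical alternative, Peek's formula, for the critical surface gradient and the corona inception voltage of a single cylindrical conductor; it is not the model implemented in the paper.

```python
# Hedged sketch: Peek's formula for the critical surface gradient and the corona
# inception voltage of a single cylindrical conductor (illustrative alternative,
# not the model implemented in the paper).
import math

def peek_critical_field(r_cm, delta=1.0, m=0.85, g0=30.0):
    """Critical surface gradient (kV/cm, peak) for a conductor of radius r_cm."""
    return g0 * m * delta * (1.0 + 0.301 / math.sqrt(delta * r_cm))

def corona_inception_voltage(r_cm, spacing_cm, delta=1.0, m=0.85):
    """Inception voltage (kV, peak) for conductor radius r_cm and return spacing spacing_cm."""
    return peek_critical_field(r_cm, delta, m) * r_cm * math.log(spacing_cm / r_cm)
```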

Keywords: high voltage, corona, Type94 NORTON, dichotomy, ATP/EMTP, MODELS, distortion, foreign model

Procedia PDF Downloads 604
677 A Computational Analysis of Flow and Acoustics around a Car Wing Mirror

Authors: Aidan J. Bowes, Reaz Hasan

Abstract:

The automotive industry is continually aiming to develop the aerodynamics of car body design. This may be for a variety of beneficial reasons, such as to increase speed or fuel efficiency by reducing drag. However, recently there has been a greater amount of focus on the wind noise produced while driving. Designers in this industry seek a combination of both simplicity of approach and overall effectiveness. This, combined with the growing availability of commercial CFD (Computational Fluid Dynamics) packages, is likely to lead to an increase in the use of RANS (Reynolds Averaged Navier-Stokes) based CFD methods. This is due to these methods often being simpler than other CFD methods and having a lower demand on time and computing power. In this investigation, the effectiveness of turbulent flow and acoustic noise prediction using RANS based methods has been assessed for different wing mirror geometries. Three different RANS based models were used: standard k-ε, realizable k-ε and k-ω SST. The merits and limitations of these methods are then discussed by comparing with both experimental and numerical results found in the literature. In general, flow prediction is fairly comparable to more complex LES (Large Eddy Simulation) based methods, in particular for the k-ω SST model. However, acoustic noise prediction using RANS based methods still leaves opportunities for improvement.

Keywords: acoustics, aerodynamics, RANS models, turbulent flow

Procedia PDF Downloads 427
676 The Impact of Information and Communication Technologies on Teaching Performance at an Iranian University

Authors: Yusef Hedjazi, Saeedeh Nazari Nooghabi

Abstract:

New information and communication technologies (ICT), as one of the main needs of faculty members in the process of teaching and learning, have been used in Iran's higher education system since 2000. The main purpose of this study is to investigate the role of information and communication technologies (ICT) in the teaching performance of the Agricultural and Natural Resources Faculties at the University of Tehran. The statistical population of the study consisted of all 250 faculty members in the Agriculture and Natural Resources Colleges, and a questionnaire was used to collect data. The reliability of the questionnaire was confirmed by computing Cronbach's alpha coefficient, which was greater than 0.72. The study showed a significant relationship between agricultural faculty members' teaching performance and competency in using ICT. The results of the regression analysis also explained 51.7% of the variance in teaching performance. The six independent variables that accounted for the explained variance were experience in using educational websites or software, use of educational multimedia (e.g. film and CD, etc.), making presentations using PowerPoint, familiarity with online education websites, using newsgroups to discuss educational subjects with colleagues and students, and using electronic communication (messengers) to solve students' problems.
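
For illustration only, the kind of multiple regression summarized above (variance explained, R²) can be reproduced on placeholder data as follows; the arrays are synthetic, not the survey data.

```python
# Illustrative sketch only: a multiple linear regression of teaching performance
# on six ICT-use predictors, reporting the variance explained (R^2) as above.
# The arrays are synthetic placeholders, not the survey data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((250, 6))                       # six ICT-use predictors, 250 respondents
y = X @ np.array([0.5, 0.3, 0.4, 0.2, 0.1, 0.3]) + 0.3 * rng.random(250)
model = LinearRegression().fit(X, y)
print("Variance explained (R^2):", model.score(X, y))
```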

Keywords: information and communication technologies, agricultural and natural resources, faculties, teaching performance

Procedia PDF Downloads 310
675 Enhancing Single Channel Minimum Quantity Lubrication through Bypass Controlled Design for Deep Hole Drilling with Small Diameter Tool

Authors: Yongrong Li, Ralf Domroes

Abstract:

Due to significant energy savings, enablement of higher machining speeds, and environmentally friendly features, Minimum Quantity Lubrication (MQL) has been used efficiently for many machining processes. However, in the field of deep hole drilling with a small tool diameter (D < 5 mm) and a long tool (length L > 25xD), a single channel MQL system is always a bottleneck. Single channel MQL, based on the Venturi principle, suffers from an insufficient oil quantity caused by the reduced pressure difference during the deep hole drilling process. In this paper, a system concept based on a bypass design is explored for its ability to dynamically reach the required pressure difference between the air inlet and the inside of the aerosol generator, so that the volume of oil demanded by deep hole drilling can be generated and delivered to the tool tips. The system concept has been investigated in static and dynamic laboratory testing. In the static test, the oil volume with and without bypass control was measured, showing a potential oil quantity increase of up to 1000%. A spray pattern test demonstrated the differences in aerosol particle size, aerosol distribution and reaction time between the single channel and the bypass controlled single channel MQL systems. A dynamic trial machining test of deep hole drilling (drill tool D = 4.5 mm, L = 40xD) was carried out with the proposed system on a difficult-to-machine material, AlSi7Mg. The tool wear over 100 meters of drilling was tracked and analyzed. The results show that single channel MQL with bypass control can overcome the limitation and enhance deep hole drilling with a small tool. The optimized combination of inlet air pressure and bypass control results in high quality oil delivery to the tool tips with a uniform and continuous aerosol flow.

Keywords: deep hole drilling, green production, Minimum Quantity Lubrication (MQL), near dry machining

Procedia PDF Downloads 191
674 The Design of Intelligent Passenger Organization System for Metro Stations Based on Anylogic

Authors: Cheng Zeng, Xia Luo

Abstract:

Passenger organization has always been an essential part of China's metro operation and management. Facing massive passenger flows, stations need to improve their degree of intelligence and automation through an appropriate integrated system. Based on the existing integrated supervisory control system (ISCS) and simulation software (Anylogic), this paper designs an intelligent passenger organization system (IPOS) for metro stations. Its primary functions include passenger information acquisition, data processing and computing, visualization management, decision recommendations, and decision response based on interlocking equipment. For this purpose, the logical structure and the intelligent algorithms employed are devised in detail. In addition, the structure diagram of the information acquisition and application modules, the application of Anylogic, and the functional process of the case library are all given by this research. Based on the secondary development of Anylogic and existing technologies such as video recognition, the IPOS is expected to improve the response speed and handling capacity of metro stations in the face of sudden passenger flows.

Keywords: anylogic software, decision-making support system, intellectualization, ISCS, passenger organization

Procedia PDF Downloads 156
673 Automotive Emotions: An Investigation of Their Natures, Frequencies of Occurrence and Causes

Authors: Marlene Weber, Joseph Giacomin, Alessio Malizia, Lee Skrypchuk, Voula Gkatzidou

Abstract:

Technological and sociological developments in the automotive sector are shifting the focus of design towards developing a better understanding of driver needs, desires and emotions. Human centred design methods are being more frequently applied to automotive research, including the use of systems to detect human emotions in real-time. One method for non-contact measurement of emotion with low intrusiveness is Facial-Expression Analysis (FEA). This paper describes a research study investigating the emotional responses of 22 participants in a naturalistic driving environment by applying a multi-method approach. The research explored the possibility of investigating emotional responses and their frequencies during naturalistic driving through real-time FEA. Observational analysis was conducted to assign causes to the collected emotional responses. In total, 730 emotional responses were measured in the collective study time of 440 minutes. Causes were assigned to 92% of the measured emotional responses. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment, through which systems and factors causing positive and negative emotional effects can be identified.

Keywords: affective computing, case study, emotion recognition, human computer interaction

Procedia PDF Downloads 183
672 Liaison Psychiatry in Baixo Alentejo, Portugal: Reality and Perspectives

Authors: Mariana Mangas, Yaroslava Martins, M. Suárez, Célia Santos, Ana Matos Pires

Abstract:

Baixo Alentejo is a region of Portugal characterized by an aging population, geographic isolation, social deprivation and a lack of medical staff. It is one of the most problematic regions with regard to mental health, particularly due to the factors mentioned. The aim of this study is to present liaison psychiatry at Hospital José Joaquim Fernandes (a sample of the work done, the current situation and future perspectives) through a retrospective study of internal psychiatric emergencies from January 1st, 2016 to August 31st, 2016. Liaison psychiatry of the Department of Psychiatry and Mental Health (Psychiatry Service) of ULSBA includes the following activities: internal psychiatry emergencies, HIV consultation (comprised in the general consultation) and liaison psychology (oncology and pain), comprising a total of 111 internal psychiatry emergencies during the identified period. Gender distribution was uniform. The most prevalent age group was 71-80 years, and 66.6% of patients were 60 years old and over. The majority of the emergency observations were requested by the hospital services of medicine (56.8%) and surgery (24.3%). The most frequent reasons for admission were: respiratory disease (18.0%); tumors (15.3%); other surgical and orthopedic pathology (14.5%) and stroke (11.7%). The most frequent psychiatric diagnoses were: neurotic and organic depression (24.3%); delirium (26.1%) and adjustment reaction (14.5%). Major psychiatric pathology (schizophrenia and affective disorders) was found in 10.8%. Antidepressive medication was prescribed to 37.8% of patients; antipsychotics to 34.2%. In 9.9% of the cases, no psychotropic drug was prescribed, and 5.4% of patients received psychological support. Regarding hospital discharge, 42.4% of patients were referred to the general practitioner or to a medical specialist; 22.5% to outpatient gerontopsychiatry; 17.1% to the psychiatric outpatient clinic; and 14.4% deceased. A future perspective is to start liaison work in the areas of HIV and psycho-oncology in a multidisciplinary approach and to improve collaboration with colleagues of other specialties for refining psychiatric referrals.

Keywords: psychiatry, liaison, internal emergency, psychiatric referral

Procedia PDF Downloads 231
671 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of the MMSE-2:EV scores resulting from application of the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, in clinical settings but also in the field of neurocognitive research. Now, practitioners and researchers are turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer’s disease, vascular dementia, mixed dementia, Parkinson’s disease, and conversion of mild cognitive impairment into Alzheimer’s disease. The MMSE-2:EV version was used: it was applied one month after the initial assessment, three months after the first reevaluation and then every six months, alternating the blue and red forms. Correlated with age and educational level, the raw scores were converted into T scores and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between the evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach for the evaluation of cognitive impairment with the MMSE-2:EV is safe and the application interval is optimal. The alternation of the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of the national test norms. In clinical settings with a large flux of patients, the application of the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can make objective decisions, and for the patients the procedure does not take too much time and energy, does not bother them and does not force them to travel frequently.
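
As a small illustration of the score conversions mentioned above, the following sketch assumes the usual convention that T scores have a mean of 50 and a standard deviation of 10; the MMSE-2:EV norm tables themselves are not reproduced.

```python
# Minimal sketch of the score conversions mentioned above, assuming the usual
# convention that T scores have a mean of 50 and a standard deviation of 10
# (the MMSE-2:EV norm tables themselves are not reproduced here).
def t_to_z(t_score, mean=50.0, sd=10.0):
    return (t_score - mean) / sd

def z_to_t(z_score, mean=50.0, sd=10.0):
    return mean + sd * z_score

print(t_to_z(35))   # a T score of 35 corresponds to z = -1.5
```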

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 496
670 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology to model many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties, such as irregularity and poor locality, that make their performance different from that of regular applications. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed the factors that affect performance in order to identify possible changes that would improve it. Results are discussed in relation to which factors contribute to performance degradation.
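
To make the benchmark's core workload concrete, here is an illustrative serial sketch (not the Graph500 reference code) of a breadth-first search and the TEPS (traversed edges per second) figure of merit that Graph500 reports:

```python
# Illustrative serial sketch (not the Graph500 reference code): a breadth-first
# search and the TEPS (traversed edges per second) figure of merit.
import time
from collections import deque

def bfs(adj, source):
    """adj: dict mapping a vertex to a list of neighbours."""
    parent = {source: source}
    queue = deque([source])
    edges_traversed = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            edges_traversed += 1
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent, edges_traversed

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # tiny example graph
start = time.perf_counter()
_, m = bfs(adj, 0)
print("TEPS:", m / (time.perf_counter() - start))
```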

Keywords: graph computation, graph500 benchmark, parallel architectures, parallel programming, workload characterization

Procedia PDF Downloads 127
669 Modification of Hexagonal Boron Nitride Induced by Focused Laser Beam

Authors: I. Wlasny, Z. Klusek, A. Wysmolek

Abstract:

Hexagonal boron nitride is a representative of the widely popular class of two-dimensional van der Waals materials. It finds its uses, among others, in the construction of complex layered heterostructures. Hexagonal boron nitride attracts great interest because of its properties, characteristic of wide-gap semiconductors, as well as its ultra-flat surface. Van der Waals heterostructures composed of two-dimensional layered materials, such as transition metal dichalcogenides or graphene, give hope for the miniaturization of various electronic and optoelectronic elements. In our presentation, we will show the results of our investigations of a not previously reported modification of hexagonal boron nitride layers with a focused laser beam. The electrostatic force microscopy (EFM) images reveal that the irradiation leads to changes of the local electric fields for a wide range of laser wavelengths (from 442 to 785 nm). These changes are also accompanied by alterations of the crystallographic structure of the material, as reflected by Raman spectra. They exhibit high stability and remain visible after at least five months. This behavior can be explained in terms of photoionization of the defect centers in h-BN, which influences non-uniform electrostatic field screening by the photo-excited charge carriers. The analyzed changes influence the local defect structure, and thus the interatomic distances within the lattice. These effects can be amplified by the piezoelectric character of hexagonal boron nitride, similar to that found in other nitrides (e.g., GaN, AlN). Our results shed new light on the optical properties of hexagonal boron nitride, in particular those associated with electron-phonon coupling. Our study also opens new possibilities for h-BN applications in layered heterostructures, where electrostatic fields can be used in tailoring the local properties of the structures for use in micro- and nanoelectronics or field-controlled memory storage. This work is supported by a National Science Centre project granted on the basis of decision number DEC-2015/16/S/ST3/00451.

Keywords: atomic force microscopy, hexagonal boron nitride, optical properties, Raman spectroscopy

Procedia PDF Downloads 154
668 Synthesis of TiO₂/Graphene Nanocomposites with Excellent Visible-Light Photocatalytic Activity Based on Chemical Exfoliation Method

Authors: Nhan N. T. Ton, Anh T. N. Dao, Kouichirou Katou, Toshiaki Taniike

Abstract:

Facile electron-hole recombination and a broad band gap are two major drawbacks of titanium dioxide (TiO₂) when applied in visible-light photocatalysis. Hybridization of TiO₂ with graphene is a promising strategy to lessen these pitfalls. Recently, there have been many reports on the synthesis of TiO₂/graphene nanocomposites, in most of which graphene oxide (GO) was used as a starting material. However, the reduction of GO introduces a large number of defects into the graphene framework. In addition, the sensitivity of titanium alkoxide to water (which GO usually contains) significantly obstructs the uniform and controlled growth of TiO₂ on graphene. Here, we demonstrate a novel technique to synthesize TiO₂/graphene nanocomposites without the use of GO. A graphene dispersion was obtained through the chemical exfoliation of graphite in titanium tetra-n-butoxide with the aid of ultrasonication. The dispersion was directly used for the sol-gel reaction in the presence of different catalysts. A TiO₂/reduced graphene oxide (TiO₂/rGO) nanocomposite, which was prepared by a solvothermal method from GO, and the commercial TiO₂-P25 were used as references. It was found that the titanium alkoxide afforded a graphene dispersion of high quality in terms of a trace amount of defects and a few layers of dispersed graphene. Moreover, the sol-gel reaction from this dispersion led to TiO₂/graphene nanocomposites featuring promising characteristics for visible-light photocatalysts, including: (I) the formation of a TiO₂ nano layer (thickness ranging from 1 nm to 5 nm) that uniformly and thinly covered the graphene sheets, (II) a trace amount of defects on the graphene framework (low ID/IG ratio: 0.21), (III) a significant extension of the absorption edge into the visible light region (a remarkable extension of the absorption edge to 578 nm besides the usual edge at 360 nm), and (IV) a dramatic suppression of electron-hole recombination (the lowest photoluminescence intensity compared to the reference samples). These advantages were successfully demonstrated in the photocatalytic decomposition of methylene blue under visible light irradiation. The TiO₂/graphene nanocomposites exhibited 15 and 5 times higher activity than TiO₂-P25 and the TiO₂/rGO nanocomposite, respectively.

Keywords: chemical exfoliation, photocatalyst, TiO₂/graphene, sol-gel reaction

Procedia PDF Downloads 142
667 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly related to traceable metrological tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements to profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and on the acquisition parameters, introducing apparent, unwanted non-uniform volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We consider the current results to be of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing analysis with X-ray dimensional exploration in precision measurements with industrial tomography.
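
As a back-of-the-envelope illustration of the heating effect described above (not a result from the study), the linear thermal expansion relation ΔL = αLΔT gives the order of magnitude of the apparent dimensional error; the expansion coefficient below is an assumed textbook value for stainless steel.

```python
# Back-of-the-envelope sketch (assumed textbook expansion coefficient, not a
# result from the study): apparent dimensional change dL = alpha * L * dT.
ALPHA_STEEL = 16e-6   # 1/K, typical of austenitic stainless steel (assumption)

def thermal_expansion_um(length_mm, delta_T_K, alpha=ALPHA_STEEL):
    return alpha * length_mm * delta_T_K * 1000.0   # result in micrometres

# e.g. a 50 mm long cylinder warming by 5 K during the scan grows by ~4 um
print(thermal_expansion_um(50.0, 5.0))
```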

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 103
666 An Analysis of Uncoupled Designs in Chicken Egg

Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi

Abstract:

Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature’s mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovation in a variety of fields such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge if a design is good. This paper analyzes design aspects of one of nature’s amazing objects: the chicken egg. The functional requirements (FRs) of the components of the object are tabulated and mapped onto nature-chosen design parameters (DPs). The ‘independence axiom’ of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg’s design is good (i.e., uncoupled) or bad (i.e., coupled). The analysis revealed that the egg’s design is a good, i.e., uncoupled, design. This approach can be applied to any of nature’s artifacts to judge whether their design is good or bad. This methodology is valuable for biomimicry studies. This approach can also be very useful for teaching design considerations in biology and bio-inspired innovation.
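
The independence-axiom check described above reduces to inspecting the structure of the FR-DP design matrix; a minimal sketch, assuming the usual encoding where a non-zero entry A[i][j] means FR_i depends on DP_j, could be:

```python
# Minimal sketch of the independence-axiom check, assuming the usual encoding in
# which a non-zero entry A[i][j] means functional requirement FR_i depends on
# design parameter DP_j.
import numpy as np

def classify_design(A):
    A = np.asarray(A) != 0
    off_diag = A & ~np.eye(A.shape[0], dtype=bool)
    if not off_diag.any():
        return "uncoupled"   # diagonal design matrix: each FR is set by exactly one DP
    if not np.triu(off_diag, 1).any() or not np.tril(off_diag, -1).any():
        return "decoupled"   # triangular matrix: FRs can be satisfied in sequence
    return "coupled"

print(classify_design([[1, 0], [0, 1]]))   # -> "uncoupled"
```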

Keywords: uncoupled design, axiomatic design, nature design, design evaluation

Procedia PDF Downloads 155
665 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance

Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan

Abstract:

A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of Recurrent Neural Network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small amounts of training videos (70 instances). Important features were derived and analyzed to further understand the success of the computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments in a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, one which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
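
For orientation, a minimal LSTM classifier of the kind described above could be sketched as follows in Keras; the input shape, layer sizes and landmark-based features are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch in Keras: a minimal LSTM classifier of the kind described above.
# Input shape, layer sizes and the landmark-based features are assumptions for
# illustration, not the authors' configuration.
import tensorflow as tf

def build_smile_classifier(n_frames=60, n_features=136):
    # n_features could be, e.g., flattened (x, y) facial-landmark coordinates per frame
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_frames, n_features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # spontaneous vs. posed
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```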

Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection

Procedia PDF Downloads 109
664 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario

Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad

Abstract:

One of the major challenges of today’s era is peak demand, which causes stress on the transmission lines, raises the cost of energy generation and ultimately leads to higher electricity bills for end users; it used to be handled by supply side management. However, nowadays this approach is giving way to demand side management (DSM), which has economic and environmental advantages. DSM of domestic load can play a vital role in reducing the peak load demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) in reducing peak load demands and the electricity bills of end users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results from optimization algorithms. In GAMS software, the MILP (mixed integer linear programming) algorithm is used for optimization. In different cases, different constraints are used for optimization, considering the comforts, needs and priorities of the end users. Results are compared, and the savings in electricity bills are discussed considering real-time pricing and fixed tariff pricing, which demonstrates the potential of demand side management to reduce electricity bills and peak loads. It is seen that using a real-time pricing tariff instead of fixed tariff pricing helps to save on electricity bills. Moreover, the simulation results of the proposed energy management system show that the achieved power savings are substantial. It is anticipated that the results of this research will prove highly useful to utility companies as well as for the improvement of domestic DR.
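
As a minimal sketch of the optimization idea (an assumed, simplified MILP written with PuLP for illustration, not the GAMS model used in the study), a single controllable appliance can be scheduled against a real-time tariff as follows:

```python
# Minimal sketch of the scheduling idea (an assumed, simplified MILP written with
# PuLP for illustration; the study's actual model is implemented in GAMS).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

def schedule_appliance(tariff, power_kw, required_hours):
    """Choose on/off status per hourly slot to minimise cost under a real-time tariff."""
    slots = range(len(tariff))
    prob = LpProblem("appliance_schedule", LpMinimize)
    on = [LpVariable(f"on_{t}", cat=LpBinary) for t in slots]
    prob += lpSum(tariff[t] * power_kw * on[t] for t in slots)   # energy cost objective
    prob += lpSum(on) == required_hours                          # comfort/need constraint
    prob.solve()
    return [int(value(v)) for v in on], value(prob.objective)

# e.g. run a 2 kW appliance for 3 of 6 hours under a hypothetical tariff (currency/kWh)
print(schedule_appliance([0.30, 0.10, 0.25, 0.08, 0.40, 0.12], power_kw=2.0, required_hours=3))
```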

Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)

Procedia PDF Downloads 288
663 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow

Authors: Itissam Abuiziah

Abstract:

In several hydraulic systems, it is necessary to calculate the head losses, which depend on the friction factor of flow resistance in the Darcy equation. Computing the friction factor is based on the implicit Colebrook-White equation, which is considered the standard for friction calculation, but it carries a high computational cost; therefore, several explicit approximation methods are used to solve the implicit equation and overcome this issue. The relative error is then used to determine the most accurate among the approximation methods used. Steel, cast iron and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results obtained show that the method best suited to some cases may not be accurate for others. For example, when using steel pipe materials, Zigrang and Silvester's method proved the most precise at low velocities of 0.6 m/s to 1.3 m/s. Comparatively, the Haaland method showed a lower relative error as the velocity gradually increased. Accordingly, the simulation results of this study may be employed by hydraulic engineers so that they can decide which method is most applicable according to their practical pipe system expectations.
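
To illustrate the comparison, a minimal sketch (illustrative pipe values only, not the study's data) solves the implicit Colebrook-White equation by fixed-point iteration, evaluates the explicit Haaland approximation, and reports the relative error:

```python
# Minimal sketch of the comparison described above: the implicit Colebrook-White
# equation solved by fixed-point iteration, the explicit Haaland approximation,
# and the relative error between them. Pipe data are illustrative values only.
import math

def colebrook_white(Re, rel_roughness, tol=1e-12):
    f = 0.02                                   # initial guess for the friction factor
    for _ in range(100):
        inv_sqrt_f = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (Re * math.sqrt(f)))
        f_new = 1.0 / inv_sqrt_f ** 2
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

def haaland(Re, rel_roughness):
    return 1.0 / (-1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / Re)) ** 2

Re, eps_over_D = 2.0e5, 1e-4                   # illustrative Reynolds number and e/D
f_ref, f_apx = colebrook_white(Re, eps_over_D), haaland(Re, eps_over_D)
print("relative error (%):", abs(f_apx - f_ref) / f_ref * 100)
```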

Keywords: Colebrook–White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers

Procedia PDF Downloads 167
662 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart Cities technologies are trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, but more and more services will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT. The combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In the past decades, much effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines. It was also named the UK's smartest city in 2017.

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 163
661 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves

Authors: Ramzi Suleiman

Abstract:

Recently, the author proposed an observationally-based relativity theory termed information relativity theory (IRT). The theory is simple and is based only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory's transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. It was shown that the theory's transformations were successful in predicting several important phenomena in particle physics, quantum physics, and cosmology. This paper extends the theory's transformations to the cases of rotating disks and spheres. The resulting transformations for a rotating disk are utilized to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter and of the dark matter fraction could be obtained from one measurable scale radius. A test of the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions. The rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result suggests an intriguing physical explanation of gravity in galaxies, according to which it is the proximal drag of the stars and gas in the galaxy by its rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes. This study also hints at the potential of the discovered matter-dark matter duality in fixing the standard model of elementary particles in a natural manner, without the need for hypothesizing about supersymmetric particles.

Keywords: dark matter, galaxies rotation curves, SPARC, rotating disk

Procedia PDF Downloads 57
660 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
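
Since the abstract describes training on generated sample paths, the following hedged sketch shows one straightforward (if slow) way to simulate fractional Ornstein-Uhlenbeck paths via the exact fractional Gaussian noise covariance and an Euler scheme; it is not the authors' fast generator, and all parameter values are illustrative.

```python
# Hedged sketch (illustrative only, and deliberately simple rather than fast):
# fractional Gaussian noise generated from its exact covariance via Cholesky,
# then an Euler scheme for a fractional Ornstein-Uhlenbeck path. The authors'
# fast sample generator is not reproduced; all parameter values are placeholders.
import numpy as np

def fgn(n, hurst, dt):
    """n fractional Gaussian noise increments with Hurst exponent `hurst` on step dt."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = np.array([[gamma[abs(i - j)] for j in range(n)] for i in range(n)])
    return np.linalg.cholesky(cov) @ np.random.standard_normal(n)

def fou_path(n, hurst, kappa, theta, sigma, dt=0.01, x0=0.0):
    """Euler discretisation of dX = kappa*(theta - X) dt + sigma dB^H."""
    dbh = sigma * fgn(n, hurst, dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + kappa * (theta - x[i]) * dt + dbh[i]
    return x

path = fou_path(n=500, hurst=0.1, kappa=2.0, theta=0.0, sigma=0.5)  # a "rough" path, H < 0.5
```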

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 91
659 Advanced Electron Microscopy Study of Fission Products in a TRISO Coated Particle Neutron Irradiated to 3.6 × 10²¹ n/cm² Fast Fluence at 1040 °C

Authors: Haiming Wen, Isabella J. Van Rooyen

Abstract:

Tristructural isotropic (TRISO)-coated fuel particles are designed as nuclear fuel for high-temperature gas reactors. The TRISO coating consists of layers of carbon buffer, inner pyrolytic carbon (IPyC), SiC, and outer pyrolytic carbon. The TRISO coating, especially the SiC layer, acts as a containment system for fission products produced in the kernel. However, release of certain metallic fission products across intact TRISO coatings has been observed for decades. Despite numerous studies, the mechanisms by which fission products migrate across the coating layers remain poorly understood. In this study, scanning transmission electron microscopy (STEM), energy dispersive X-ray spectroscopy (EDS), high-resolution transmission electron microscopy (HRTEM) and electron energy loss spectroscopy (EELS) were used to examine the distribution, composition and structure of fission products in a TRISO coated particle neutron irradiated to 3.6 × 10²¹ n/cm² fast fluence at 1040 °C. Precession electron diffraction was used to investigate the character of grain boundaries where specific fission product precipitates are located. The retention fraction of ¹¹⁰ᵐAg in the investigated TRISO particle was estimated to be 0.19. A high density of nanoscale fission product precipitates was observed in the SiC layer close to the SiC-IPyC interface, most of which are rich in Pd, while Ag was not identified. Some Pd-rich precipitates contain U. The precipitates tend to have complex structure and composition. Although a precipitate may appear to have uniform contrast in STEM, EDS indicated that there may be composition variations throughout the precipitate, and HRTEM suggested that the precipitate may have several parts differing in crystal structure or orientation. Attempts were made to measure the charge states of precipitates using EELS and to study their possible effect on precipitate transport.

Keywords: TRISO particle, fission product, nuclear fuel, electron microscopy, neutron irradiation

Procedia PDF Downloads 243
658 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has boosted a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety and health hazards to the citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for the transit planning authorities is to design a transit system for cities that may attract people to switch over from their existing, and rather convenient, mode of travel to the transit system under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the bus-based public transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars and auto rickshaws with respect to bus transit using SPSS. Estimation of the shift to bus transit indicates that on average 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
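
A sketch of the multinomial logit structure referred to above is shown below; the systematic utilities are hypothetical placeholders, not the calibrated SPSS coefficients from the study.

```python
# Sketch of the multinomial logit structure referred to above; the systematic
# utilities are hypothetical placeholders, not the calibrated SPSS coefficients.
import math

def mnl_probabilities(utilities):
    """Choice probabilities P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = {mode: math.exp(v) for mode, v in utilities.items()}
    total = sum(exps.values())
    return {mode: e / total for mode, e in exps.items()}

# hypothetical systematic utilities for one traveller
print(mnl_probabilities({"two_wheeler": 0.8, "car": 0.2, "auto_rickshaw": 0.1, "bus": -0.3}))
```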

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 244
657 Effect of Gravity on the Controlled Cooling of a Steel Block by Impinging Water Jets

Authors: E.K.K. Agyeman, P. Mousseau, A. Sarda, D. Edelin

Abstract:

The uniform and controlled cooling of hot metals by the circulation of water in canals remains a challenge due to the phase change of the water and the high heat fluxes associated with the phase change. This is because, during the cooling process, the phases are not uniformly distributed along the canals, with the liquid phase dominating at the entrances of the canals and the gaseous phase dominating towards the exits. The difference in thermal properties between the two phases leads to a heterogeneous temperature distribution in the part being cooled. Slowing down the cooling process is also a challenge due to the high heat fluxes associated with the phase change of water. This study investigates the use of multiple water jets for the controlled and homogeneous cooling of hot metal parts and the effect of gravity on the effectiveness of the cooling process, with a potential application in the cooling of composite forming moulds. A hole is bored at the centre of a steel block along its length. The jets are generated from the holes of a perforated steel pipe which is placed along the centre of the hole bored in the steel block. The evolution of the temperature with respect to time on the external surface of the steel block is measured simultaneously by thermocouples and an infrared camera. Different jet positions are tested in order to identify the jet placement configuration that ensures the most homogeneous cooling of the block, while the cooling speed is controlled by an intermittent impingement of the jets. In order to study the effect of gravity on the cooling process, a scenario where the jets are oriented in the opposite direction to gravity is compared to one where the jets are aligned in the same direction as gravity. It is observed that orienting the jets in the direction of gravity reduces the effectiveness of the cooling process on the face of the block facing the impinging jets. This is due to the formation of a deeper pool of water caused by the effect of gravity and the curved surface of the canal. This deeper pool of water influences the boiling regime, which is characterized by slower bubble evacuation compared to the scenario where the jets oppose gravity.

Keywords: cooling speed, gravity, homogenous cooling, jet impingement

Procedia PDF Downloads 109
656 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is an increasing need to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed into the system in particulate form and dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up over the particle surface. In a subsequent step, the charged particles are carried to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions which differ in their respective protein content: one is a fiber-rich, low-protein fraction, while the other is a high-protein, low-fiber fraction. Prior to testing, the materials undergo a milling process, and some samples are stored under controlled humidity conditions; in this way, the influence of both particle size and moisture content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectrostatic separation process was successfully scaled up to a mid-scale belt separator, increasing the mass feed rate from g/s to kg/h. Triboelectrostatic separation technology opens up huge potential for the exploitation of so far underutilized alternative protein sources. Agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 172
655 Studies on Organic and Inorganic Micro/Nano Particle Reinforced Epoxy Composites

Authors: Daniel Karthik, Vijay Baheti, Jiri Militky, Sundaramurthy Palanisamy

Abstract:

Fibre-based nanoparticles are presently considered one of the potential filler materials for the improvement of the mechanical and physical properties of polymer composites. Due to the high matrix-filler interfacial area, a uniform and homogeneous dispersion of nanoparticles can be achieved. In micro/nano filler reinforced composites, the resin is usually tailored with organic or inorganic nanoparticles to obtain improved matrix properties. The objective of this study was to compare the reinforcement potential of different organic and inorganic micro/nano fillers in epoxy composites. Industrial and agricultural waste materials such as Agave americana, cornhusk, jute, basalt, carbon and glass fibres, as well as fly ash, were utilized to prepare micro/nano particles. The micro/nano particles were obtained using a high-energy planetary ball milling process in dry condition. Milling time and ball size were kept constant throughout the ball milling process. Composites were fabricated by the hand lay-up method, with filler loading kept constant at 3 wt.% for all composites. Dynamic mechanical properties of the nanocomposite films were measured in three-point bending mode with a gauge length and sample width of 50 mm and 10 mm, respectively. The samples were subjected to oscillating frequencies of 1 Hz, 5 Hz and 10 Hz at 100% oscillating amplitude over the temperature range of 30°C to 150°C at a heating rate of 3°C/min. Damping was found to be higher with the jute composites. Amongst the organic fillers, the lowest damping factor was observed with Agave americana particles, which means that Agave americana fibre particles have better interfacial adhesion with the epoxy resin. Basalt, fly ash and glass particles have almost similar damping factors, confirming good interfacial adhesion with the epoxy.
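For context, the damping factor reported from dynamic mechanical analysis is the loss tangent, i.e. the ratio of the loss modulus to the storage modulus,

\[
\tan\delta = \frac{E''}{E'},
\]

where \(E''\) is the loss modulus and \(E'\) the storage modulus; a lower value indicates less energy dissipation and is commonly read as a sign of better filler-matrix adhesion.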

Keywords: ball milling, damping factor, matrix-filler interface, particle reinforcements

Procedia PDF Downloads 255
654 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when product demand is stochastic and confined to a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright's Learning Curve. Most of the assumptions of the classical Newsvendor Model are still maintained in our work, such as the constant per-unit costs of leftover and shortage, the zero initial inventory, as well as continuous time. Our problem is challenging in that the best order quantity of the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost saving from worker learning is added to the expected total cost, the convexity of the cost function is likely not maintained. This calls for a new way of determining the optimal order policy. In response to such challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform distribution; if the demand follows a Beta distribution with certain specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic-review systems for similar problems.
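A minimal numerical sketch of the cost trade-off described above is given below, assuming Uniform demand, a Wright-type unit processing cost, and Monte-Carlo evaluation of the overage and shortage costs; all parameter values and function names are illustrative assumptions, not the paper's formulation or results. Because convexity of the expected cost is not guaranteed once learning is included, the sketch uses a plain grid search over candidate lot sizes rather than a first-order condition alone.

```python
import numpy as np

# --- Illustrative parameters (assumptions, not from the paper) ---
c1 = 10.0            # cost of processing the first unit
learn_exp = 0.32     # Wright's learning exponent: unit n costs c1 * n**(-learn_exp)
c_over = 2.0         # per-unit cost of leftover inventory
c_under = 8.0        # per-unit lost-sales (shortage) cost
d_lo, d_hi = 50, 150 # demand ~ Uniform(d_lo, d_hi)

def production_cost(q):
    """Cumulative processing cost of q units under Wright's learning curve."""
    units = np.arange(1, q + 1)
    return float(np.sum(c1 * units ** (-learn_exp)))

def expected_mismatch_cost(q, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of expected overage + shortage cost for lot size q."""
    rng = np.random.default_rng(seed)
    demand = rng.uniform(d_lo, d_hi, n_samples)
    overage = np.maximum(q - demand, 0.0)
    underage = np.maximum(demand - q, 0.0)
    return c_over * overage.mean() + c_under * underage.mean()

def expected_total_cost(q):
    return production_cost(q) + expected_mismatch_cost(q)

# Grid search over candidate lot sizes: with learning, the cost curve need not
# be convex, so an exhaustive search is safer than a local method.
costs = {q: expected_total_cost(q) for q in range(d_lo, d_hi + 1)}
q_star = min(costs, key=costs.get)
print(f"approximate optimal lot size: {q_star}, expected cost: {costs[q_star]:.2f}")
```

In this toy setting the marginal processing cost falls with each additional unit, which is precisely the effect that can break convexity and move the optimum away from the classical critical-fractile quantity.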

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 398