Search results for: time domain analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40028

36068 Integrating Nursing Informatics to Improve Patient-Centered Care: A Project to Reduce Patient Waiting Time at the Blood Pressure Counter

Authors: Pi-Chi Wu, Tsui-Ping Chu, Hsiu-Hung Wang

Abstract:

Background: The ability to provide immediate medical service in outpatient departments is one of the keys to patient satisfaction. Objectives: This project used electronic equipment to integrate nursing care information into patient care at a blood pressure diagnostic counter. Through process reengineering, the average patient waiting time decreased from 35 minutes to 5 minutes, while service satisfaction increased from a score of 2.7 to 4.6. Methods: Data were collected from a local hospital in Southern Taiwan with a daily average of 2,200 patients in the outpatient department. Previous waiting times were affected by (1) space limitations, (2) the need to help guide patient mobility, (3) the need for nurses to appease irate patients and give instructions, (4) the need for patients to replace lost counter tickets, (5) the need to re-enter information, and (6) the replacement of missing patient information. An ad hoc group was established to enhance patient satisfaction and shorten waiting times for patients to see a doctor. A four-step strategy consisting of (1) counter relocation, (2) queue reorganization, (3) electronic information integration, and (4) process reengineering was implemented. Results: Implementation of the developed strategy decreased patient waiting time from 35 minutes to an average of 5 minutes and increased patient satisfaction scores from 2.7 to 4.6. Conclusion: Through the integration of information technology and process transformation, waiting times were drastically reduced, patient satisfaction increased, and nurses were given more time to engage in more cost-effective services. This strategy was simultaneously enacted in other hospitals throughout Taiwan.

Keywords: process reengineering, electronic information integration, patient satisfaction, patient waiting time

Procedia PDF Downloads 365
36067 An Innovative Non-Invasive Method To Improve The Stability Of Orthodontic Implants: A Pilot Study

Authors: Dr., Suchita Daokar

Abstract:

Background: Successful orthodontic treatment has always relied on anchorage. The stability of implants depends on bone quantity, mini-implant design, and placement conditions. Among the various methods of improving stability, platelet concentrates are gaining popularity for several reasons. PRF is a minimally invasive method, and various studies have shown its role in enhancing the stability of general implants. However, no literature was found regarding the effect of PRF in enhancing the stability of orthodontic implants. Therefore, this study aimed to evaluate and assess the efficacy of PRF on the stability of orthodontic implants. Methods: The study comprised 9 subjects aged above 18 years. A split-mouth technique was used: in group A, implants were coated with PRF before insertion, while in group B, implants were inserted normally. The stability of the implants was measured using resonance frequency analysis at insertion (T0) and at 24 hours (T1), 2 weeks (T2), 4 weeks (T3), 6 weeks (T4), and 8 weeks (T5) after insertion. Result: Statistically significant differences were found when group A was compared to group B using the ANOVA test (p<0.05). The stability of the implants in group A at each time interval was greater than in group B. Implant stability was high at T0, reduced at T2, and increased through T3 to T5, with the highest stability at T5. Conclusion: A chairside, minimally invasive procedure of PRF coating on implants has shown promising results in improving the stability of orthodontic implants, providing scope for future studies.

Keywords: orthodontic implants, stability, resonance frequency analysis, pre

Procedia PDF Downloads 190
36066 Paradigms of Assessment, Valuation and Quantification to Trade Ecosystem Services: A Review Focusing on Mangroves and Wetlands

Authors: Rama Seth, Luise Noring, Pratim Majumdar

Abstract:

Based on an extensive literature review, this paper presents distinct approaches to value, quantify and trade ecosystem services, with particular emphasis on services provided by mangroves and wetlands. Building on diverse monetary and market-based systems for the improved allocation of natural resources, such trading and exchange-based methods can help tackle the degradation of ecosystem services in a more targeted and structured manner than achievable with stand-alone policy and administrative regulations. Using various threads of literature, the paper proposes a platform that serves as the skeletal foundation for developing an efficient global market for ecosystem services trading. The paper bridges a significant research and practice gap by recommending how to establish an equilibrium in the biosphere via trading mechanisms while also discovering other research gaps and future research potential in the domain of ecosystem valuation.

Keywords: environment, economics, mangroves, wetlands, markets, ESG, global capital, climate investments, valuation, ecosystem services

Procedia PDF Downloads 230
36065 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in FEM models based on the displacement technique is therefore a matter of concern, particularly for computational time in the shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, but some points have also been located on the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points in the element. The main aim, therefore, is to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify this with results from numerical experimentation. The results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at points a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation.
For the numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded quadratic Lagrangian elements, which were considered as the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. After numerical verification, it was noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for stresses. It was also noted that stresses calculated within the boundary line segment enclosed by the 0.577 points about the midpoints are also very good, with very little error. When sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points on the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
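The 0.577 figure reported above can be recognized as 1/√3 ≈ 0.57735, the abscissa of 2-point Gauss-Legendre quadrature; these are the classical superconvergent stress-sampling (Barlow) points for quadratic elements. A short numerical check of this identification, using NumPy rather than any code from the paper:

```python
import numpy as np

# The 2-point Gauss-Legendre abscissae are +/-1/sqrt(3) ~= +/-0.57735,
# matching the "0.577 from the origin" optimal stress locations above.
points, weights = np.polynomial.legendre.leggauss(2)
print(points)  # approximately [-0.57735, 0.57735]

# In the 2D 9-noded element, the tensor product of these abscissae gives
# four interior sampling points; the origin (0, 0) is reported as well.
interior = [(x, y) for x in points for y in points]
```

This connection explains why sampling stresses at these points, rather than at the nodes, yields higher accuracy.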

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 95
36064 The Obstacles of Applying Electronic Administration at the University of Tabuk from Its Academic Leaders' Perspectives

Authors: Saud Eid Alanazi

Abstract:

The study aimed at identifying the obstacles to applying Electronic Administration (e-administration), which refers to any of a number of mechanisms that convert traditional paper-based office processes into electronic processes, with the goal of creating a paperless office and improving productivity and performance, at the University of Tabuk from its academic leaders' perspectives. The sample of the study consisted of 98 members, including deans, vice deans, and heads of departments of different specializations, genders, and positions. To achieve the aim of the study, a questionnaire was developed comprising 45 items distributed across three domains (administrative, human, and technical obstacles). Using appropriate statistical methods to analyze the data, the results indicated that the administrative obstacles domain ranked first with a high degree, while the human and technical obstacles ranked second with a moderate degree. The study also showed no statistically significant differences attributable to the members' variables (specialization, gender, and position).

Keywords: administration, electronic administration, obstacles, technology, universities

Procedia PDF Downloads 370
36063 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance of embryos. Using morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n=557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value to obtain two groups in which the difference in blastocyst formation rate according to the cut-off value was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage.
The synthetize value corresponds to the value calculated at a time value of 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as the blastocyst formation rate below the cut-off value versus the blastocyst formation rate above the cut-off value. For regression coefficient ‘a’, the optimal cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); for regression coefficient ‘b’, 0.26 (83.9% versus 63.1%, p < 0.001); for regression coefficient ‘c’, -4.4 (62.2% versus 83.1%, p < 0.001); and for the synthetize value, 8.89 (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. Regression coefficient ‘a’ represents the acceleration of cell division, and regression coefficient ‘b’ represents the speed of cell division. We hypothesize that regression coefficient ‘c’ could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
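As a sketch of the fitting step, the quadratic N = a·T² + b·T + c can be estimated by ordinary least squares from whatever (T, N) observations are available, which is why missing time points do not prevent classification. The morphokinetic record below is hypothetical, and the study itself used Excel/VBA rather than Python:

```python
import numpy as np

# Hypothetical morphokinetic record: (time in hours, observed cell count).
# Missing observations are simply absent; the least-squares quadratic fit
# can still be computed from the points that are available.
times = np.array([0.0, 25.0, 28.0, 36.0, 50.0, 62.0])
cells = np.array([1, 2, 4, 4, 8, 8])

a, b, c = np.polyfit(times, cells, deg=2)  # coefficients of a*T^2 + b*T + c

# The "synthetize value" described above is the fitted curve evaluated
# at T = 99 hours:
synthetize = a * 99 ** 2 + b * 99 + c
```

A cut-off on `synthetize` (8.89 in the study) then splits embryos into low- and high-probability groups for blastocyst formation.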

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 296
36062 A Sequential Approach for Random-Effects Meta-Analysis

Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya

Abstract:

The objective of meta-analysis is to combine results from several independent studies in order to generalize and provide an evidence base for decision making. Recent studies show, however, that the magnitude of effect size estimates reported in many areas of research changes with year of publication, and this can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring effect size estimates in meta-analysis, but they are based on statistical theory applicable to the fixed-effect model (FEM). For the random-effects model (REM), the analysis incorporates the heterogeneity variance, tau-squared, whose estimation creates complications. This paper proposes the use of the Gombay and Serbian (2005) truncated CUSUM-type test with asymptotically valid critical values for sequential monitoring under the REM. Simulation results show that the test does not control the Type I error well and is not recommended. Further work is required to derive an appropriate test in this important area of application.
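For context, a random-effects pooled estimate requires an estimate of the heterogeneity variance τ² mentioned above. A minimal sketch using the standard DerSimonian-Laird estimator follows; the abstract does not specify which τ² estimator is monitored, so this is illustrative only:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled mean
    q = np.sum(w * (effects - theta_fe) ** 2)      # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # truncated at zero
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    return theta_re, tau2
```

A sequential monitoring scheme would recompute such an estimate as each new study arrives, which is where the truncation of τ² at zero complicates the test's distribution theory.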

Keywords: meta-analysis, random-effects model, sequential test, temporal changes in effect sizes

Procedia PDF Downloads 452
36061 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is a two-dimensional (2D) nanoscale allotrope of carbon whose physiochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate in various modern applications such as drug delivery, supercapacitors, and sensors. GO has been used in the photothermal treatment of cancers and of Alzheimer’s disease. The main reason for choosing GO in our work is that it is a surface-active molecule: it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in the basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We used different spectroscopic techniques in our work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of a hydrophilic molecule, 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA), in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared to its photophysics inside reverse micelles in the absence of GO. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare this work with the normal reverse micelle system (i.e., reverse micelles in the absence of GO) using the 7-DCCA molecule.
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima were red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. At lower w₀ values, the solvent relaxation time is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to normal reverse micelles because the presence of GO increases the polarity of the system, and as polarity increases the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 140
36060 Spectral Analysis of Heart Rate Variability for Normal and Preeclamptic Pregnants

Authors: Abdulnasir Hossen, Alaa Barhoum, Deepali Jaju, V. Gowri, L. Al-Kharusi, M. Hassan, K. Al-Hashmi

Abstract:

Preeclampsia is a pregnancy disorder associated with increased blood pressure and an excess amount of protein in the urine. HRV analysis has been used by many researchers to distinguish preeclamptic pregnancy from normal pregnancy. A study to identify preeclamptic pregnancy in Oman was conducted on 40 subjects (20 patients and 20 normal pregnant women) recruited from two hospitals in Oman. Fast Fourier transform (FFT) spectral analysis showed that patients with preeclamptic pregnancy have a reduction in the power of the HF band and an increase in the power of the LF band of HRV compared with subjects with normal pregnancy. The identification accuracy obtained was 80%.
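A hedged sketch of the kind of FFT-based LF/HF computation described above, assuming the conventional band definitions (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) and uniform resampling of the RR-interval series; the exact pipeline used in the study is not specified:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_power(rr_intervals_s, fs_resample=4.0):
    """Estimate LF and HF spectral power from RR intervals (seconds).

    Sketch of the standard approach: resample the irregularly spaced RR
    series onto a uniform time grid, then apply Welch's FFT-based spectral
    estimator and integrate over the conventional LF and HF bands.
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    t = np.cumsum(rr)                                   # beat times
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = np.interp(t_uniform, t, rr)            # uniform resampling
    f, pxx = welch(rr_uniform - rr_uniform.mean(),
                   fs=fs_resample, nperseg=min(256, len(rr_uniform)))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df       # LF band power
    hf = pxx[(f >= 0.15) & (f <= 0.40)].sum() * df      # HF band power
    return lf, hf
```

In the study's terms, a raised LF/HF ratio would then flag the sympathetic-dominant pattern seen in the preeclamptic group.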

Keywords: preeclampsia, pregnancy hypertension, normal pregnancy, FFT, spectral analysis, HRV

Procedia PDF Downloads 543
36059 Enhancing Transfer Path Analysis with In-Situ Component Transfer Path Analysis for Interface Forces Identification

Authors: Raef Cherif, Houssine Bakkali, Wafaa El Khatiri, Yacine Yaddaden

Abstract:

The analysis of how vibrations are transmitted between components is required in many engineering applications. Transfer path analysis (TPA) has been a valuable engineering tool for solving noise, vibration, and harshness (NVH) problems using sub-structuring applications. The most challenging part of a TPA analysis is estimating the equivalent forces at the contact points between the active and passive sides. The in-situ component TPA method calculates these forces by inverting the frequency response functions (FRFs) measured at the passive subsystem, relating the motion at indicator points to forces at the interface. However, matrix inversion can pose problems due to ill-conditioning of the matrices, leading to inaccurate results. This paper establishes a TPA model for an academic system consisting of two plates linked by four springs. A numerical study was performed to improve the identification of the interface forces. Several parameters are studied and discussed, such as the singular value rejection and the number and position of the indicator points chosen and used in the matrix inversion.
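Singular value rejection, as studied here, can be sketched as a truncated pseudo-inverse of the FRF matrix: small singular values, which amplify measurement noise when inverted, are discarded. The tolerance and matrix shapes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def interface_forces(frf_matrix, responses, rel_tol=1e-3):
    """Estimate interface forces u from indicator responses y = H @ u.

    H (n_indicators x n_forces) is the FRF matrix at one frequency line;
    singular values below rel_tol * s_max are rejected before inversion
    to tame ill-conditioning (a simple form of singular value rejection).
    """
    U, s, Vt = np.linalg.svd(frf_matrix, full_matrices=False)
    s_inv = np.where(s > rel_tol * s.max(), 1.0 / s, 0.0)  # reject small SVs
    H_pinv = Vt.conj().T @ np.diag(s_inv) @ U.conj().T     # truncated pinv
    return H_pinv @ responses
```

Choosing more indicator points than interface forces makes the system overdetermined, which, together with the rejection threshold, controls how strongly noise propagates into the estimated forces.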

Keywords: transfer path analysis, matrix inverse method, indicator points, singular value decomposition (SVD)

Procedia PDF Downloads 73
36058 Virtual Reality Experimental Study on Riding Environment Assessment for Cyclists

Authors: Kaori Nakamura, Shun Su, Yusak Sulio, Daisuke Fukuda

Abstract:

Active modes of transportation, such as walking and cycling, are crucial in promoting healthy and sustainable urban environments. Encouraging the use of these modes requires a well-designed road environment that ensures safety and comfort, so understanding what constitutes a safe environment for these users is essential. While previous research mainly focuses on subjective safety or the likelihood of collisions, the real-time experiences of travelers and the dynamic transitions of their discomfort perceptions remain under-analyzed. Post-ride surveys or pre-ride impressions, the typical evaluation methods in past studies, may not accurately capture the immediate reactions and discomforts experienced during rides. Although past experimental studies have also used physiological and behavioral data to evaluate road designs, they compared time-averaged physiological and behavioral data across different designs. This study aims to investigate the effects of the dynamic changes in the riding environment experienced by cyclists on their dynamic physiological and behavioral responses, and to explore how these conditions contribute to cyclists' overall subjective safety and comfort. We conducted an experiment with 24 participants who cycled approximately 500 meters in a virtual reality (VR) environment designed to mimic a typical road environment of Japanese local towns, where lanes for cyclists and cars are adjacent in limited road space. Participants experienced six road designs varying in width, separation type, and bike lane color. We measured physiological data, such as heart rate and skin conductance, and behavioral data, including steering, acceleration, and coordination of the bicycles. Questionnaires eliciting subjective impressions were administered before and after each ride.
The data analysis indicates that wider paths (1.5 m and 2 m width) are preferred over narrower ones (1 m width), enhancing perceived safety and reducing stress, as supported by lower heart rates and skin conductance levels. Designs with clear divisions from car lanes may also enhance perceived safety and reduce stress; the physiological data support these arguments, showing lower heart rates and skin conductance levels in wider, clearly marked paths. Further, a drift-diffusion decision model was applied to reveal whether different road environment designs impact dynamic decision-making processes and physiological attributes. A 1.5 m wide bike lane with a clear division from the car lane showed the highest level of clarity and safety in the decision-making parameters. In contrast, designs without clear separation from car lanes resulted in less favorable decision-making outcomes. These results coincide with previous research indicating a preference for bike lane widths greater than 1.5 m. In conclusion, the analysis using the drift-diffusion decision model showed that ease of decision-making differs slightly from subjective safety perceptions, providing a comprehensive understanding of how different road designs affect users. This study offers a solid foundation for assessing the perceptions of active mode users and highlights the importance of considering both real-time physiological and subjective data in designing road environments that encourage active transportation modes.
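As an illustration of the modeling approach, a drift-diffusion trial accumulates noisy evidence toward one of two decision boundaries; the drift rate, threshold, and labels below are arbitrary illustrative values, not parameters fitted in this study:

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=1e-3, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial; return (decision, reaction_time).

    Evidence x starts at zero and accumulates with the given drift rate
    plus Gaussian noise until it crosses +threshold (e.g. "comfortable")
    or -threshold (e.g. "uncomfortable"), or the trial times out.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t
```

Fitting the drift rate and threshold per road design, as the study does, summarizes how quickly and how confidently a given environment pushes riders toward a safety judgment.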

Keywords: active transport modes, cognitive and decision-making modeling, road environment designs, virtual reality experiment

Procedia PDF Downloads 23
36057 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation

Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson

Abstract:

The research hypothesis is that early post-transplant allograft fibrosis is linked to donor factors and that acute rejection and/or delayed graft function in the recipient are independent risk factors for the development of fibrosis. The hypothesis explores whether acute rejection or delayed graft function has an effect on renal transplant fibrosis within the first year after live donor kidney transplantation performed between 1998 and 2009. Methods: The study was designed around five time points for renal transplant biopsies [0 (pre-transplant), 1 month, 3 months, 6 months, and 12 months] in 300 live donor renal transplant patients over a 12-year period between March 1997 and August 2009. Paraffin-embedded slides were collected from Leicester General Hospital and Leicester Royal Infirmary and routinely sectioned at a thickness of 4 micrometres for standardization. Conclusions: Fibrosis at 1 month after transplant was significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001). Dialysis after transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67). Fibrosis at 3, 6, or 12 months was not associated with fibrosis at baseline (p=0.70, 0.65, and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14). Rejection and severity of rejection were not associated with fibrosis at 1 month. The amount of fibrosis at 1 month remained significantly associated with graft survival (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors.
The amount of fibrosis at 1 month was not significantly associated with rejection (p=0.64) after adjusting for baseline fibrosis (p=0.01), nor with rejection severity (p=0.29) after adjusting for baseline fibrosis (p=0.04). Fibrosis at baseline and HTN in the recipient were found to be predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively). Donor age, the donor's relation to the patient, pre-operative creatinine, artery, kidney weight, and warm time were not significantly associated with fibrosis at 1 month. In the full model, baseline fibrosis, HTN in the recipient, and cold time were predictive factors of fibrosis at 1 month (p=0.01, <0.001, and 0.03, respectively). The above analysis was repeated for 3, 6, and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was found to be a predictive factor of fibrosis at 6 months.

Keywords: fibrosis, transplant, renal, rejection

Procedia PDF Downloads 222
36056 Analysis of Decentralized on Demand Cross Layer in Cognitive Radio Ad Hoc Network

Authors: A. Sri Janani, K. Immanuel Arokia James

Abstract:

In cognitive radio ad hoc networks, different unlicensed users may acquire different sets of available channels. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. Cognitive radio automatically detects available channels in the wireless spectrum, a form of dynamic spectrum management. Cross-layer optimization is proposed; using it, far-away secondary users can also take part in channel access, which can increase throughput and reduce collisions and time delay.

Keywords: cognitive radio, cross layer optimization, CR mesh network, heterogeneous spectrum, mesh topology, random routing optimization technique

Procedia PDF Downloads 347
36055 Determination of Concentrated State Using Multiple EEG Channels

Authors: Tae Jin Choi, Jong Ok Kim, Sang Min Jin, Gilwon Yoon

Abstract:

Analysis of the EEG brainwave provides information on mental or emotional states. One particular state with various applications in human-machine interface (HMI) is concentration. 8-channel EEG signals were measured and analyzed, and the concentration index was compared between resting and concentrating periods. Among the eight channels, locations on the frontal lobe (Fp1 and Fp2) showed a clear increase in the concentration index during concentration, regardless of subject. The remaining six channels produced conflicting observations depending on the subject. At this time, it is not clear whether individual differences or the manner of concentrating produced these results for the remaining six channels. Nevertheless, Fp1 and Fp2 are expected to be promising locations for extracting control signals for HMI applications.

Keywords: concentration, EEG, human machine interface, biophysical

Procedia PDF Downloads 469
36054 Implementation of a Lattice Boltzmann Method for Pulsatile Flow with Moment Based Boundary Condition

Authors: Zainab A. Bu Sinnah, David I. Graham

Abstract:

The Lattice Boltzmann Method has been developed and used to simulate both steady and unsteady fluid flow problems such as turbulent flows, multiphase flow and flows in the vascular system. As an example, the study of blood flow and its properties can give a greater understanding of atherosclerosis and the flow parameters which influence this phenomenon. The blood flow in the vascular system is driven by a pulsating pressure gradient which is produced by the heart. As a very simple model of this, we simulate plane channel flow under periodic forcing. This pulsatile flow is essentially the standard Poiseuille flow except that the flow is driven by the periodic forcing term. Moment boundary conditions, where various moments of the particle distribution function are specified, are applied at solid walls. We used a second-order single relaxation time model and investigated grid convergence using two distinct approaches. In the first approach, we fixed both Reynolds and Womersley numbers and varied relaxation time with grid size. In the second approach, we fixed the Womersley number and relaxation time. The expected second-order convergence was obtained for the second approach. For the first approach, however, the numerical method converged, but not necessarily to the appropriate analytical result. An explanation is given for these observations.

Keywords: Lattice Boltzmann method, single relaxation time, pulsatile flow, moment based boundary condition

Procedia PDF Downloads 219
36053 Effects of Process Parameter Variation on the Surface Roughness of Rapid Prototyped Samples Using Design of Experiments

Authors: R. Noorani, K. Peerless, J. Mandrell, A. Lopez, R. Dalberto, M. Alzebaq

Abstract:

Rapid prototyping (RP) is an additive manufacturing technology used in industry that works by systematically depositing layers of working material to construct larger, computer-modeled parts. A key challenge associated with this technology is that RP parts often feature undesirable levels of surface roughness for certain applications. To combat this phenomenon, an experimental technique called Design of Experiments (DOE) can be employed during the growth procedure to statistically analyze which RP growth parameters are most influential on part surface roughness. Utilizing DOE to identify such factors is important because it is a technique that can be used to optimize a manufacturing process, which saves time and money and increases product quality. In this study, a four-factor/two-level DOE experiment was performed to investigate the effect of temperature, layer thickness, infill percentage, and infill speed on the surface roughness of RP prototypes. Samples were grown using the sixteen different possible growth combinations associated with a four-factor/two-level study, and the surface roughness data were then gathered for each set of factors. After applying DOE statistical analysis to these data, it was determined that layer thickness played the most significant role in prototype surface roughness.
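The sixteen growth combinations follow directly from the 2⁴ full factorial structure. A minimal sketch of generating the coded design matrix; the factor names are taken from the study, while the -1/+1 coding is the usual DOE convention rather than anything specified in the abstract:

```python
from itertools import product

# A 2-level full factorial for the four growth parameters in the study:
# each factor is coded -1 (low) / +1 (high), giving 2^4 = 16 runs,
# matching the sixteen growth combinations described above.
factors = ["temperature", "layer_thickness", "infill_percent", "infill_speed"]
design = list(product([-1, 1], repeat=len(factors)))

for run, levels in enumerate(design, start=1):
    # One prototype is grown per run; roughness is measured afterwards.
    settings = dict(zip(factors, levels))
```

The DOE analysis then estimates each factor's main effect as the mean roughness at its +1 runs minus the mean at its -1 runs, which is how layer thickness emerges as the dominant factor.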

Keywords: rapid prototyping, surface roughness, design of experiments, statistical analysis, factors and levels

Procedia PDF Downloads 253
36052 Examining Effects of Electronic Market Functions on Decrease in Product Unit Cost and Response Time to Customer

Authors: Maziyar Nouraee

Abstract:

Electronic markets have contributed remarkably to business transactions in recent decades. Many organizations consider traditional ways of trading uneconomical and therefore trade only through electronic markets. There are different categorizations of electronic market functions. In one classification, the functions of electronic markets are grouped into three classes: information, transactions, and value added. In the present paper, the effects of these three classes on two major elements of supply chain management are measured: decrease in product unit cost and reduction in response time to the customer. The results of the current research show that, among the nine minor elements related to the three classes of electronic market functions, six factors influence the reduction of product unit cost and three factors influence the reduction of response time to the customer.

Keywords: electronic commerce, electronic market, B2B trade, supply chain management

Procedia PDF Downloads 379
36051 Techno-Economic Optimization and Evaluation of an Integrated Industrial Scale NMC811 Cathode Active Material Manufacturing Process

Authors: Usama Mohamed, Sam Booth, Aliysn J. Nedoma

Abstract:

As part of the transition to electric vehicles, there has been a recent increase in demand for battery manufacturing. Cathodes typically account for approximately 50% of the total lithium-ion battery cell cost and are a pivotal factor in determining the viability of new industrial infrastructure. Cathodes which offer lower costs whilst maintaining or increasing performance, such as nickel-rich layered cathodes, have a significant competitive advantage when scaling up the manufacturing process. This project evaluates the techno-economic value proposition of an integrated industrial scale cathode active material (CAM) production process, closing the mass and energy balances and optimizing the operating conditions using a sensitivity analysis. This is done by developing a process model of a co-precipitation synthesis route in Aspen Plus software, validated against experimental data. The mechanism chemistry and equilibrium conditions were established based on previous literature and HSC-Chemistry software. This is then followed by integrating the energy streams, adding waste recovery and treatment processes, and testing the effect of key parameters (temperature, pH, reaction time, etc.) on CAM production yield and emissions. Finally, an economic analysis estimates the fixed and variable costs (including capital expenditure, labor costs, raw materials, etc.) to calculate the cost of CAM ($/kg and $/kWh), total plant cost ($) and net present value (NPV). This work sets the foundational blueprint for future research into sustainable industrial scale processes for CAM manufacturing.
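The economic endpoints mentioned above reduce to standard discounted-cash-flow arithmetic. The sketch below is illustrative; the function names and inputs are assumptions, not the study's actual cost model:

```python
def npv(cash_flows, rate):
    """Net present value; cash_flows[0] is the year-0 outlay (negative for capex)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def cam_unit_cost(capex, annual_opex, annual_output_kg, years, rate):
    """Levelized cost of CAM ($/kg): discounted total cost over discounted output."""
    disc_cost = capex + sum(annual_opex / (1.0 + rate) ** t
                            for t in range(1, years + 1))
    disc_output = sum(annual_output_kg / (1.0 + rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_output
```

The same discounting machinery serves both the NPV of the plant and the per-kilogram cost of the cathode active material.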

Keywords: cathodes, industrial production, nickel-rich layered cathodes, process modelling, techno-economic analysis

Procedia PDF Downloads 88
36050 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose significant problems in meeting load demand safely and cost-effectively over time. Significant benefits in terms of “grid system applications”, “end-use applications” and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small- and medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where required, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for preliminary system sizing and off-design modeling has been developed. Since during the charging phase the absorbed electric power has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analyzed. Results are presented and discussed in detail.
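The power absorbed during charging can be sketched with a textbook intercooled-compression estimate. The equal-stage-pressure-ratio split, perfect intercooling back to inlet temperature, and the efficiency value are simplifying assumptions, not the paper's model:

```python
def multistage_power(m_dot, p_in, p_out, n_stages, t_in=293.15,
                     gamma=1.4, r_gas=287.0, eta_s=0.8):
    """Shaft power (W) of an intercooled multistage air compressor (sizing sketch).

    Assumes ideal gas, equal pressure ratio per stage, and intercooling back
    to t_in between stages; eta_s is an assumed isentropic efficiency.
    """
    pr_stage = (p_out / p_in) ** (1.0 / n_stages)
    k = (gamma - 1.0) / gamma
    cp = gamma * r_gas / (gamma - 1.0)          # specific heat of air, J/(kg K)
    dT = t_in * (pr_stage ** k - 1.0) / eta_s   # actual temperature rise per stage
    return m_dot * cp * dT * n_stages
```

As the reservoir fills and the delivery pressure rises, the power at a given mass flow rate grows, which is why the choice of capacity control device matters during the charge.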

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 212
36049 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

In actuarial analyses of lifetime, only models accounting for observable risk factors have traditionally been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime distribution. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
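A shared gamma frailty model of the kind examined here can be simulated in a few lines. The parameter names and the constant (exponential) baseline hazard are assumptions for illustration:

```python
import random

def simulate_frailty_times(n_groups, group_size, theta, base_rate, seed=0):
    """Simulate clustered survival times under a shared gamma frailty model.

    Each group shares a frailty Z ~ Gamma(shape=1/theta, scale=theta), so
    E[Z] = 1 and Var[Z] = theta; within a group, the hazard is Z * base_rate
    (an assumed constant baseline), giving exponential times conditional on Z.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_groups):
        z = rng.gammavariate(1.0 / theta, theta)   # shared, unobserved frailty
        times.append([rng.expovariate(z * base_rate) for _ in range(group_size)])
    return times
```

Fitting a classical Cox model to such data ignores Z and misses the within-group dependence, which is the heterogeneity issue the paper investigates.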

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 348
36048 The Effects of Interest Rates on Islamic Banks in a Dual Banking System: Empirical Evidence from Saudi Arabia

Authors: Mouldi Djelassi, Jamel Boukhatem

Abstract:

Background: A relation has been established between Islamic banks' activities and interest rates. The aim of this study was to explore the impact of interest rates on the deposits and loans held by Islamic and conventional banks in Saudi Arabia. Methods: Time series data were analyzed over the period 2008Q1-2020Q2 for eight conventional banks and four Islamic banks. The impacts of interest rate shocks on deposits and loans were identified through panel vector autoregressive models. Results: Impulse response function analysis showed that increasing interest rates reduce loans and conventional deposits. For Islamic banks, deposits are more affected by interest rates than lending. Variance decomposition analysis revealed that deposits contribute to 61% of the Islamic financing variation and only 25% of the conventional loans. Conclusion: Interest rates impacted Islamic banks especially through deposits, which is inconsistent with the theoretical framework. Islamic deposits played an important role in Islamic financing variation and may prove to be a channel for the transmission of monetary policy in a dual banking system. Monetary policy in Saudi Arabia works in part through “credits” (conventional bank credits) as well as through “money” (conventional and Islamic bank deposits).

Keywords: Islamic banking, interest rates, monetary policy transmission, panel VAR

Procedia PDF Downloads 95
36047 Retrofitting Measures for Existing Housing Stock in Kazakhstan

Authors: S. Yessengabulov, A. Uyzbayeva

Abstract:

The residential building stock of Kazakhstan was built in the Soviet era, about 35-60 years ago, without considering energy efficiency measures. Currently, most of these buildings are in a run-down condition and fail to meet minimum hygienic, sanitary, and comfort requirements. The paper aims to examine reports of recent building energy survey activities in the country and to provide a possible solution for retrofitting the existing housing stock built before 1989, applicable to the building envelope in a cold climate. The methodology also includes two-dimensional modeling of possible practical solutions and further recommendations.
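The two-dimensional conduction modeling mentioned in the keywords amounts to solving a steady heat equation over a wall cross-section. A minimal Jacobi-iteration sketch follows; the uniform grid, homogeneous material, and iteration count are illustrative assumptions, not the paper's model:

```python
def solve_laplace(t_grid, iters=2000):
    """Steady 2-D conduction (Laplace equation) on a uniform grid, Jacobi iteration.

    t_grid: rectangular list of lists of temperatures; boundary rows/columns
    are held fixed (Dirichlet conditions) and each interior node relaxes to
    the average of its four neighbours, as for a homogeneous wall section.
    """
    ny, nx = len(t_grid), len(t_grid[0])
    for _ in range(iters):
        new = [row[:] for row in t_grid]
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new[i][j] = 0.25 * (t_grid[i - 1][j] + t_grid[i + 1][j]
                                    + t_grid[i][j - 1] + t_grid[i][j + 1])
        t_grid = new
    return t_grid
```

In retrofit studies, such a solver is used to locate thermal bridges: interior nodes near a poorly insulated corner sit markedly colder than the one-dimensional estimate would suggest.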

Keywords: energy audit, energy efficient buildings in Kazakhstan, retrofit, two-dimensional conduction heat transfer analysis

Procedia PDF Downloads 231
36046 Exponential Spline Solution for Singularly Perturbed Boundary Value Problems with an Uncertain-But-Bounded Parameter

Authors: Waheed Zahra, Mohamed El-Beltagy, Ashraf El Mhlawy, Reda Elkhadrawy

Abstract:

In this paper, we consider singularly perturbed reaction-diffusion boundary value problems, which contain a small uncertain perturbation parameter. To solve these problems, we propose a numerical method based on an exponential spline and Shishkin mesh discretization. While the interval analysis principle is used to deal with the uncertain parameter, sensitivity analysis has been conducted using different methods. Numerical results are provided to show the applicability and efficiency of our method, which achieves ε-uniform convergence of almost second order.
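The piecewise-uniform Shishkin mesh underlying the discretization can be sketched as follows. The transition-point constant and the function name are assumptions; for a reaction-diffusion problem the boundary layers sit at both ends of the interval:

```python
import math

def shishkin_mesh(n, epsilon, sigma=2.0):
    """Shishkin mesh on [0, 1] for a singularly perturbed reaction-diffusion problem.

    n must be divisible by 4. The transition point is
    tau = min(1/4, sigma * sqrt(epsilon) * ln(n)); a quarter of the intervals
    lie in each fine boundary-layer region [0, tau] and [1 - tau, 1], and
    half in the coarse interior [tau, 1 - tau].
    """
    tau = min(0.25, sigma * math.sqrt(epsilon) * math.log(n))
    pts = [tau * i / (n // 4) for i in range(n // 4 + 1)]
    pts += [tau + (1.0 - 2.0 * tau) * i / (n // 2) for i in range(1, n // 2 + 1)]
    pts += [1.0 - tau + tau * i / (n // 4) for i in range(1, n // 4 + 1)]
    return pts
```

Clustering points inside the layers is what allows the ε-uniform convergence reported above, independently of how small the perturbation parameter is.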

Keywords: singular perturbation problem, shishkin mesh, two small parameters, exponential spline, interval analysis, sensitivity analysis

Procedia PDF Downloads 259
36045 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of its subjects are deceased. Public health threats, drug epidemics, and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist in epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent’s time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances, and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death, to the minute, for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, taking liver temperatures was an invasive action used by death investigators to determine the decedent’s core temperature. The core temperature was entered into an equation to approximate the time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was called into question, and this once commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease, and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent’s ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant and important to investigations, as it can bring accuracy to a historically inaccurate area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for unwitnessed death determinations, instead of the art that the determination currently is. The research is in progress, with expected termination in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature; decedent height, weight, sex, and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed across the multiple variables studied and will be available for presentation in January 2019.
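The temperature-based component of such estimates is often introduced through Newton's law of cooling. The sketch below is a deliberate simplification (single-exponential cooling, an assumed rate constant, no plateau phase or correction factors), not the research protocol described above:

```python
import math

def hours_since_death(t_body, t_ambient, t_normal=37.0, k=0.1947):
    """Rough post-mortem interval (hours) from Newton's law of cooling.

    T(t) = T_ambient + (T_normal - T_ambient) * exp(-k * t), solved for t.
    k is an assumed illustrative cooling constant; real casework applies
    corrections for body mass, clothing, and environment.
    """
    if t_body <= t_ambient:
        raise ValueError("body temperature has reached ambient; interval indeterminate")
    return math.log((t_normal - t_ambient) / (t_body - t_ambient)) / k
```

Under these assumptions, a tympanic reading of 30 °C in a 20 °C room gives roughly 2.7 hours, illustrating why unwitnessed deaths are typically bracketed in multi-hour windows rather than pinned to a minute.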

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 104
36044 Visual Thinking Routines: A Mixed Methods Approach Applied to Student Teachers at the American University in Dubai

Authors: Alain Gholam

Abstract:

Visual thinking routines are principles based on several theories, approaches, and strategies. Such routines promote thinking skills, call for collaboration and the sharing of ideas, and, above all, make thinking and learning visible. Visual thinking routines were implemented in the teaching methodology graduate course at the American University in Dubai. The study used mixed methods and was guided by the following two research questions: 1) To what extent do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? 2) How do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? Eight student teachers enrolled in the teaching methodology course at the American University in Dubai (Spring 2017) participated in the study. First, they completed a survey that measured the degree to which they believed visual thinking routines inspired learning in the classroom and made time for students’ questions, contributions, and thinking. In order to build on the results from the quantitative phase, the student teachers were next involved in a qualitative data collection phase, where they answered the question: How do visual thinking routines inspire learning in the classroom and make time for students’ questions, contributions, and thinking? Results revealed that the implementation of visual thinking routines in the classroom strongly inspires learning and makes time for students’ questions, contributions, and thinking. In addition, the student teachers explained how visual thinking routines allow for organization, variety, thinking, and documentation. As with any new resource, visual thinking routines are not free of challenges. To make the most of this useful and valued resource, educators need to comprehend, model, and spread awareness of effective ways of using such routines in the classroom.
It is crucial that such routines become part of the curriculum to allow for and document students’ questions, contributions, and thinking.

Keywords: classroom display, student engagement, thinking classroom, visual thinking routines

Procedia PDF Downloads 216
36043 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights to the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
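The logistic discriminator and Monte Carlo parameter uncertainty described above can be sketched in a few lines. The parameter names and the normal uncertainty model for the tipping point are assumptions for illustration:

```python
import math
import random

def logistic_value(q, midpoint, steepness):
    """Near-binary environmental value function, e.g. for ecological resilience:
    value ~ 0 below the midpoint (resilience lost), ~ 1 above it (resilience held)."""
    return 1.0 / (1.0 + math.exp(-steepness * (q - midpoint)))

def mc_expected_value(q, midpoint_mu, midpoint_sd, steepness, n=10000, seed=1):
    """Monte Carlo expectation of the value function when the midpoint
    (tipping point) is uncertain, here modelled as normally distributed."""
    rng = random.Random(seed)
    total = sum(logistic_value(q, rng.gauss(midpoint_mu, midpoint_sd), steepness)
                for _ in range(n))
    return total / n
```

Sampling over the midpoint smooths the near-binary function into an expected value between 0 and 1, which is one way the framework handles uncertainty in the value function parameterization.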

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 185
36042 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry

Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay

Abstract:

The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate in order to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are widely accepted to have a critical role in these systems for the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, which combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products, and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. There are three main contributions of our approach. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas. There are also studies mentioning the importance of research center locations and universities in the regional development of the optics and photonics industry. However, these studies are mostly limited to the number of patents received within a short period of time or to limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US.
For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g. Electrical Engineering, Materials Science) are identified and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. Then the profiles and activities of these companies are gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, ES-202 database and the data sets from the regional photonics clusters. The number of start-ups, their employee numbers and sales are some examples of the extracted data for the industry. Our third contribution is the utilization of collected data to investigate the impact of research institutions on the regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.

Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers

Procedia PDF Downloads 394
36041 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand

Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk

Abstract:

Background: Decision-analytic models for Alzheimer’s disease (AD) have been advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and incorporation of key parameters such as treatment persistence into the model become feasible. This study aimed to apply DES to perform a cost-effectiveness analysis of treatment for AD in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were modelled using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life years (QALYs) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients at all disease-severity levels was found to be cost-effective. Compared to untreated patients, although patients receiving donepezil incurred an additional discounted cost of 2,161 THB, they experienced a discounted gain of 0.021 QALYs, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY).
Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the dominance of donepezil appeared to wane when delayed treatment was given to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introduction of a treatment stopping rule, applied to a mild AD cohort when the Mini-Mental State Exam (MMSE) score falls below 10, did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective when considered under a healthcare perspective. Conclusions: The DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients’ quality of life and is considered cost-effective when used to treat AD patients at all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed from the early course of AD. With healthcare budget constraints in Thailand, implementation of donepezil coverage may be most feasible when starting with mild AD patients, along with the stopping rule introduced.
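The ICER arithmetic underlying these comparisons is simple to state. A minimal sketch follows, using the Thai willingness-to-pay threshold cited in the abstract; the function names are assumptions:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def is_cost_effective(cost_new, qaly_new, cost_old, qaly_old,
                      wtp=160000.0):
    """Compare the ICER against a willingness-to-pay threshold (THB/QALY)."""
    return icer(cost_new, qaly_new, cost_old, qaly_old) <= wtp
```

In a DES, the four inputs are the discounted totals accumulated over each simulated patient's event history under the treatment and comparator arms.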

Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment

Procedia PDF Downloads 112
36040 The Relationships among Learning Emotion, Major Satisfaction, Learning Flow, and Academic Achievement in Medical School Students

Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek

Abstract:

This study explored whether academic emotion, major satisfaction, and learning flow are associated with academic achievement in medical school. Emotion and affective factors are known to be important in students' learning and performance. Emotion has taken the stage in much of the contemporary educational psychology literature, no longer relegated to secondary status behind traditionally studied cognitive constructs. Medical school students (n=164) completed an online survey on academic emotion, major satisfaction, and learning flow. Academic performance was operationalized as students' average grade on two semester exams. For data analysis, correlation analysis, multiple regression analysis, hierarchical multiple regression analyses, and ANOVA were conducted. The results largely confirmed the hypothesized relations among academic emotion, major satisfaction, learning flow, and academic achievement. Positive academic emotion was correlated with academic achievement (β=.191). Positive emotion had 8.5% explanatory power for academic achievement. In particular, sense of accomplishment had a significant impact on learning performance (β=.265). On the other hand, negative emotion, major satisfaction, and learning flow did not affect academic performance. Also, there were differences by grade in sense of great (F=5.446, p=.001) and interest (F=2.78, p=.043) among the positive emotions, and in boredom (F=3.55, p=.016), anger (F=4.346, p=.006), and petulance (F=3.779, p=.012) among the negative emotions. This study suggested that medical students' positive emotion was an important contributor to their academic achievement. At the same time, it is important to consider that some negative emotions can act to increase one’s motivation. Of particular importance is the notion that instructors can and should create learning environments that foster positive emotion for students. In doing so, instructors improve their chances of positively impacting students’ achievement emotions, as well as their subsequent motivation, learning, and performance. This result has implications for medical educators striving to understand the personal emotional factors that influence learning and performance in medical training.

Keywords: academic achievement, learning emotion, learning flow, major satisfaction

Procedia PDF Downloads 251
36039 Estimation of Greenhouse Gas (GHG) Reductions from Solar Cell Technology Using Bottom-up Approach and Scenario Analysis in South Korea

Authors: Jaehyung Jung, Kiman Kim, Heesang Eum

Abstract:

Solar cells are one of the main technologies for reducing greenhouse gas (GHG) emissions. Accurate estimation of the greenhouse gas reduction achieved by solar cell technology is therefore crucial for considering strategic applications of solar cells. The bottom-up approach, using operating data such as operation time and efficiency, is one methodology for improving the accuracy of the estimation. In this study, alternative GHG reductions from solar cell technology were estimated by a bottom-up approach for indirect emission sources (scope 2) in Korea, 2015. In addition, a scenario-based analysis was conducted to assess the effect of technological change with respect to efficiency improvement and rate of operation. In order to estimate GHG reductions from solar cell activities at the operating-condition level, methodologies were derived from the 2006 IPCC guidelines for national greenhouse gas inventories and the guidelines for local government greenhouse gas inventories published in Korea, 2016. Indirect emission factors for electricity were obtained from the Korea Power Exchange (KPX) in 2011. As a result, the annual alternative GHG reductions were estimated at 21,504 tonCO2eq, with an annual average value of 1,536 tonCO2eq per solar cell facility. These estimates amounted to 91% of the design capacity. Estimation of individual greenhouse gases (GHGs) showed that the largest contributor was carbon dioxide (CO2), which accounted for up to 99% of the total. The annual average GHG reduction from solar cells per year and unit installed capacity (MW) was estimated at 556 tonCO2eq/yr•MW. Scenario analysis showed that efficiency improvements of 5%, 10%, and 15% increased the annual GHG reductions by approximately 30%, 61%, and 91%, respectively, while raising the rate of operation to 100% increased them by 4%.
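The core bottom-up arithmetic reduces to generation displaced times a grid emission factor. The sketch below is illustrative; the capacity-factor and emission-factor inputs are assumptions, not the KPX figures used in the study:

```python
HOURS_PER_YEAR = 8760

def ghg_reduction_tco2(capacity_mw, capacity_factor, ef_tco2_per_mwh):
    """Annual GHG reduction (tCO2eq) from displacing grid electricity (scope 2):
    generation = capacity x capacity factor x hours; reduction = generation x EF."""
    annual_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    return annual_mwh * ef_tco2_per_mwh

def scenario_reduction(capacity_mw, capacity_factor, ef_tco2_per_mwh,
                       efficiency_gain=0.0):
    """Scenario variant: an efficiency improvement scales generation proportionally."""
    return ghg_reduction_tco2(capacity_mw * (1.0 + efficiency_gain),
                              capacity_factor, ef_tco2_per_mwh)
```

Scenario runs then vary the efficiency gain and the rate of operation to trace how the annual reduction responds, mirroring the sensitivity results reported above.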

Keywords: bottom-up approach, greenhouse gas (GHG), reduction, scenario, solar cell

Procedia PDF Downloads 209