Search results for: input constraints
768 The Supply Chain Operation Reference Model Adaptation in the Developing Countries: An Empirical Study on the Egyptian Automotive Sector
Authors: Alaa Osman, Sara Elgazzar, Breksal Elmiligy
Abstract:
The Supply Chain Operation Reference (SCOR) model is considered one of the most widely implemented supply chain performance measurement systems (SCPMSs). Several studies have addressed the adaptation of the SCOR model in the context of developed countries, while previous work on the application of SCPMSs in general, and the SCOR model in particular, in developing nations remains limited. This paper presents a research agenda on the adaptation of the SCOR model in developing countries. It aims to investigate the challenges of adapting the SCOR model to manage and measure supply chain performance in developing countries. The research exemplifies the system in the Egyptian automotive sector to gain a comprehensive understanding of how the application of the SCOR model can affect the performance of automotive companies in Egypt, together with the necessary understanding of the challenges and obstacles facing the adaptation of the model in the Egyptian supply chain context. An empirical study was conducted on three companies in the Egyptian automotive sector, representing three different classes: BMW, Hyundai and Brilliance. First, in-depth interviews were carried out to gain insight into the implementation and relevance of the concepts of supply chain management and performance measurement in the Egyptian automotive industry. Then, a formal survey was designed around the five main SCOR model processes (plan, source, make, deliver and return) and best practices to investigate the challenges and obstacles facing the adaptation of the SCOR model in the Egyptian automotive supply chain. Finally, based on the survey results, the appropriate best practices for each process were identified in order to overcome the SCOR model adaptation challenges. The results showed that the implementation of the SCOR model faced various challenges and that the required enablers were unavailable.
The survey highlighted low end-to-end supply chain integration, lack of commitment to innovative ideas and technologies, financial constraints, and lack of practical training and support as the main challenges facing the adaptation of the SCOR model in the Egyptian automotive supply chain. The research provides an original contribution to knowledge by proposing a procedure to identify the challenges encountered during SCOR model adoption, which can pave the way for further research in the area of SCPMS adaptation, particularly in developing countries. The research can help managers and organizations identify obstacles and difficulties in SCOR model adaptation, which can in turn facilitate measuring improvements or changes in organizational performance.
Keywords: automotive sector, developing countries, SCOR model, supply chain performance
Procedia PDF Downloads 374
767 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency
Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee
Abstract:
Energy efficiency can be realized by minimizing power loss while a sufficient amount of energy is delivered in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electric distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal location and sizing of the capacitors, whereby minimizing energy consumption and power losses improves energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed, to solve the OCPS problem. In this case study, busbars or locations are pre-selected using techniques such as the power-loss index (PLI). The particle swarm optimization is designed to provide a new population with improved sizing and location of capacitors. The total cost of power losses, energy consumption, and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. The voltage magnitude limit, total harmonic distortion (THD) limit, power factor limit, and capacitor size limit are the constraints of the proposed optimization technique. In this research, the proposed methodologies, implemented in the MATLAB® software, transfer information to, execute the three-phase unbalanced load-flow solution for, and retrieve the results from the three-phase unbalanced electrical distribution systems modeled in the SIMULINK® software.
The effectiveness of the proposed methods in improving energy efficiency has been verified through several case studies, with results obtained from the IEEE 13-bus unbalanced electrical distribution test system and from the practical electrical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
Keywords: particle swarm optimization, pre-determine of capacitor locations, optimal capacitors placement and sizing, unbalanced electrical distribution system
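The PSO step described above can be sketched in a few lines. Since the actual fitness requires a three-phase unbalanced load-flow solution, the quadratic loss model, coefficients, size limits, and swarm parameters below are purely illustrative stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a load-flow evaluation: a quadratic loss
# term (minimised near an illustrative optimum) plus a linear capacitor
# installation cost. All coefficients and limits are assumptions.
Q_MAX = 5.0  # capacitor size limit (Mvar) acting as a constraint

def total_cost(q):
    p_loss = np.sum(0.5 * (q - np.array([2.0, 1.0, 3.0])) ** 2) + 4.0
    return p_loss + 0.05 * np.sum(q)  # loss cost + installation cost

def pso(dim=3, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.random((n, dim)) * Q_MAX          # positions: candidate sizes
    v = np.zeros((n, dim))                    # velocities
    pbest = x.copy()
    pcost = np.array([total_cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()          # global best so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, Q_MAX)        # enforce the size limit
        cost = np.array([total_cost(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

best_q, best_cost = pso()
```

In a real OCPS run, `total_cost` would call the SIMULINK® load-flow model for each particle, and the voltage, THD, and power-factor limits would be handled as additional penalty terms.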
Procedia PDF Downloads 434
766 Graphic User Interface Design Principles for Designing Augmented Reality Applications
Authors: Afshan Ejaz, Syed Asim Ali
Abstract:
Reality is a combination of perception, reconstruction, and interaction. Augmented Reality is a layer over consistent everyday existence that includes content-based interfaces, voice-based interfaces, and guide-based or gesture-based interfaces, so designing augmented reality application interfaces is a difficult task for the maker. The goal is a user interface that is not only easy to use and easy to learn but also interactive and self-explanatory, with high perceived affordance, perceived usefulness, consistency, and discoverability, so that the user can easily recognize and understand the design. For this purpose, many interface design principles, such as learnability, affordance, simplicity, memorability, feedback, visibility, and flexibility, have been introduced, but there are no principles that explain which interface design principles are most appropriate for designing Augmented Reality application interfaces. Therefore, the basic goal of introducing design principles for Augmented Reality application interfaces is to match user efforts to the computer display ('plot user input onto computer output') using appropriate interface action symbols ('metaphors'), and to make the application easy to use, easy to understand, and easy to discover. In this study, by observing Augmented Reality systems and interfaces, a few well-known design principles related to GUI ('user-centered design') are identified, and through them several issues are shown that can be addressed through the design principles. Drawing on multiple studies, our study suggests interface design principles that make designing Augmented Reality application interfaces easier and more helpful for the maker, as these principles make the interface more interactive, learnable, and usable.
To test our findings, Pokémon Go, an Augmented Reality game, was selected, and all the suggested principles were implemented and tested on its interface. From the results, our study concludes that the identified principles are the most important principles for developing and testing any Augmented Reality application interface.
Keywords: GUI, augmented reality, metaphors, affordance, perception, satisfaction, cognitive burden
Procedia PDF Downloads 169
765 Ethical Considerations in In-Utero Gene Editing
Authors: Shruti Govindarajan
Abstract:
In-utero gene editing with CRISPR-Cas9 opens up new possibilities for treating genetic disorders during pregnancy, while the fetus is still in the mother's womb. By targeting genetic mutations in the early stages of fetal development, this approach could potentially prevent severe conditions, such as cystic fibrosis, sickle cell anemia, and muscular dystrophy, from causing harm. CRISPR-Cas9, which allows precise DNA edits, could be delivered into fetal cells through vectors such as adeno-associated viruses (AAVs) or nanoparticles, correcting disease-causing mutations and possibly offering lifelong relief from these disorders. For families facing severe genetic diagnoses, in-utero gene editing could provide a transformative option. However, technical challenges remain, including ensuring that gene editing targets only the intended cells and verifying long-term safety. Ethical considerations are also at the forefront of this technology. Editing a fetus's genes raises difficult questions about consent, especially since these genetic changes will affect the child's entire life without their input. There is also concern over possible unintended side effects, or changes passed down to future generations. Moreover, if used beyond therapeutic purposes, this technology could be misused for 'enhancements', such as selecting for certain physical or cognitive traits, raising concerns about inequality and social pressures. In this way, in-utero gene editing brings both exciting potential and complex moral questions. As research progresses, addressing these scientific and ethical concerns will be key to ensuring that this technology is used responsibly, prioritizing safety, fairness, and a focus on alleviating genetic disease. A cautious and inclusive approach, along with clear regulations, will be essential to realizing the benefits of in-utero gene editing while protecting against unintended consequences.
Keywords: in-utero gene editing, CRISPR, bioethics, genetic disorder
Procedia PDF Downloads 8
764 Analyzing Growth Trends of the Built Area in the Precincts of Various Types of Tourist Attractions in India: 2D and 3D Analysis
Authors: Yarra Sulina, Nunna Tagore Sai Priya, Ankhi Banerjee
Abstract:
With the rapid growth in tourist arrivals, there has been huge demand for infrastructure growth at destinations. With tourists increasingly preferring to stay near attractions, there has been considerable change in land use around tourist sites. However, given the regulations and guidelines imposed by the authorities based on the nature of tourism activity and geographical constraints, the growth pattern of the built form differs across tourist sites. Therefore, this study explores the patterns of built-up growth over a decade, from 2009 to 2019, through two-dimensional and three-dimensional analysis. Land-use maps are created through supervised classification of satellite images obtained from LANDSAT 4-5 for 2009 and LANDSAT 8 for 2019. The overall expansion of the built-up area in the region is analyzed in relation to distance from the city's geographical center, and the tourism-related growth regions influenced by the proximity of tourist attractions are identified. The primary tourist sites of various destinations with different geographical characteristics and tourism activities, which have undergone a significant increase in built-up area and are occupied by tourism-related infrastructure, are selected for further study. Proximity analysis of the tourism-related growth sites is carried out to delineate the influence zone of the tourist site in a destination. Further, a temporal analysis of the volumetric growth of the built form is carried out to understand the morphology of the tourist precincts over time. The Digital Surface Model (DSM) and Digital Terrain Model (DTM) are used to extract building footprints along with building heights. Factors such as building height and building density are evaluated to understand the patterns of three-dimensional growth of the built area in the region.
The study also explores the underlying reasons for such changes in the built form around various tourist sites and predicts the impact of such growth patterns on the region. The building height and building density around a tourist site strongly affect the appeal of the destination. Surroundings that are incompatible with the theme of the tourist site detract from the attractiveness of the destination and lead to negative feedback from tourists, which is not a sustainable form of development. Therefore, proper spatial measures, in terms of both area and volume of the built environment, are necessary for a healthy and sustainable environment around the tourist sites in the destination.
Keywords: sustainable tourism, growth patterns, land-use changes, 3-dimensional analysis of built-up area
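The height-extraction step can be illustrated with a normalised DSM (DSM minus DTM). The tiny rasters and the 2 m height threshold below are hypothetical stand-ins for the study's actual elevation data:

```python
import numpy as np

# Hypothetical 4x4 raster tiles in metres: DSM (top of surface,
# including roofs) and DTM (bare terrain). Real tiles would come from
# photogrammetric or LiDAR products covering the tourist precinct.
dsm = np.array([[12., 12., 5., 5.],
                [12., 12., 5., 5.],
                [ 5.,  5., 5., 5.],
                [ 5.,  9., 9., 5.]])
dtm = np.full((4, 4), 5.0)

ndsm = dsm - dtm                     # normalised DSM: height above ground
buildings = ndsm > 2.0               # simple height threshold for built structures
building_density = buildings.mean()  # share of built-up cells in the tile
mean_height = ndsm[buildings].mean() # average building height (m)
```

On this toy tile the two blocks of 7 m and 4 m structures give a building density of 0.375 and a mean height of 6 m; repeating the same computation on 2009 and 2019 rasters yields the temporal volumetric comparison described above.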
Procedia PDF Downloads 78
763 Some Codes for Variants in Graphs
Authors: Sofia Ait Bouazza
Abstract:
We consider the problem of finding a minimum identifying code in a graph. This problem was introduced in 1998 and has since been fundamentally connected to a wide range of applications (fault diagnosis, location detection, ...). Suppose we have a building in which we need to place fire alarms, and each alarm is designed so that it can detect any fire that starts either in the room in which it is located or in any room that shares a doorway with that room. We want to use the sounding alarms not only to detect any fire that may occur but also to tell exactly where the fire is located in the building, and, for reasons of cost, we want to use as few alarms as necessary. The first problem involves finding a minimum dominating set of a graph. If the alarms are three-state alarms capable of distinguishing between a fire in the same room as the alarm and a fire in an adjacent room, we are trying to find a minimum locating-dominating set. If the alarms are two-state alarms that can only sound if there is a fire somewhere nearby, we are looking for a differentiating dominating set of the graph. These three areas are the subject of much active research; we primarily focus on the third problem. An identifying code of a graph G is a dominating set C such that every vertex x of G is distinguished from the other vertices by the set of vertices of C that are at distance at most r ≥ 1 from x. When only vertices outside the code are asked to be identified, we get the related concept of a locating-dominating set. The problem of finding an identifying code (resp. a locating-dominating code) of minimum size is NP-hard, even when the input graph belongs to a number of specific graph classes. Therefore, we study this problem in some restricted classes of undirected graphs, such as split graphs and line graphs, and in paths in the directed case.
Then we present some results on identifying codes, giving an exact value of the upper total locating domination number and of a total 2-identifying code in directed and undirected graphs. Moreover, we determine exact values of the locating-dominating code and the edge identifying code of the thin headless spider, and of the locating-dominating code of complete suns.
Keywords: identifying codes, locating dominating set, split graphs, thin headless spider
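For small graphs, the definition above can be checked directly by brute force; the sketch below searches for a minimum identifying code with r = 1, requiring every vertex to receive a non-empty, unique signature N[v] ∩ C. The example graph and the exponential-time search are purely illustrative (the NP-hardness mentioned above is precisely why the paper studies restricted classes instead):

```python
from itertools import combinations

def min_identifying_code(adj):
    """Smallest C such that N[v] ∩ C is non-empty and unique for all v."""
    n = len(adj)
    closed = [frozenset(adj[v]) | {v} for v in range(n)]  # closed neighbourhoods
    for size in range(1, n + 1):                          # smallest codes first
        for cand in combinations(range(n), size):
            c = frozenset(cand)
            sigs = [closed[v] & c for v in range(n)]
            # all(...) checks domination; len(set(...)) checks uniqueness
            if all(sigs) and len(set(sigs)) == n:
                return set(cand)
    return None  # unreachable for twin-free graphs

# Path on four vertices 0-1-2-3; its minimum identifying code has size 3.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
code = min_identifying_code(path4)
```

For the path P4 no two-vertex subset gives four distinct non-empty signatures, so the search returns a code of size 3.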
Procedia PDF Downloads 480
762 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR, in order to reduce non-invested capital, but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equal-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
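The aggregation of sub-module charges referred to above follows the standard-formula pattern SCR = sqrt(sᵀRs), where s is the vector of sub-module SCRs and R the fixed correlation matrix; it is this square-root combination of stressed results that makes the criterion non-differentiable in the portfolio weights. The snippet below sketches the aggregation with an illustrative matrix, not the official Solvency II calibration:

```python
import numpy as np

def aggregate_scr(scr_sub, corr):
    """Standard-formula aggregation: sqrt(s' R s) over sub-modules."""
    s = np.asarray(scr_sub, dtype=float)
    return float(np.sqrt(s @ corr @ s))

# Hypothetical correlation matrix for three sub-modules
# (e.g. interest, equity, property); values are placeholders.
corr = np.array([[1.00, 0.50, 0.25],
                 [0.50, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])

scr_mkt = aggregate_scr([10.0, 20.0, 5.0], corr)
```

Because the correlations are below one, the aggregate charge (about 28.3 here) sits below the plain sum of 35, which is the diversification benefit the fixed-correlation formula grants.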
Procedia PDF Downloads 117
761 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand
Authors: Saeed Poormoaied
Abstract:
Perishability and developing an intelligent control policy for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy, which takes both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model holds for a positive, fixed lead time for the buyer and zero lead time for the vendor. Demand follows a Poisson process, and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime, after which they become unusable and are disposed of from the buyer's system. The age of an item starts when it is unpacked and ready for consumption at the buyer. While items are held by the vendor, there is no aging process, so no perishing occurs at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both the vendor and the buyer into consideration. The goal is to determine the optimal policy parameters under the service level constraint at the retailer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem
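As an illustration of the lost-sales, fixed-lifetime setting, the single-cycle Monte Carlo sketch below (all parameters hypothetical, reorder logic omitted) estimates how a batch splits into sold and perished items; it is a toy stand-in, not the paper's exact analytic model based on the renewal reward theorem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cycle: a batch of Q items starts aging when unpacked at
# the buyer; Poisson demand with rate lam consumes items until the
# fixed lifetime L expires, after which leftovers are disposed of
# (perish) and any further demand within the cycle is lost.
Q, lam, L = 10, 2.0, 4.0

def simulate_cycle(n_runs=100_000):
    demand = rng.poisson(lam * L, size=n_runs)  # demand over one lifetime
    sold = np.minimum(Q, demand)                # cannot sell more than the batch
    perished = Q - sold                         # leftovers at age L perish
    return sold.mean(), perished.mean()

avg_sold, avg_perished = simulate_cycle()
```

The simulated averages can be checked against the closed form E[sold] = Q − Σ_{k<Q} (Q − k) P(N = k) with N ~ Poisson(λL), the kind of expression the renewal reward analysis delivers exactly.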
Procedia PDF Downloads 144
760 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. How to optimize the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology: many of them apply well-known numerical methods, and some have exploited the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions that are comprehensible in the same terms. Our previous proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and convenient for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources demanded by the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) to which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that tackles the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm simply to try to reproduce its results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and readily translate their mathematical model, to be numerically optimized, into a proper fitness function.
We analyzed the 100 well curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field that is compatible with the sum of the optimum gas injected into each well reported by the authors. We have identified several constraints that would be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models that fit both individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
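A classical genetic algorithm of the kind described can be sketched as follows. The well-performance curves q_i(g) = a_i·g/(b_i + g), their parameters, the total-gas budget, and the GA settings are illustrative assumptions, not the well data or the model of the reproduced study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical well-performance curves: oil rate as a concave function
# of injected gas for three wells; a, b, and G_TOTAL are toy values.
a = np.array([100.0, 80.0, 60.0])
b = np.array([5.0, 3.0, 2.0])
G_TOTAL = 10.0  # total lift gas available for the field

def oil_rate(alloc):
    return float(np.sum(a * alloc / (b + alloc)))

def normalize(pop):
    # project each chromosome back onto the constraint sum(g_i) = G_TOTAL
    return G_TOTAL * pop / pop.sum(axis=1, keepdims=True)

def ga(pop_size=60, gens=200, mut=0.1):
    pop = normalize(rng.random((pop_size, 3)) + 1e-6)
    for _ in range(gens):
        fit = np.array([oil_rate(ind) for ind in pop])
        elite = pop[np.argsort(fit)[::-1][: pop_size // 2]]  # top half
        # uniform crossover between random elite parents, then mutation
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, 3)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children += mut * rng.normal(size=children.shape)
        pop = normalize(np.clip(children, 1e-6, None))
        pop[0] = elite[0]  # elitism: never lose the best individual
    fit = np.array([oil_rate(ind) for ind in pop])
    return pop[fit.argmax()], float(fit.max())

best, best_oil = ga()
```

Renormalizing every chromosome onto the total-gas budget keeps the constraint satisfied by construction; the evolved allocation should do at least as well as splitting the gas equally among the wells.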
Procedia PDF Downloads 162
759 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation
Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode
Abstract:
The significance of high-consequence workplace failures in construction continues to resonate, with a combined average of 12 fatal incidents occurring daily across Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, together with the continued occurrence of fatal and serious occupational injury incidents globally, suggest that existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information about the construction operating and project delivery system; for example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, to the authors' best knowledge, a first-of-its-kind control structure model of the construction industry is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the Coronial investigation into the event are compared with the findings stemming from the CAST analysis.
The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting a potential benefit in applying a systems-theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems-theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and its potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.
Keywords: construction project management, construction performance, incident analysis, systems thinking
Procedia PDF Downloads 131
758 Comparative Study on Daily Discharge Estimation of Soolegan River
Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu
Abstract:
Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with such climate conditions, including Chaharmahal and Bakhtiari province, which require close attention and appropriate management. Forecasting hydrological parameters and estimating the hydrological events of catchments provide important information that is widely used in the design, management, and operation of water resources such as river systems and dams. River discharge is one of these parameters. This study presents the application and comparison of several estimation methods, namely the Feed-Forward Back-Propagation Neural Network (FFBPNN), Multiple Linear Regression (MLR), Gene Expression Programming (GEP), and Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. The Soolegan station was considered in this study. This station is located on the Soolegan River, at latitude 31° 38′ N and longitude 51° 14′ E, in the North Karoon basin, 2086 meters above sea level. The data used in this study are the daily discharge and daily precipitation of the Soolegan station. The FFBPNN, MLR, GEP, and BN models were developed using the same input parameters for daily discharge estimation at Soolegan. The estimates of the models were compared with observed discharge values to evaluate the performance of the developed models. The results of all methods are compared and presented in tables and charts.
Keywords: ANN, multi linear regression, Bayesian network, forecasting, discharge, gene expression programming
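As a sketch of the simplest of the four estimators, the snippet below fits a multiple linear regression of daily discharge on same-day and previous-day precipitation. The station records are not reproduced here, so the series is synthetic and the coefficients are illustrative assumptions, not fitted Soolegan values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a year of daily data: discharge responds
# linearly to same-day and previous-day precipitation plus noise.
n = 365
precip = rng.gamma(0.5, 4.0, size=n + 1)          # mm/day, skewed like rainfall
discharge = (1.5 + 0.8 * precip[1:] + 0.3 * precip[:-1]
             + rng.normal(0.0, 0.2, size=n))      # m^3/s (toy units)

# MLR via ordinary least squares: intercept, P(t), P(t-1)
X = np.column_stack([np.ones(n), precip[1:], precip[:-1]])
coef, *_ = np.linalg.lstsq(X, discharge, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - discharge) ** 2)))
```

The same input matrix (with observed precipitation lags) would feed the FFBPNN, GEP, and BN models, which is what makes the model comparison fair.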
Procedia PDF Downloads 561
757 The Emergence of Memory at the Nanoscale
Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann
Abstract:
Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements of this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid-state systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that point to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical predictions; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations.
The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Buttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation, as well as in its technological applications.
Keywords: memories, memdevices, memristors, nonequilibrium states
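A minimal numerical sketch of such a memristive response assumes a generic conductance that relaxes with a finite time tau toward a drive-dependent equilibrium; the finite relaxation time is exactly the memory ingredient discussed above, and it produces the pinched hysteresis loop characteristic of memristive devices. All parameters are illustrative, not fitted to any specific material:

```python
import numpy as np

# Generic memristive sketch: dg/dt = (g_eq(V) - g) / tau, I = g * V.
# tau (relaxation time), g0 (baseline conductance), and alpha
# (drive-to-equilibrium coupling) are hypothetical parameters.
tau, g0, alpha = 0.5, 1.0, 0.4
dt, f = 1e-3, 1.0
t = np.arange(0.0, 2.0, dt)        # two periods of the AC drive
v = np.sin(2 * np.pi * f * t)      # sinusoidal voltage drive

g = np.empty_like(t)
g[0] = g0
for k in range(1, len(t)):
    g_eq = g0 + alpha * v[k - 1] ** 2          # equilibrium shifts with drive
    g[k] = g[k - 1] + dt * (g_eq - g[k - 1]) / tau
i = g * v                                       # current through the device
```

Because g lags the drive, i(v) traces a loop that collapses to a point at v = 0 (the "pinched" signature), while in the fast-relaxation limit tau → 0 the loop closes and the memory disappears.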
Procedia PDF Downloads 97
756 A Data-Driven Agent Based Model for the Italian Economy
Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio
Abstract:
We develop a data-driven agent-based model (ABM) for the Italian economy and calibrate its initial conditions and parameters. As a preliminary step, we replicate the Monte Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of the disequilibrium patterns arising in the search and matching process for the final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial-condition approach and perform a robustness analysis, perturbing the system for different parameter setups. We explore the empirical properties of the model using a rolling-window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic, and we analyse the properties of the model with different numbers of agents, that is, with different scales of the model relative to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest-rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports; in this way, we can identify the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence to stress the economy and observe the long-run projections. In this way, the model can generate endogenous crises due to the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises to be reproduced in this artificial economy.
Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data
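One search-and-matching round of the kind the disequilibrium analysis refers to can be sketched as follows; the firm counts, prices, inventories, and visit rule are hypothetical toy values, not the calibrated Italian data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy goods-market round: each household samples a few firms and buys
# one unit from the cheapest firm that still has inventory; demand that
# finds no stocked firm goes unmatched (a disequilibrium pattern).
n_firms, n_households, visits = 20, 100, 3
prices = rng.uniform(0.9, 1.1, n_firms)
inventory = rng.integers(3, 8, n_firms).astype(float)
init_total = inventory.sum()

unmatched = 0
for _ in range(n_households):
    cands = rng.choice(n_firms, size=visits, replace=False)
    cands = cands[np.argsort(prices[cands])]   # visit cheapest first
    for f in cands:
        if inventory[f] >= 1:
            inventory[f] -= 1
            break
    else:
        unmatched += 1                          # no sampled firm had stock
```

In the full ABM this round is repeated each period across the goods, capital, intermediate-goods, and credit markets, and the unmatched demand and unsold inventories are the quantities whose patterns measure allocative (in)efficiency.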
Procedia PDF Downloads 69
755 Study on Novel Reburning Process for NOx Reduction by Oscillating Injection of Reburn Fuel
Authors: Changyeop Lee, Sewon Kim, Jongho Lee
Abstract:
Reburning technology has been developed for adoption in various commercial combustion systems. Fuel-lean reburning is an advanced reburning method that reduces NOx economically without using burnout air; however, it is not easy to achieve high NOx reduction efficiency. In a fuel-lean reburning system, localized fuel-rich eddies are used to establish partial fuel-rich regions so that NOx can react selectively with hydrocarbon radicals. In this paper, a new advanced reburning method that supplies the reburn fuel with an oscillatory motion is introduced to increase the NOx reduction rate effectively. To clarify whether forced oscillating injection of reburn fuel can effectively reduce NOx emission, experimental tests were conducted in a vertical combustion furnace. Experiments were performed on flames stabilized by a gas burner mounted at the bottom of the furnace. Natural gas is used as both the main and reburn fuel, and the total thermal input is about 40 kW. The forced oscillating injection of reburn fuel is realized by an electronic solenoid valve, so that a fuel-rich region and a fuel-lean region are established alternately. In the fuel-rich region, NOx is converted to N2 by the reburning reaction, while unburned hydrocarbons and CO are oxidized in the fuel-lean zone and in the mixing zone downstream, where a slightly fuel-lean region is formed by the mixing of the two regions. This paper reports data on flue gas emissions and temperature distribution in the furnace for a wide range of experimental conditions. All experimental data were measured at steady state. The NOx reduction rate increases up to 41% with forced oscillating reburn motion, while CO emissions were kept at a very low level. The results also make clear that, in order to decrease the NOx concentration in the exhaust when an oscillating reburn fuel injection system is adopted, the control of factors such as frequency and duty ratio is very important.
Keywords: NOx, CO, reburning, pollutant
Procedia PDF Downloads 288
754 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with lung cancer (LC) have an average five-year life expectancy provided the disease is diagnosed, detected, and predicted early, which widens the treatment options beyond the risk of invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter together with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, giving the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) for determining whether the patient's condition is normal or abnormal, and Artificial Neural Networks (ANN) for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
Keywords: artificial neural networks (ANN), discrete wavelet transform (DWT), gray-level co-occurrence matrix (GLCM), k-nearest neighbor (KNN), region of interest (ROI)
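The GLCM texture step mentioned above can be illustrated with a toy example. The 4-level image, the single horizontal offset, and the contrast feature below are illustrative assumptions, not the authors' implementation, which would operate on segmented CT/PET/MRI regions at full gray-level depth.

```python
# Minimal sketch of a GLCM texture feature for ROI characterization.
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def contrast(p):
    """GLCM contrast: sum over i, j of p(i, j) * (i - j)^2."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

img = [[0, 0, 1, 1],   # hypothetical 4x4 image quantized to 4 gray levels
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
print(round(contrast(p), 3))  # 0.583
```

Other GLCM statistics (energy, homogeneity, correlation) follow the same pattern of summing a weight over the normalized co-occurrence matrix.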
Procedia PDF Downloads 153
753 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework
Authors: Ma Cecilia Siva
Abstract:
This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
Keywords: tokenization, sigmoid activation, transformer, multi-category classification
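The sigmoid output layer and the reported metrics can be sketched as follows. The logits and threshold below are hypothetical; the point is that sigmoid activation yields an independent yes/no decision per strand, and that the stated precision (80%) and recall (100%) are arithmetically consistent with the reported F1 of about 88%.

```python
import math

# Sketch of independent multi-label decisions via sigmoid activation,
# plus the precision/recall/F1 arithmetic behind the reported metrics.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(logits, threshold=0.5):
    """One independent binary decision per strand (multi-label)."""
    return [1 if sigmoid(z) >= threshold else 0 for z in logits]

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(predict([2.0, -1.0, 0.3]))  # [1, 0, 1]: strands 1 and 3 recommended
print(round(f1(0.80, 1.00), 2))   # 0.89, consistent with the ~88% reported
```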
Procedia PDF Downloads 8
752 Awarding Copyright Protection to Artificial Intelligence Technology for its Original Works: The New Way Forward
Authors: Vibhuti Amarnath Madhu Agrawal
Abstract:
Artificial Intelligence (AI) and Intellectual Property are two emerging fields that are growing at a fast pace and have the potential to have a huge impact on the economy in the coming times. In simple words, AI is work done by a machine without any human intervention. It is coded software embedded in a machine which, over a period of time, develops its own intelligence and begins to make its own decisions and judgments by studying the various patterns of how people think, react to situations, and perform tasks, among others. Intellectual Property, especially Copyright Law, on the other hand, protects the rights of individuals and companies in content creation, which primarily deals with the application of intellect, originality, and the expression of the same in some tangible form. According to reports shared by the media lately, ChatGPT, an AI-powered chatbot, has been involved in the creation of a wide variety of original content, including but not limited to essays, emails, plays, and poetry. Besides, there have been instances wherein AI technology has given creative inputs for background, lights, and costumes, among others, for films. Copyright Law offers protection to all of these different kinds of content and much more. Considering the two key parameters of Copyright, application of intellect and originality, the question therefore arises whether awarding Copyright protection to a person who has not directly invested his or her intellect in the creation of that content would go against the basic spirit of Copyright laws. This study aims to analyze the current scenario and provide answers to the following questions: (a) If the content generated by AI technology satisfies the basic criteria of originality and expression in a tangible form, why should such content be denied protection in the name of its creator, i.e., the specific AI tool or technology? (b) Considering the increasing role and development of AI technology in our lives, should it be given the status of a 'Legal Person' in law? (c) If yes, what should be the modalities of awarding protection to the works of such a Legal Person and of managing the same? Considering the current trends and the pace at which AI is advancing, it is not far off when AI will start functioning autonomously in the creation of new works. Current data and opinions on this issue globally are divided and lack uniformity. In order to fill the existing gaps, data obtained from the Copyright offices of the top economies of the world have been analyzed, and the role and functioning of various Copyright Societies in these countries has been studied in detail. This paper provides a roadmap that can be adopted to satisfy the various objectives, constraints, and dynamic conditions related to AI technology and its protection under Copyright Law.
Keywords: artificial intelligence technology, copyright law, copyright societies, intellectual property
Procedia PDF Downloads 71
751 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling
Authors: Dong Wu, Michael Grenn
Abstract:
Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction, and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's significant role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and these quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.
Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction
Procedia PDF Downloads 79
750 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Hereby, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations was able to discover so far unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
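The idea of a regression that also identifies correlations of products of variables can be sketched with ordinary least squares over an extended feature set. The data, the two-variable interaction term, and the normal-equations solver below are illustrative assumptions, not the authors' algorithm or WORHP itself.

```python
# Sketch of a data-based regression whose feature set includes a product
# of variables, so that interactions like x1*x2 are identified.
def features(x1, x2):
    return [1.0, x1, x2, x1 * x2]  # constant, linear, and product terms

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit(samples, targets):
    """Least squares via the normal equations X'X beta = X'y."""
    X = [features(x1, x2) for x1, x2 in samples]
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * t for r, t in zip(X, targets)) for i in range(k)]
    return solve(xtx, xty)

data = [(1, 2), (2, 1), (3, 3), (0, 1), (2, 4), (4, 0)]
y = [2 + 0.5 * a - b + 0.25 * a * b for a, b in data]  # known interaction
print([round(c, 3) for c in fit(data, y)])  # recovers [2.0, 0.5, -1.0, 0.25]
```

Because the interaction coefficient 0.25 is recovered, the product term x1*x2 is identified as a genuine correlation rather than being smeared across the single-variable terms.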
Procedia PDF Downloads 409
749 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, a requirement that convex optimization cannot satisfy. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on the mixed integer programming technique, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the effectiveness of the algorithm is proposed.
This chart uses a monitoring statistic that characterizes the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display process.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
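The key idea, fixing the exact number of enclosed normal points so the Phase I false-alarm proportion is controlled by construction, can be illustrated with a toy centroid-based stand-in. The full method solves a mixed integer program (and can use kernels); the centroid, the five 2-D points, and the count below are illustrative assumptions.

```python
import math

# Toy sketch: the smallest sphere around a fixed center that encloses
# EXACTLY n_inside points, so the Type I rate is (n - n_inside) / n
# by construction. The paper optimizes center and radius via MIP instead.
def boundary_radius(points, n_inside):
    """Smallest radius around the centroid enclosing exactly n_inside points."""
    d = len(points[0])
    center = [sum(p[i] for p in points) / len(points) for i in range(d)]
    dists = sorted(math.dist(p, center) for p in points)
    return center, dists[n_inside - 1]

pts = [(0.0, 0.1), (0.2, 0.0), (-0.1, 0.1), (0.1, -0.2), (3.0, 3.0)]
center, r = boundary_radius(pts, n_inside=4)
inside = sum(1 for p in pts if math.dist(p, center) <= r)
print(inside)  # exactly 4 points inside; Type I rate fixed at 1/5
```

Counting points is exactly the integer constraint that convex formulations cannot express, which is why the paper resorts to mixed integer programming.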
Procedia PDF Downloads 174
748 Characterization of Soil Microbial Communities from Vineyard under a Spectrum of Drought Pressures in Sensitive Area of Mediterranean Region
Authors: Gianmaria Califano, Júlio Augusto Lucena Maciel, Olfa Zarrouk, Miguel Damasio, Jose Silvestre, Ana Margarida Fortes
Abstract:
Global warming, with rapid and sudden changes in meteorological conditions, is one of the major constraints to ensuring agricultural and crop resilience in the Mediterranean regions. Several strategies are being adopted to reduce the pressure of drought stress on grapevines at regional and local scales: improvements in irrigation systems, adoption of interline cover crops, and adaptation of pruning techniques. However, still more can be achieved if the microbial compartments associated with plants are also considered in crop management. It is known that the microbial community changes according to several factors, such as latitude, plant variety, age, rootstock, soil composition, and agricultural management system. Considering the increasing pressure of biotic and abiotic stresses, it is of utmost necessity to also evaluate the effects of drought on the microbiome associated with the grapevine, which is a commercially important crop worldwide. In this study, we characterize the diversity and structure of the microbial community under three long-term irrigation levels (100% ETc, 50% ETc, and rain-fed) in Syrah, a drought-tolerant grapevine cultivar grown worldwide. To avoid the limitations of culture-dependent methods, amplicon sequencing with target primers for bacteria and fungi was applied to the same soil samples. The use of the DNeasy PowerSoil (Qiagen) extraction kit required further optimization, with the use of lytic enzymes and heating steps, to systematically improve DNA yield and quality across biological treatments. Target regions (16S rRNA and ITS genes) of our samples are being sequenced with Illumina technology. With bioinformatic pipelines, it will be possible to characterize the bacterial and fungal diversity, structure, and composition.
Further, the microbial communities will be assessed for their functional activity, which remains an important metric considering the strong inter-kingdom interactions existing between plants and their associated microbiome. The results of this study will lay the basis for biotechnological applications: in combination with the establishment of a bacterial library, it will be possible to explore the testing of synthetic microbial communities to support plant resistance to water scarcity.
Keywords: microbiome, metabarcoding, soil, grapevine, Syrah, global warming, crop sustainability
Procedia PDF Downloads 123
747 Analyzing Oil Seeps Manifestations and Petroleum Impregnation in Northwestern Tunisia From Aliphatic Biomarkers and Statistical Data
Authors: Sawsen Jarray, Tahani Hallek, Mabrouk Montacer
Abstract:
The tectonic deformation of northwestern Tunisia is reflected in the region's numerous oil seeps. Finding a genetic link between these oil seeps and the area's putative source rocks is the goal of this investigation. Here, we use aliphatic biomarkers measured by GC-MS to describe the organic geochemical data of 18 oil seep samples and 4 source rocks (M'Cherga, Fahdene, Bahloul, and BouDabbous). In order to establish oil-oil and oil-source rock correlations, terpane, hopane, and sterane biomarkers were identified. The source rocks under study were deposited in a suboxic marine environment, with minor signs of continental input for the M'Cherga Formation. According to the biomarkers C27 18α-22,29,30-trisnorneohopane (Ts) and C27 17α-22,29,30-trisnorhopane (Tm), these source rocks are mature and have reached the oil window. Regarding the oil seeps, geochemical data indicate that, with the exception of four samples that showed some continental markings, the bulk of the samples were deposited in an open marine environment. These latter samples have a distinct lithology (marl) that distinguishes them from the others (carbonate). Statistical analysis of the oil-oil and oil-source rock relationships indicates two classes of oil seeps. The first comprises samples showing a positive correlation with the carbonate-lithology, marine-derived BouDabbous black shales. The second derives from the M'Cherga source rock and is made up of oil seeps with remnants of the terrestrial environment and a lithology with a marl trend. The Fahdene and Bahloul source rocks show no connection to the studied oil seeps.
In addition to the existence of two generations of hydrocarbon spills in northwestern Tunisia (Lower Cretaceous/Ypresian), there are two different types of hydrocarbon spills depending on their link to tectonic deformations (oil seeps) or to outcropping mature source rocks (oil impregnations).
Keywords: petroleum seeps, source rocks, biomarkers, statistics, Northern Tunisia
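The Ts and Tm hopanes named above are conventionally combined into the Ts/(Ts + Tm) maturity parameter computed from GC-MS peak areas. The peak areas below are made-up values for illustration only, not the study's measurements.

```python
# Sketch of the standard Ts/(Ts + Tm) thermal maturity parameter from
# GC-MS peak areas; higher ratios generally indicate higher maturity.
def ts_ratio(ts_area, tm_area):
    """Ts/(Ts + Tm) from integrated peak areas of the two C27 hopanes."""
    return ts_area / (ts_area + tm_area)

samples = {"seep_A": (420.0, 380.0),  # hypothetical (Ts, Tm) peak areas
           "seep_B": (150.0, 450.0)}
for name, (ts, tm) in samples.items():
    print(name, round(ts_ratio(ts, tm), 3))
```

Comparing such ratios between seeps and candidate source rocks is one simple numeric input to the oil-source correlations described in the abstract.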
Procedia PDF Downloads 69
746 Economic Assessment of the Fish Solar Tent Dryers
Authors: Collen Kawiya
Abstract:
In an effort to reduce post-harvest losses and improve the supply of quality fish products in Malawi, fish solar tent dryers have been designed in the southern part of Lake Malawi for processing small fish species under the Cultivate Africa's Future (CultiAF) project. This study was done to promote the adoption of fish solar tent dryers by the many small-scale fish processors in Malawi through an assessment of the economic viability of these dryers. Using the project's baseline survey data, a business model for a constructed, ready-for-use solar tent dryer was developed, in which investment appraisal techniques were applied together with a sensitivity analysis. The study also conducted a risk analysis using the Monte Carlo simulation technique, yielding a probabilistic net present value. The investment appraisal results showed that the net present value was US$8,756.85, the internal rate of return was 62%, higher than the 16.32% cost of capital, and the payback period was 1.64 years. The sensitivity analysis showed that only two input variables influenced the fish solar dryer investment's net present value: the dried fish selling prices, which correlated positively with the net present value, and the fresh fish buying prices, which correlated negatively with it. The risk analysis showed that the probability that fish processors will make a loss from this type of investment is 17.56%, and that there is only a 0.20 probability of experiencing a negative net present value. Lastly, the study found that the net present value of the fish solar tent dryer investment remains robust in spite of any changes in the levels of investors' risk preferences.
With these results, it is concluded that the fish solar tent dryers in Malawi are an economically viable investment because they are able to improve the returns from the fish processing activity. As such, fish processors should adopt them by investing their money to construct and use them.
Keywords: investment appraisal, risk analysis, sensitivity analysis, solar tent drying
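The appraisal arithmetic behind NPV, IRR, payback, and the Monte Carlo probability of loss can be sketched as follows. The cash flows, discount rate usage, and price-shock distribution are hypothetical stand-ins, not the study's data (the study reports NPV US$8,756.85, IRR 62%, payback 1.64 years).

```python
import random

# Sketch of investment appraisal: NPV, IRR (by bisection), payback period,
# and a Monte Carlo estimate of the probability of a negative NPV.
def npv(rate, flows):
    """Net present value of cash flows indexed by period t = 0, 1, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=10.0, tol=1e-7):
    """Rate where NPV crosses zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback(flows):
    """First period at which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-5000.0, 3000.0, 3500.0, 4000.0]  # hypothetical cash flows (US$)
print(round(npv(0.1632, flows), 2), round(irr(flows), 3), payback(flows))

# Probability of loss: resample the price-driven inflows many times.
random.seed(1)
trials = 10_000
losses = sum(
    npv(0.1632, [flows[0]] + [cf * random.uniform(0.5, 1.2) for cf in flows[1:]]) < 0
    for _ in range(trials)
)
print(losses / trials)  # estimated P(NPV < 0)
```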
Procedia PDF Downloads 278
745 Theoretical Comparisons and Empirical Illustration of Malmquist, Hicks–Moorsteen, and Luenberger Productivity Indices
Authors: Fatemeh Abbasi, Sahand Daneshvar
Abstract:
Productivity is one of the essential goals of companies seeking to improve performance; as a strategy-oriented measure, it determines the basis of a company's economic growth. The history of productivity goes back centuries, but in the early twentieth century most researchers defined productivity as the relationship between a product and the factors used in its production. Productivity as the optimal use of available resources, meaning "more output using less input", can increase companies' economic growth and capacity for prosperity. Also, having a quality of life based on economic progress depends on productivity growth in that society. Therefore, productivity is a national priority for any developed country. There are several methods for measuring productivity growth, which can be divided into parametric and non-parametric methods. Parametric methods rely on the existence of a functional form in their hypotheses, while non-parametric methods do not require a function and are based on empirical evidence. One of the most popular non-parametric methods is Data Envelopment Analysis (DEA), which measures changes in productivity over time. DEA evaluates the productivity of decision-making units (DMUs) based on mathematical models. This method uses multiple inputs and outputs to compare the productivity of similar DMUs such as banks, government agencies, companies, airports, etc. Non-parametric methods are themselves divided into frontier and non-frontier approaches. The Malmquist productivity index (MPI) proposed by Caves, Christensen, and Diewert (1982), the Hicks–Moorsteen productivity index (HMPI) proposed by Bjurek (1996), and the Luenberger productivity indicator (LPI) proposed by Chambers (2002) are powerful tools for measuring productivity changes over time.
This study will compare the Malmquist, Hicks–Moorsteen, and Luenberger indices theoretically and empirically based on DEA models and review their strengths and weaknesses.
Keywords: data envelopment analysis, Hicks–Moorsteen productivity index, Luenberger productivity indicator, Malmquist productivity index
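The Malmquist index can be illustrated in a deliberately simplified setting: one input, one output, and a frontier defined by the best output/input ratio. Real DEA applications solve linear programs for the distance functions; the DMU data below are made up for illustration.

```python
import math

# Toy sketch of a Malmquist productivity index with a single input and
# output, where the best-practice frontier is the max output/input ratio.
def efficiency(unit, frontier_units):
    """Output-oriented efficiency of (input, output) against a period's frontier."""
    best = max(y / x for x, y in frontier_units)
    x, y = unit
    return (y / x) / best

def malmquist(unit0, unit1, period0, period1):
    """Geometric mean of the index measured against both period frontiers
    (Caves-Christensen-Diewert form)."""
    return math.sqrt(
        (efficiency(unit1, period0) / efficiency(unit0, period0))
        * (efficiency(unit1, period1) / efficiency(unit0, period1))
    )

period0 = [(10.0, 8.0), (5.0, 3.0), (8.0, 8.0)]   # (input, output) per DMU
period1 = [(10.0, 9.0), (5.0, 4.0), (8.0, 9.0)]
mpi = malmquist(period0[1], period1[1], period0, period1)
print(round(mpi, 3))  # 1.333 > 1: productivity growth for DMU 2
```

In this single-ratio setting the index collapses to the growth of the DMU's own output/input ratio; with multiple inputs and outputs the DEA distance functions no longer cancel, which is where the Malmquist, Hicks–Moorsteen, and Luenberger measures start to differ.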
Procedia PDF Downloads 194
744 Thriving Private-Community Partnerships in Ecotourism: Perspectives from Fiji’s Upper Navua Conservation Area
Authors: Jeremy Schultz, Kelly Bricker
Abstract:
Ecotourism has proven itself to be a forerunner in the advancement of environmental conservation, all the while supporting cultural tradition, uniqueness, and pride among indigenous communities. Successful private-community partnerships associated with ecotourism operations are vital to the overall prosperity of both the businesses and the local communities. Such accomplishments can be seen in numerous livelihood goals, including income, food security, health, reduced vulnerability, governance, and empowerment. Private-community partnerships also support global initiatives such as the Sustainable Development Goals and sustainable development frameworks, including those proposed by the United Nations World Tourism Organization (UNWTO). Understanding such partnerships assists not only large organizations such as the UNWTO but also smaller ecotourism operators and entrepreneurs who are trying to achieve their sustainable tourism development goals. This study examined the partnership between an ecotourism company (Rivers Fiji) and two rural villages located in Fiji's Upper Navua Conservation Area. Focus groups were conducted in each village, and observation journals were used to record conversations outside of the focus groups. Data were thematically organized and analyzed to offer the researchers' interpretations and understandings. This research supported the notion that respectful and empowering partnerships between communities and private enterprise are vital to successful ecotourism operations that support sustainable development protocols. Understanding these partnerships can assist in shaping future ecotourism development and the re-molding of existing businesses. This study offers an example of a thriving partnership through community input and critical researcher analysis. Research has identified six contributing factors to successful ecotourism partnerships, and this study provides additional support for that framework.
Keywords: community partnerships, conservation areas, ecotourism, Fiji, sustainability
Procedia PDF Downloads 135
743 Design and Implementation of 3 kVA Grid-Tied Transformerless Power Inverter for Solar Photovoltaic Application
Authors: Daniel O. Johnson, Abiodun A. Ogunseye, Aaron Aransiola, Majors Samuel
Abstract:
A power inverter is a very important device in renewable energy use, particularly for solar photovoltaic power applications, because it is the effective interface between the DC power generator and the load or the grid. The transformerless inverter is increasingly preferred to the power converter with a galvanic isolation transformer and may eventually supplant it. The transformerless inverter offers the advantages of improved DC-to-AC conversion and power delivery efficiency, and reduced system cost, weight, and complexity. This work presents a thorough analysis of the design and prototyping of a 3 kVA grid-tied transformerless inverter. The inverter employs an electronic switching method with minimized heat generation in the system and operates based on the principle of pulse-width modulation (PWM). The design is such that it can take two inputs, one from PV arrays and the other from Battery Energy Storage (BES), and it addresses the safety challenge of leakage current. The inverter system was designed around a microcontroller and modeled with Proteus software for simulation and testing of the viability of the designed inverter circuit. The firmware governing the operation of the grid-tied inverter is written in the C language and was developed using the mikroC compiler by MikroElektronika, including the sine wave signal code for synchronization to the grid. The simulation results show that the designed inverter circuit performs excellently, with very high efficiency, a good-quality sinusoidal output waveform, negligible harmonics, and very stable performance under input voltage variation from 36 VDC to 60 VDC. The prototype confirmed the simulated results and was successfully synchronized with the utility supply. Comprehensive analyses of the circuit design, the prototype, and the overall performance will be presented.
Keywords: grid-tied inverter, leakage current, photovoltaic system, power electronics, transformerless inverter
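Sinusoidal PWM firmware of the kind described typically precomputes a sine lookup table of duty-cycle compare values. The sample count (64) and 8-bit timer resolution below are illustrative assumptions, not details of the authors' C firmware.

```python
import math

# Sketch of the sine-table generation behind sinusoidal PWM: timer compare
# values tracing half a sine wave, scaled to an 8-bit PWM register.
def sine_table(samples=64, top=255):
    """Duty-cycle compare values for one half-cycle of the sine reference."""
    return [round(top * math.sin(math.pi * i / samples)) for i in range(samples)]

table = sine_table()
print(table[0], max(table), len(table))  # 0 255 64
```

At run time the firmware steps through this table at a rate set by the grid frequency (50/60 Hz), inverting the bridge polarity for the second half-cycle, and phase-locks the table index to the utility voltage for synchronization.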
Procedia PDF Downloads 292
742 Active Control Effects on Dynamic Response of Elevated Water Storage Tanks
Authors: Ali Etemadi, Claudia Fernanda Yasar
Abstract:
Elevated water storage tank structures (EWSTs) are tall, ponderous structural systems that are very vulnerable to seismic vibrations. In past earthquake events, many of these structures exhibited poor performance and experienced severe damage. The dynamic analysis of EWSTs under earthquake loads is, therefore, of significant importance for the design of the structure and a key issue for the development of modern methods, such as active control design. In this study, a reduced model of the EWSTs is explained, which is based on a tuned mass damper (TMD) model. Vibration analysis of a structure under seismic excitation is presented and then used to propose an active vibration controller. MATLAB/Simulink is employed for the dynamic analysis of the system and the control of the seismic response. Single-degree-of-freedom (SDOF) and two-degree-of-freedom (2DOF) models of EWSTs are used to study the concept of active vibration control. Lab-scale experimental models similar to a pendulum are applied to suppress vibrations in EWSTs under seismic excitation. One of the most important phenomena in liquid storage tanks is the oscillation of the fluid due to the movement of the tank body caused by its base motion during an earthquake. Simulation results illustrate that the EWST vibration can be reduced by means of an input-shaping technique that takes into account the dominant mode shape of the structure. The simulations that guide many of our designs are presented in detail. A simple and effective real-time control for seismic vibration damping can therefore be designed and built in practice.
Keywords: elevated water storage tank, tuned mass damper model, real-time control, shaping control, seismic vibration control, Laplace transform
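The SDOF analysis step can be sketched numerically. The mass, damping, stiffness, and harmonic base-excitation values below are illustrative assumptions, not the paper's tank model, and a simple semi-implicit Euler scheme stands in for the MATLAB/Simulink solver.

```python
import math

# Sketch of an SDOF structure under base excitation, m*x'' + c*x' + k*x
# = -m*a_g(t), integrated with semi-implicit Euler.
def sdof_response(m, c, k, accel_g, dt, steps):
    """Relative displacement history under base acceleration accel_g(t)."""
    x, v, history = 0.0, 0.0, []
    for n in range(steps):
        a = (-m * accel_g(n * dt) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        history.append(x)
    return history

quake = lambda t: 0.3 * 9.81 * math.sin(2 * math.pi * 1.5 * t)  # 1.5 Hz shaking
disp = sdof_response(m=1.0e5, c=2.0e4, k=4.0e6, accel_g=quake, dt=0.005, steps=2000)
print(round(max(abs(d) for d in disp), 4))  # peak relative displacement (m)
```

A TMD or active controller is modeled by adding a second mass with its own spring, damper, and control force, extending this state update to the 2DOF case in the same way.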
Procedia PDF Downloads 152
741 Nowcasting Indonesian Economy
Authors: Ferry Kurniawan
Abstract:
In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency monthly indicators within a mixed-frequency factor model that uses both quarterly and monthly data. Nowcasting quarterly GDP is particularly relevant for the central bank of Indonesia, which sets the policy rate at the monthly Board of Governors Meeting; one of the important steps in that meeting is the assessment of the current state of the economy. An accurate and up-to-date quarterly GDP nowcast, refreshed every time new monthly information becomes available, is therefore clearly of interest to the central bank, for example because the initial assessment of the current state of the economy, including the nowcast, is used as an input to longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce the nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare its nowcasts with two alternative approaches: an autoregressive model, which is often difficult to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and the MIDAS nowcasts are produced from the same set of monthly indicators, so we can compare the performance of the two approaches directly. To preview the results, we find that exploiting monthly indicators through either the mixed-frequency factor model or MIDAS regression improves nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether MIDAS or the mixed-frequency factor model is better: neither set of nowcasts encompasses the other, suggesting that both are valuable in nowcasting GDP but neither is sufficient on its own.
By combining the two individual nowcasts, we find that the combination not only increases accuracy relative to the individual nowcasts but also lowers the risk of the worst individual performance.
Keywords: nowcasting, mixed-frequency data, factor model, nowcast combination
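The benefit of combining nowcasts can be demonstrated on synthetic data. This sketch assumes two unbiased nowcasts with independent errors; all numbers are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
truth = rng.normal(5.0, 1.0, T)              # "actual" y/y GDP growth
# two individual nowcasts with independent errors
factor = truth + rng.normal(0.0, 0.50, T)    # stand-in for the factor model
midas = truth + rng.normal(0.0, 0.55, T)     # stand-in for MIDAS

def rmse(pred):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

combo = 0.5 * (factor + midas)               # equal-weight combination
# rmse(combo) beats both individual nowcasts when their errors are
# imperfectly correlated; the combination also bounds the worst case,
# since |e_combo| <= max(|e_factor|, |e_midas|) at every date
```

The pointwise bound on the combined error is what drives the "lower risk of worst performance" result: equal-weight averaging can never do worse at any date than the worse of the two inputs at that date.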
Procedia PDF Downloads 331
740 Control Algorithm Design of a Single-Phase Inverter for ZnO Breakdown Characteristics Tests
Authors: Kashif Habib, Zeeshan Ayyub
Abstract:
ZnO voltage-dependent resistors are widely used in electrical systems for over-voltage protection and have broad application prospects in superconducting energy removal, generator de-excitation, and overvoltage protection of electrical and electronic equipment. At present, however, research on the ZnO voltage-dependent resistor stops at its nonlinear voltage-current characteristic and its use in overvoltage protection. There is no further study of its over-voltage breakdown characteristics, such as the combustion phenomena, the voltage and current at breakdown, and the effect on surrounding equipment; this remains a blind spot in its application. To test these characteristics of the ZnO voltage-dependent resistor, a suitable test power supply must be designed that keeps the terminal voltage sinusoidal, simulating real power-frequency (PF) supply conditions. We propose using an inverter to generate such a controllable supply. This paper focuses on the breakdown-characteristics test power supply for the nonlinear ZnO voltage-dependent resistor. Building on mature switching power supply technology, we propose a power control system with an inverter at its core. The supply produces a sinusoidal voltage output from a three-phase power-frequency AC input and provides three control modes (RMS, peak, average) for the output current. We choose the TMS320F2812M as the control core of the hardware platform. It converts the three-phase input into a controlled single-phase sinusoidal voltage through a rectifier, filter, and inverter. The designed controller produces SPWM to obtain the controlled voltage source via an appropriate multi-loop control strategy, while also executing data acquisition and display, system protection, start-up logic control, etc.
The TMS320F2812M is able to complete the multi-loop control quickly and achieves good control of the inverter output.
Keywords: ZnO, multi-loop control, SPWM, non-linear load
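As a toy illustration of one outer loop in such a multi-loop scheme, an RMS-voltage loop with integral action can be sketched in Python. This is an idealised model, not the authors' TMS320F2812M implementation; the target voltage, DC-link voltage, and gain are invented values:

```python
import numpy as np

def regulate_rms(v_target=230.0, vdc=400.0, ki=0.001, cycles=50):
    """Outer RMS loop: once per fundamental cycle, measure the output
    RMS voltage and let an integral controller trim the SPWM modulation
    index m. The power stage is idealised as v_out = m * vdc * sin(wt)."""
    wt = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    m = 0.5                                   # initial modulation index
    for _ in range(cycles):
        v_out = m * vdc * np.sin(wt)          # idealised inverter output
        v_rms = np.sqrt(np.mean(v_out ** 2))
        m = min(max(m + ki * (v_target - v_rms), 0.0), 1.0)  # integral step
    return m, v_rms

m, v_rms = regulate_rms()
# m converges to v_target * sqrt(2) / vdc, so v_rms approaches v_target
```

A real multi-loop design would wrap this amplitude loop around faster inner voltage and current loops; here the inner dynamics are collapsed into the ideal power-stage model to keep the integral-action idea visible.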
Procedia PDF Downloads 325
739 An Ethnographic Study of Workforce Integration of Health Care Workers with Refugee Backgrounds in Ageing Citizens in Germany
Authors: A. Ham, A. Kuckert-Wostheinrich
Abstract:
Demographic changes, such as the ageing population in European countries, the shortage of nursing staff, the increasing number of people with severe cognitive impairment, and the growing number of socially isolated elderly people, raise important questions about who will provide long-term care for ageing citizens. In response to the so-called refugee crisis of 2015, some health care institutions for ageing citizens in Europe invited first-generation immigrants to start a nursing career, providing them with language training, nursing training, and internships. The aim of this ethnographic research was to explore the social processes affecting workforce integration and how newcomers enact good care for ageing citizens in a German nursing home. Data were collected through ethnographic fieldwork: 200 hours of participant observation, 25 in-depth interviews with immigrants and established staff, and two focus groups, one with six immigrants and one with six established staff members. The health care institution provided the newcomers with a nursing programme covering psychogeriatric theory and nursing skills in the psychogeriatric field, together with profession-oriented language training; health prevention courses and theatre plays accompanied the training. The knowledge acquired in the classroom could be applied in internships on the wards. Additionally, diversity and inclusivity courses were given to established personnel to foster cultural awareness and sensitivity; staff learned to develop a collegial attitude of respect and appreciation regardless of gender, nationality, ethnicity, religion or belief, age, sexual orientation, disability, or identity. The qualitative data show that social processes such as organizational constraints, staff shortages, and a demanding workload affected workforce integration. However, zooming in on the interactions between newcomers and residents, we observed how they tinkered to enact good care through embodied caring, playing games, singing, and dancing.
Through situational acting and practical wisdom in nursing care, the newcomers could meet the needs of ageing residents. Thus, when health care institutions open up nursing programmes to newcomers with refugee backgrounds and focus on talent instead of shortcomings, they can draw out previously unrecognised competencies, attitudes, skills, and expertise and develop excellent nurses for excellent care.
Keywords: established staff, Germany, nursing, refugees
Procedia PDF Downloads 105