Search results for: gaming asset automation
92 Internet-Of-Things and Ergonomics, Increasing Productivity and Reducing Waste: A Case Study
Authors: V. Jaime Contreras, S. Iliana Nunez, S. Mario Sanchez
Abstract:
Inside a manufacturing facility, we can find innumerable automatic and manual operations, all of which are relevant to the production process. Some of these processes add more value to the product than others. Manual operations tend to add value to the product since they are found in the final assembly area or the final operations of the process, areas where a mistake or accident can increase the cost of waste exponentially. To reduce or mitigate these costly mistakes, one approach is to rely on automation to eliminate the operator from the production line, but this requires a hefty investment and the development of specialized machinery. In our approach, the center of the solution is the operator, supported through sufficient and adequate instrumentation, real-time reporting, and ergonomics. Efficiency and reduced cycle time can be achieved through the integration of Internet-of-Things (IoT) ready technologies into assembly operations to enhance the ergonomics of the workstations. Augmented reality visual aids, RFID-triggered personalized workstation dimensions, and real-time data transfer and reporting can help achieve these goals. In this case study, a standard work cell will be used for real-life data acquisition, and simulation software will extend the data points beyond the test cycle. Three comparison scenarios will run in the work cell. Each scenario will introduce one dimension of the ergonomics to measure its impact independently. Furthermore, the separate tests will determine the limitations of the technology and provide a reference for operating costs and the investment required. With the ability to monitor costs, productivity, cycle time, and scrap/waste in real time, the ROI (return on investment) can be determined at the different levels of integration. This case study will help to show that ergonomics in assembly lines can make a significant impact when IoT technologies are introduced.
Ergonomics can effectively reduce waste and increase productivity with minimal investment compared with setting up a custom machine.
Keywords: augmented reality visual aids, ergonomics, real-time data acquisition and reporting, RFID-triggered workstation dimensions
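As a minimal sketch of the RFID-triggered personalized workstation dimensions mentioned above: a badge scan looks up the operator's anthropometric profile and derives bench settings. The profile fields, tag IDs, and the elbow-height rule of thumb are illustrative assumptions, not data from the case study.

```python
# Hypothetical sketch: adjust workstation dimensions from an operator's RFID badge.
# Profiles and the adjustment rule are illustrative assumptions.

OPERATOR_PROFILES = {
    "tag-0042": {"name": "A", "elbow_height_cm": 105, "reach_cm": 68},
    "tag-0077": {"name": "B", "elbow_height_cm": 98, "reach_cm": 62},
}

def workstation_settings(rfid_tag, profiles=OPERATOR_PROFILES):
    """Return bench height and parts-bin distance for the scanned operator.

    A common ergonomic rule of thumb places light-assembly work surfaces
    slightly below elbow height; here we use elbow height minus 5 cm.
    """
    profile = profiles.get(rfid_tag)
    if profile is None:
        # Unknown badge: fall back to a neutral default setting.
        return {"bench_height_cm": 95, "bin_distance_cm": 60}
    return {
        "bench_height_cm": profile["elbow_height_cm"] - 5,
        "bin_distance_cm": min(profile["reach_cm"], 70),  # cap at arm's reach limit
    }
```

In a deployment, the scan event would also be logged for the real-time reporting layer; here the function only returns the derived dimensions.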
Procedia PDF Downloads 214
91 Spatial Design Transformation of Mount Merapi's Dwellings Using Diachronic Approach
Authors: Catharina Dwi Astuti Depari, Gregorius Agung Setyonugroho
Abstract:
In concern for human safety, living in disaster-prone areas is twofold: it is profoundly cataclysmic yet perceptibly contributive. This paradox can be identified in the Kalitengah Lor Sub-village community, who inhabit Mount Merapi's most hazardous area, giving them the highest exposure to eruptions' cataclysmic impacts. After the devastating incident in 2010, through the Action Plan for Rehabilitation and Reconstruction, the National Government, with immediate aid from humanitarian agencies, initiated a relocation program by establishing nearly 2,613 temporary shelters throughout the mountain's region. The problem arose as some of the most affected communities, including those in Kalitengah Lor Sub-village, persistently refused to relocate. The distressing experience of those living in temporary shelters, which resulted from the program's failure to support long-term living, was assumed to have instigated the rejection. From a psychological standpoint, this phenomenon reflects the emotional bond between the affected communities and their former dwellings. Accordingly, this paper aims to reveal the factors influencing the emotional attachment of the Kalitengah Lor community to their former dwellings, including the dwellings' spatial design transformation before and after the 2010 eruption. The research adopted a five-point Likert-scale questionnaire comprising a range of responses from strongly agree to strongly disagree. The responses were then statistically measured, leading to a consensus that provides the basis for further interpretation of the locals' characteristics. Using a purposive unit sampling technique, 50 respondents were selected from 217 local households. Questions in the questionnaire were developed with attention to the aspects of the place attachment concept: affection, cognition, behavior, and perception.
Combined with the quantitative method, the research adopted a diachronic method aimed at analyzing the spatial design transformation of each dwelling in relation to the inhabitants' daily activities and personal preferences. The research found that access to natural resources such as sand mines, agricultural farms, and wood forests, social relationships, and the physical proximity of the house to personal assets such as the cattle shed are the dominant factors encouraging the locals to remain emotionally attached to their former dwellings. Consequently, each dwelling's spatial design has undergone changes: the current house is typically larger in dimension, and the bathroom has been replaced by a public toilet located outside the house's backyard. Relatively unchanged, the cattle shed is still located in front of the house, the continuous visual relationship, particularly between the living and family rooms, is maintained, as is the main orientation of the house towards the local street.
Keywords: diachronic method, former dwellings, locals' characteristics, place attachment, spatial design transformation
Procedia PDF Downloads 167
90 Developing a Web-Based Tender Evaluation System Based on Fuzzy Multi-Attributes Group Decision Making for Nigerian Public Sector Tendering
Authors: Bello Abdullahi, Yahaya M. Ibrahim, Ahmed D. Ibrahim, Kabir Bala
Abstract:
Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent, and more prone to manipulation and errors. The advent of the Internet and the World Wide Web has led to the development of numerous e-Tendering systems that address some of the problems associated with the manual paper-based tendering system. However, most of these systems rarely support the evaluation of tenders, and where they do, it is mostly based on a single decision maker. This is not suitable in public sector tendering, where, for the sake of objectivity, transparency, and fairness, the evaluation is required to be conducted by a tender evaluation committee. Currently, in Nigeria, the public tendering process in general, and the evaluation of tenders in particular, are largely conducted using manual paper-based processes. Automating these manual processes into digital ones can help enhance the proficiency of public sector tendering in Nigeria. This paper is part of a larger study to develop an electronic tendering system that supports the whole tendering lifecycle based on Nigerian procurement law. Specifically, this paper presents the design and implementation of the part of the system that supports group evaluation of tenders based on a technique called fuzzy multi-attribute group decision making. The system was developed using object-oriented methodologies and the Unified Modelling Language and was hypothetically applied to the evaluation of technical and financial proposals submitted by bidders. The system was validated by professionals with extensive experience in public sector procurement.
The results of the validation showed that the system, called NPS-eTender, has an average rating of 74% with respect to correct and accurate modelling of the existing manual tendering domain and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Thus, based on the results of the validation, the automation of the evaluation process to support a tender evaluation committee is achievable and can lead to a more proficient public sector tendering system.
Keywords: e-Tendering, e-Procurement, group decision making, tender evaluation, tender evaluation committee, UML, object-oriented methodologies, system development
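The core of fuzzy multi-attribute group decision making can be sketched in a few lines: each committee member rates each attribute with a triangular fuzzy number, the members' ratings are averaged, defuzzified, and combined with attribute weights into one score per bidder. This is a generic textbook formulation with invented ratings and weights, not the actual NPS-eTender scoring model.

```python
# Minimal sketch of fuzzy multi-attribute group decision making for tender
# evaluation, using triangular fuzzy numbers (l, m, u). The linguistic scale,
# weights, and ratings are illustrative assumptions.

def avg_tfn(ratings):
    """Aggregate committee members' triangular fuzzy ratings by averaging."""
    n = len(ratings)
    return tuple(sum(r[i] for r in ratings) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3

def score_bidder(ratings_by_attribute, weights):
    """ratings_by_attribute: {attribute: [one TFN per committee member]};
    weights: {attribute: weight}, summing to 1."""
    return sum(
        weights[attr] * defuzzify(avg_tfn(member_ratings))
        for attr, member_ratings in ratings_by_attribute.items()
    )

# Example: two attributes rated by three committee members on a 0-10 scale.
bidder = {
    "technical": [(6, 7, 8), (7, 8, 9), (5, 6, 7)],
    "financial": [(8, 9, 10), (7, 8, 9), (8, 9, 10)],
}
weights = {"technical": 0.6, "financial": 0.4}
```

Ranking several bidders then reduces to sorting them by `score_bidder`; real systems add consensus checks across committee members before aggregation.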
Procedia PDF Downloads 261
89 Adopting a New Policy in Maritime Law for Protecting Ship Mortgagees Against Maritime Liens
Authors: Mojtaba Eshraghi Arani
Abstract:
Ship financing is a vital element in the development of the shipping industry because, while the ship constitutes the owner's main asset, she is also considered a reliable security from the financiers' viewpoint. However, it is highly probable that a financier who has accepted a ship as security will face many creditors who are privileged and rank before him in collecting, out of the ship, the money that they are owed. In fact, according to the current rule of maritime law, established by the "Convention Internationale pour l'Unification de Certaines Règles Relatives aux Privilèges et Hypothèques Maritimes, Brussels, 10 April 1926", mortgages, hypothèques, and other charges on vessels rank after several secured claims referred to as "maritime liens". Such maritime liens are an exhaustive list of claims including but not limited to "expenses incurred in the common interest of the creditors to preserve the vessel or to procure its sale and the distribution of the proceeds of sale", "tonnage dues, light or harbour dues, and other public taxes and charges of the same character", "claims arising out of the contract of engagement of the master, crew and other persons hired on board", "remuneration for assistance and salvage", "the contribution of the vessel in general average", "indemnities for collision or other damage caused to works forming part of harbours, docks, etc.", "indemnities for personal injury to passengers or crew or for loss of or damage to cargo", and "claims resulting from contracts entered into or acts done by the master". The same rule survived, with only minor changes in the categories of maritime liens, in the substitute conventions of 1967 and 1993. This status quo in maritime law has always been considered a major obstacle to the development of the shipping market and has inevitably led to increases in interest rates and other related costs of ship financing.
It seems that national and international policy makers have yet to change their minds, being worried about deviation from old marine traditions. However, it is crystal clear that the continuation of the status quo will harm, to a great extent, shipowners and, consequently, international merchants as a whole. It is argued in this article that the raison d'être for many categories of maritime liens has ceased to exist, in view of which the international community should recognize only a minimal set of maritime liens that are created in the common interest of all creditors; to this effect, only the two categories of "compensation due for the salvage of ship" and "extraordinary expenses indispensable for the preservation of the ship" should be declared as taking priority over the mortgagee's rights, in analogy with the Geneva Convention on the International Recognition of Rights in Aircraft (1948). A qualitative method based on the interpretation of collected data has been used in this manuscript. The source of the data is the analysis of international conventions and domestic laws.
Keywords: ship finance, mortgage, maritime liens, Brussels Convention, Geneva Convention 1948
Procedia PDF Downloads 72
88 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review
Authors: Anastasia Tsakiridi
Abstract:
Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. GIS has also been applied in scientific fields like geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives on demographic trends, spending indicators, and network analysis. To date, information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the last five years of peer-reviewed academic literature was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in three databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM fall into the following domains: a) network/transportation analysis (in 53 of the papers), b) location-allocation site search/selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by business sector, as indicated by the volume of papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers).
Other industries currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/coal/shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycling and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and by the GIS software used, to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software was the Environmental Systems Research Institute's ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that GIS capabilities can offer substantial benefits in SCM decision-making by providing key insights into cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors currently use GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize SC activities and/or minimize logistics costs, and to consultants and business owners who want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.
Keywords: supply chain management, logistics, systematic literature review, GIS
Procedia PDF Downloads 142
87 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce
Authors: Heiko Diefenbach, Christoph H. Glock
Abstract:
Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process in many logistics systems. Despite the ongoing trend of automation, most order picking systems are still manual picker-to-parts systems, where human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or an incorrect number of items. Errors can cause additional costs and significant correction efforts. Moreover, age might increase a person's likelihood of making mistakes. Hence, the negative impact of picking errors might grow with the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization. Usually, the objective is to assign items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of picking errors. For example, storing similar-looking items apart from one another might reduce confusion. Moreover, storing items that are hard to count, or that require a lot of counting, at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches with their respective benefits and shortcomings are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study.
The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, due to age, for example. The results suggest that considering error-prevention measures in storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results might be especially relevant for an aging workforce.
Keywords: aging workforce, error prevention, order picking, storage assignment
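The trade-off described above can be illustrated with a toy objective: travel effort favors placing high-demand items in the closest slots, while an error-prevention penalty discourages placing similar-looking items in adjacent slots. The cost weights, item data, and brute-force search are illustrative assumptions, not the paper's actual optimization model.

```python
# Toy storage assignment: minimize picking travel plus a confusion penalty
# for similar-looking items in adjacent slots. Data and weights are invented.
from itertools import permutations

def assignment_cost(order, demand, look_group, slot_dist, penalty=10.0):
    """order: tuple of item names assigned to slots 0..n-1 along one aisle."""
    travel = sum(demand[item] * slot_dist[slot] for slot, item in enumerate(order))
    confusion = sum(
        penalty
        for a, b in zip(order, order[1:])
        if look_group[a] == look_group[b]  # similar-looking neighbours
    )
    return travel + confusion

def best_assignment(items, demand, look_group, slot_dist):
    # Brute force is fine for a toy instance; real models use MIP/heuristics.
    return min(permutations(items),
               key=lambda order: assignment_cost(order, demand, look_group, slot_dist))
```

With items A and B looking alike and A picked most often, the pure travel optimum (A, B, C) is displaced by (A, C, B), which separates the confusable pair at a small travel cost.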
Procedia PDF Downloads 204
86 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little publicly available data and is thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory function that traditional neural networks lack.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM model are discussed with respect to varying input sequence lengths (three, seven, and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, empirical mode decomposition (EMD) is used, which improves the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results are compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
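The "varying input sequence lengths" step above amounts to windowing the price series into (past window, next price) training pairs. A minimal sketch of that data shaping follows, with made-up prices; the LSTM itself would come from a deep learning framework and is not shown.

```python
# Sketch of sequence preparation for an LSTM forecaster: slide a window of
# seq_len past prices over the series, with the following day as the target.
# Prices are invented for illustration.

def make_windows(prices, seq_len):
    """Return (inputs, targets): each input holds seq_len consecutive prices,
    each target is the next day's price."""
    inputs, targets = [], []
    for i in range(len(prices) - seq_len):
        inputs.append(prices[i:i + seq_len])
        targets.append(prices[i + seq_len])
    return inputs, targets

prices = [100.0, 100.5, 101.2, 100.8, 101.5, 102.0, 101.7, 102.3]
X3, y3 = make_windows(prices, 3)  # three-day windows
X7, y7 = make_windows(prices, 7)  # seven-day windows
```

In the EMD variant, the same windowing would be applied to each intrinsic mode function separately before feeding the stacked inputs to the network.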
Procedia PDF Downloads 136
85 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. The realization of co-movement helps investors identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 level to the 120 level. His policies, known as "Abenomics", aim to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, as affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets, those of China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement between the stock market of Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by the dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collected empirical data on six stock indexes from the Taiwan Economic Journal.
The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of the returns indicate fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of "Abenomics" on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, that the co-movements of China and Taiwan are stronger than that of Singapore, and that the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
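For reference, the two rank correlation coefficients reported above have simple closed forms for tie-free series. A minimal pure-Python version follows; the sample data in the test is illustrative, not the study's market data.

```python
# Kendall's tau and Spearman's rho for tie-free paired series.

def kendall_tau(x, y):
    """Tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def spearman_rho(x, y):
    """Spearman's rho via the squared rank-difference formula (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

In the study's pipeline these coefficients are computed on the copula-aligned return sequences rather than on raw prices.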
Procedia PDF Downloads 231
84 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results relate directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes.
Due to the better spatial resolution, more accurate input parameters for model calculations can be determined. Through the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentrations in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
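One way to picture the gain in resolution over 10 mm wet chemical intervals: a calibrated 2-D LIBS element map can be collapsed into a depth profile at 1 mm steps by averaging each depth row. The grid values below (chloride in percent by mass of binder) are invented for illustration and are not measurement data from the study.

```python
# Sketch: reduce a calibrated 2-D chloride map to a 1 mm depth profile by
# averaging across measurement spots at each depth. Values are invented.

def depth_profile(cl_map):
    """cl_map: one row per 1 mm depth step from the exposed surface;
    returns the mean chloride content at each depth."""
    return [sum(row) / len(row) for row in cl_map]

# 3 mm deep x 4 measurement spots wide; chloride decreases with depth,
# as expected for ingress from the exposed surface.
cl_map = [
    [0.8, 0.9, 1.0, 0.9],  # 0-1 mm
    [0.5, 0.6, 0.5, 0.4],  # 1-2 mm
    [0.2, 0.1, 0.2, 0.1],  # 2-3 mm
]
profile = depth_profile(cl_map)
```

The row-wise spread around each mean is also informative: a single high-chloride spot within a row can indicate a crack rather than uniform ingress.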
Procedia PDF Downloads 105
83 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings
Authors: Chen Wang, Jared Evans, Yan Asmann
Abstract:
With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. To address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodologies include complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithmic improvements were also made to enhance both somatic CNV detection and germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms.
Through our case studies, reuse of exome sequencing data for calling CNVs offers several notable capabilities, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing
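The coverage-pattern idea at the heart of such methods can be sketched very simply: compare per-exon coverage of a sample against a reference and flag exons whose log2 ratio crosses gain/loss thresholds. The thresholds, pseudocount, and coverage values below are illustrative assumptions, not the tool's actual parameters, and real pipelines add bias correction and segmentation across adjacent exons.

```python
# Toy per-exon CNV flagging from coverage ratios. Real callers normalize for
# GC/content bias and segment calls across exons; this shows only the core ratio test.
import math

def call_cnv(sample_cov, reference_cov, gain=0.4, loss=-0.6):
    """Return a gain/neutral/loss flag per exon from log2 coverage ratios."""
    calls = []
    for s, r in zip(sample_cov, reference_cov):
        ratio = math.log2((s + 1) / (r + 1))  # +1 pseudocount avoids log(0)
        if ratio >= gain:
            calls.append("gain")
        elif ratio <= loss:
            calls.append("loss")
        else:
            calls.append("neutral")
    return calls
```

A doubled-coverage exon lands near log2 ratio +1 (gain), a hemizygous deletion near -1 (loss), and normal diploid exons near 0.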
Procedia PDF Downloads 257
82 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research. One of the key techniques used in it is bibliometric analysis, and one of its methods is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations. This approach is also used for literature reviews in the field of marketing. The development of technology has meant that researchers and practitioners use the capabilities of software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications involves the implementation of a literature review, and it is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them is a painstaking task, especially if the authors want to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Next, keyword mapping was performed on these articles in the VOSviewer science mapping software, and the results were compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in the formulation of the research problem and in thematic/content analysis.
First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a narrative indicating the directions of the relationships between variables. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, developing software that automates the variable mapping process on large data sets could be a breakthrough in the field of literature research.
Keywords: bibliometrics, literature review, science mapping, variable mapping
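The contrast drawn above can be made concrete with toy data structures: the classic input is an undirected keyword co-occurrence count, while a variable map records directed, signed links between variables coded manually from each paper. Paper keywords, variable names, and link signs below are invented for illustration.

```python
# Toy contrast: keyword co-occurrence (classic science mapping input) versus
# a manually coded variable map of directed relationships. Data is invented.
from collections import Counter
from itertools import combinations

papers_keywords = [
    {"customer ideation", "co-creation", "innovation"},
    {"customer ideation", "innovation"},
]

# Classic approach: undirected keyword co-occurrence counts across papers.
cooccurrence = Counter()
for kws in papers_keywords:
    for a, b in combinations(sorted(kws), 2):
        cooccurrence[(a, b)] += 1

# Variable-mapping approach: directed, signed variable-to-variable links,
# which support statements like "X increases Y" and make untested links
# (potential research gaps) visible as missing edges.
variable_map = {
    ("customer ideation", "idea quality"): "+",
    ("platform design", "customer ideation"): "+",
}
```

The co-occurrence counter can say only that two terms appear together; the variable map additionally encodes direction and sign, which is what enables gap spotting and narrative building.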
Procedia PDF Downloads 120
81 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes increasingly difficult to handle. In the current study, plant history data with multiple variables was successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed that the control limits successfully detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. The additional multivariate process control and monitoring procedures are recommended to be applied separately for the major components and for the impurities.
Principal component analysis may be utilized not only to control the content of major elements in process streams but also for continuous monitoring of plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities. Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
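The workflow described in the abstract (mean-center and normalize the data, fit principal components on a training split, then flag points whose squared score or residual exceeds control limits) can be sketched as follows. The two-variable data, the 99th-percentile limits, and the injected fault are illustrative assumptions for the sketch, not the plant data or exact limits used in the study.

```python
import math
import random

# Hypothetical two-variable concentration data standing in for on-stream
# XRF readings of two process metals; the study uses more variables.
random.seed(0)
data = [(1.0 + random.gauss(0, 0.05), 2.0 + random.gauss(0, 0.10))
        for _ in range(200)]
train, test = data[:160], data[160:] + [(1.6, 1.4)]  # inject one fault

def mean_std(col):
    m = sum(col) / len(col)
    s = math.sqrt(sum((v - m) ** 2 for v in col) / (len(col) - 1))
    return m, s

m1, s1 = mean_std([a for a, _ in train])
m2, s2 = mean_std([b for _, b in train])

def standardize(a, b):
    return (a - m1) / s1, (b - m2) / s2

# For two standardized variables the correlation matrix is 2x2 and the
# first principal component has the closed form (1, sign(r)) / sqrt(2).
Z = [standardize(a, b) for a, b in train]
r = sum(x * y for x, y in Z) / (len(Z) - 1)
v = (1 / math.sqrt(2), math.copysign(1 / math.sqrt(2), r))

def t2_and_q(a, b):
    """Squared score along PC1 and squared residual off the PC1 axis."""
    x, y = standardize(a, b)
    t = x * v[0] + y * v[1]
    rx, ry = x - t * v[0], y - t * v[1]
    return t * t, rx * rx + ry * ry

# Control limits from the training set (illustrative 99th percentiles).
t2_lim = sorted(t2_and_q(a, b)[0] for a, b in train)[int(0.99 * len(train))]
q_lim = sorted(t2_and_q(a, b)[1] for a, b in train)[int(0.99 * len(train))]

alarms = [p for p in test
          if t2_and_q(*p)[0] > t2_lim or t2_and_q(*p)[1] > q_lim]
print(len(alarms))
```

The injected point lies far off the principal axis of the training data, so it trips the limits; this is the same early-warning logic as the abstract's squared-score/residual control chart, reduced to two variables.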
Procedia PDF Downloads 309
80 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Metacognition has also been suggested as a concept to describe team learning as more than a simple sum of the individual learning of the team members. Team learning correlates positively both with the individual motivation of its members and with collective factors within the team. This study analyzes the team learning of two teaching teams whose members were previously very independent. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are becoming larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Second, teaching multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. The phases of team formation have been identified and are widely accepted, and team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two parallel course units in engineering education: the first in Industrial Automation Technology and the second in Development of Medical Devices. The courses have separate student groups and are held on different campuses, both running in parallel within an eight-week period. Each is taught by a group of four teachers with several years of teaching experience, though previously as individuals. A team role stress scale survey is administered to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload, and qualitative role overload. Some comparison to the study on project teams can be drawn.
The team development stage of the two teaching groups is different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help to understand the maturity of functional, well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams, which deliver the highly intangible results of learning outcomes, are sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams. Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 225
79 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions, where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions in which the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; the paper presents an application to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
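A GARCHX conditional-variance recursion of the kind described in the abstract can be sketched in a few lines. The parameter values and the single simulated exogenous factor below are illustrative assumptions (the paper uses three PCA factors and fitted parameters), and the paper's full model additionally couples this variance to the HTSN return distribution.

```python
import math
import random

# GARCHX(1,1) with K exogenous factors:
#   sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
#               + sum_k gamma[k] * X[k][t-1]**2
# omega, alpha, beta, gamma are illustrative values, not fitted ones.
omega, alpha, beta = 1e-6, 0.08, 0.90
gamma = [0.02]                      # one hypothetical PCA factor

random.seed(1)
T = 500
X = [[random.gauss(0, 0.01) for _ in range(T)]]   # factor returns
sigma2 = [omega / (1 - alpha - beta)]             # start at base level
eps = [0.0]
for t in range(1, T):
    s2 = (omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
          + sum(g * X[k][t - 1] ** 2 for k, g in enumerate(gamma)))
    sigma2.append(s2)
    eps.append(math.sqrt(s2) * random.gauss(0, 1))  # simulated innovation

print(min(sigma2) > 0)
```

Because every term in the recursion is non-negative and omega is positive, the conditional variance stays strictly positive; squaring the factor returns is one common way to guarantee this when exogenous regressors are added.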
Procedia PDF Downloads 297
78 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automating building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC systems in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus.
Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation. Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
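As a toy illustration of the cyber-physical loop (sense, decide via a model, actuate), the sketch below drives a one-zone thermal model toward a setpoint using the model itself to compute the steady-state heating power. The thermal constants, the zone model, and the controller are all illustrative assumptions; the platform described above swaps in real IoT instruments and full model predictive control.

```python
# Toy "digital twin" loop: a one-zone thermal model stands in for the
# building, and a model-based feedforward plus a small feedback term
# stands in for the MPC. All constants are illustrative assumptions.
OUTDOOR = 5.0        # outdoor temperature, deg C
SETPOINT = 21.0      # desired zone temperature, deg C
LOSS = 0.10          # heat-loss coefficient per step
GAIN = 2.0           # deg C added per unit of HVAC power per step

def controller(temp):
    """Choose HVAC power in [0, 1] from the latest sensor reading."""
    # Use the twin's own model to compute the steady-state power needed
    # to hold the setpoint, then correct with a small feedback term.
    feedforward = LOSS * (SETPOINT - OUTDOOR) / GAIN
    return max(0.0, min(1.0, feedforward + 0.1 * (SETPOINT - temp)))

temp = 15.0
history = []
for step in range(200):
    power = controller(temp)                        # cyber: decision
    temp += GAIN * power - LOSS * (temp - OUTDOOR)  # physical: zone model
    history.append(temp)

print(abs(history[-1] - SETPOINT))
```

The point of the sketch is the division of labor: the "twin" (the model) runs inside the controller to predict what the physical system needs, while sensor readings close the loop against model error.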
Procedia PDF Downloads 109
77 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today’s globalized world, international business is a key driver of a country's growth. Connecting ports, the road network, and the rail network are among the strategic areas that sustain the growth of a country’s international business. India’s international business is booming in both exports and imports. Ports play a central role in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a great asset, as it provides the opportunity to develop a large number of major and minor ports that contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To assess the comparative strength of Indian ports against similar South-east Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports, and (iii) measuring and comparing the growth of the five selected container ports’ throughput over time. The study is based on both primary and secondary databases. A linear time trend analysis shows the trend in the quantum of exports, imports, and total goods/services handled by individual ports over the years. A comparative trend analysis is done for cargo traffic handled in the five selected ports in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic in the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the selected five ports, consolidated comparative line and bar charts of factor ratings for the selected five ports, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required for JNPT Port and Chennai Port by the year 2030. Multiple regression analysis is carried out to measure the impact of 34 selected explanatory variables on the ‘Overall Performance of the Port’ for each of the selected five ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study has analysed the feedback ratings for 35 selected factors regarding physical infrastructure and services rendered to port users. This feedback provides valuable data for improving the facilities offered to port users, helping them to carry out their work more efficiently. Keywords: throughput, twenty-foot equivalent units, TEUs, cargo traffic, shipping lines, freight forwarders
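A linear time trend forecast of the kind used for the 2030 capacity projection can be sketched as an ordinary least squares fit of throughput on year, extrapolated forward. The throughput figures below are made-up illustrative numbers, not the ports' actual traffic data.

```python
# Ordinary least squares fit of throughput (TEUs) on year, then
# extrapolation to 2030. The figures are illustrative, not actual data.
years = list(range(2005, 2015))
teus = [2.4, 2.6, 2.9, 3.1, 3.2, 3.6, 3.8, 4.1, 4.3, 4.5]  # million TEUs

n = len(years)
x_bar = sum(years) / n
y_bar = sum(teus) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(years, teus))
         / sum((x - x_bar) ** 2 for x in years))
intercept = y_bar - slope * x_bar

forecast_2030 = intercept + slope * 2030   # projected million TEUs
print(round(forecast_2030, 2))
```

A straight-line trend assumes throughput keeps growing at its historical average rate; for capacity planning this is usually a baseline against which richer models (and the multiple regression on performance factors described above) are compared.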
Procedia PDF Downloads 131
76 Barriers to Business Model Innovation in the Agri-Food Industry
Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad
Abstract:
The importance of business model innovation (BMI) is widely recognized. This also holds for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations’ sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry and (ii) illustrates these categories of barriers with empirical cases. The study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of ‘BM’ or ‘BMI’ with agriculture-related and food-related terms (e.g., ‘agri-food sector’) published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What the public mind considers a barrier can in reality be very different from an actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible).
The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or negative attitudes towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human-factor barriers (individuals’ attitudes, histories, etc.), contextual barriers related to company and industry settings, and more abstract barriers (government regulations, value chain position, and weather). However, human-factor barriers (and opportunities) related to family-owned businesses with idealistic values and attitudes that own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and illustrating them with empirical cases. We argue that internal barriers such as human factors, values, and attitudes are crucial to overcome in order to develop BMI, yet they can be as hard to overcome as, for example, institutional barriers such as government regulations. Implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms. Keywords: agri-food, barriers, business model, innovation
Procedia PDF Downloads 232
75 The Impact of Trade on Stock Market Integration of Emerging Markets
Authors: Anna M. Pretorius
Abstract:
The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets; however, high growth opportunities forced them to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s, and the increased interest in emerging markets during the global financial crisis. As such, portfolio flows to emerging markets have increased substantially: in 2002, 7% of all equity allocations from advanced economies went to emerging markets, rising to 20% in 2012. The stronger links between advanced and emerging markets led to increased synchronization of asset price movements, and this increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration with specific economic variables, including economic growth, local inflation, trade openness, local investment, budget surplus/deficit, market capitalization, domestic bank credit, the domestic institutional and legal environment, and world interest rates. The aim of this study is to empirically investigate to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data on 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey for the period 1998-2011.
The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model whose factors are extracted from a large panel of global stock market returns. The trade-related explanatory variables are exports as a percentage of GDP, imports as a percentage of GDP, and total trade as a percentage of GDP. Other macroeconomic indicators, such as market capitalisation, the size of the budget deficit, and the effectiveness of the regulation of the securities exchange, are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration; thus, the macroeconomic variables identified in the literature are much more significant in explaining the stock market integration of emerging markets than that of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency of better understanding the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (as a financial indicator), trade (representative of the real economy) is a significant determinant of the stock market integration of countries not yet classified as developed economies. Keywords: emerging markets, financial market integration, panel data, trade
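The integration measure described above, the explanatory power of a factor model for a market's returns, can be sketched for a single market and a single global factor. The simulated return series and the one-factor simplification are illustrative assumptions; the study extracts several factors from a large panel of markets.

```python
import random

# R-squared of a one-factor regression of a market's returns on a
# global factor: the higher the R-squared, the more "integrated" the
# market. Simulated data; the study uses several factors from a panel.
random.seed(2)
T = 300
factor = [random.gauss(0, 1.0) for _ in range(T)]
beta_true = 0.8
market = [beta_true * f + random.gauss(0, 0.8) for f in factor]

f_bar = sum(factor) / T
m_bar = sum(market) / T
s_ff = sum((f - f_bar) ** 2 for f in factor)
s_fm = sum((f - f_bar) * (m - m_bar) for f, m in zip(factor, market))
beta_hat = s_fm / s_ff                 # OLS slope on the global factor
alpha_hat = m_bar - beta_hat * f_bar

ss_res = sum((m - alpha_hat - beta_hat * f) ** 2
             for f, m in zip(factor, market))
ss_tot = sum((m - m_bar) ** 2 for m in market)
integration = 1.0 - ss_res / ss_tot    # R-squared as integration proxy
print(round(integration, 3))
```

This per-market R-squared would then be the dependent variable in the panel regression on trade openness and the macroeconomic controls.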
Procedia PDF Downloads 306
74 The Role of Learning in Stimulation Policies to Increase Participation in Lifelong Development: A Government Policy Analysis
Authors: Björn de Kruijf, Arjen Edzes, Sietske Waslander
Abstract:
In an ever-more-quickly changing society, politicians and policymakers see lifelong development as a solution to labor market problems. In this paper, we investigate how policy instruments are used to increase participation in lifelong development and on which behavioral principles policy is based. Digitization, automation, and an aging population are changing society and the labor market accordingly. Skills that were once the most sought after in the workforce can become abundantly present, so for people to remain relevant in the working population, they need to keep acquiring skills useful in the current labor market. Many reports have focused on the role of lifelong development in this changing society and on how lifelong development can help people adapt and stay relevant. Inspired by these reports, governments have implemented a broad range of policies to support participation in lifelong development. The question we ask ourselves is how government policies promote participation in lifelong development. This stems from a complex interplay of policy instruments and learning: regulatory, economic, and soft instruments can be combined to promote lifelong development, and different types of education further complicate policies on lifelong development. The literature suggests that different stages in people’s lives may warrant different methods of learning, and governments could anticipate this in their policies. To influence people’s behavior, the government can tap into a broad range of sociological, psychological, and (behavioral) economic principles. The traditional economic assumption that behavior is rational is known to be only partially true, and the government can use many biases in human behavior to stimulate participation in lifelong development. In this paper, we also try to find which of these biases, if any, the government taps into to promote participation.
The goal of this paper is to analyze government policies intended to promote participation in lifelong development. To do this, we develop a framework for analyzing policies on lifelong development, specifically incorporating the role of learning and the behavioral principles underlying policy instruments. We apply this framework to the case of the Netherlands, where we examine a set of policy documents. We single out the policies the government has put in place and how they are vertically and horizontally related. Afterward, we apply the framework and classify the individual policies by policy instrument and by type of learning. We find that the Dutch government focuses on formal and non-formal learning in its policy instruments, whereas the literature suggests that learning at a later age mainly happens informally, through experience. Keywords: learning, lifelong development, policy analysis, policy instruments
Procedia PDF Downloads 82
73 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions
Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa
Abstract:
The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, integrating the detection system into drones, particularly multirotors, together with its affordability, enables the automation of the source search and a substantial reduction in survey time, particularly when deploying a fleet of drones.
While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles. Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario
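A maximum likelihood position estimate of the kind mentioned above can be sketched for a detector moving along one axis past a point source. The inverse-square count model, the noiseless counts, and the coarse grid search are illustrative simplifications; the study's method additionally handles measurement uncertainty and derives confidence intervals from the Fisher information.

```python
import math

# Detector positions (m) along a survey line and a hypothetical point
# source at x = 5.0 m, offset h = 2.0 m from the line. Expected counts
# follow an inverse-square law plus a constant background.
SOURCE_X, H, STRENGTH, BACKGROUND = 5.0, 2.0, 400.0, 3.0

def expected(x_det, x_src):
    return STRENGTH / ((x_det - x_src) ** 2 + H ** 2) + BACKGROUND

positions = [i * 0.5 for i in range(21)]             # 0.0 .. 10.0 m
counts = [expected(x, SOURCE_X) for x in positions]  # noiseless for clarity

def log_likelihood(x_src):
    """Poisson log-likelihood of the counts for a candidate position."""
    return sum(n * math.log(expected(x, x_src)) - expected(x, x_src)
               for x, n in zip(positions, counts))

# Grid search over candidate source positions.
candidates = [i * 0.25 for i in range(41)]           # 0.0 .. 10.0 m
estimate = max(candidates, key=log_likelihood)
print(estimate)
```

With real (Poisson-distributed) counts the likelihood surface is noisy, which is where the Fisher-information confidence intervals and the profit-function path planning described above come into play.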
Procedia PDF Downloads 115
72 Video Club as a Pedagogical Tool to Shift Teachers’ Image of the Child
Authors: Allison Tucker, Carolyn Clarke, Erin Keith
Abstract:
Introduction: In education, the determination to uncover privileged practices requires critical reflection to be placed at the center of both pre-service and in-service teacher education. Confronting deficit thinking about children’s abilities, and shifting to an image of the child as capable and competent, is necessary for teachers to engage in responsive pedagogy that meets children where they are in their learning and builds on their strengths. This paper explores the ways in which early elementary teachers' perceptions of the assets of children might shift through the pedagogical use of video clubs. Video club is a pedagogical practice whereby teachers record and view short videos with the intended purpose of deepening their practices; its use as a learning tool has been extensively documented. In this study, a video club is used to watch short recordings of children at play in order to identify the assets of the students. Methodology: The study on which this paper is based asks the question: in what ways do teachers’ image of the child and teaching practices evolve through the use of a video club focused on the strengths children demonstrate during play? Using critical reflection, it aims to identify and describe participants’ experiences of examining their personally held image of the child through the pedagogical tool of video club, and how that image influences their practices, specifically in implementing play pedagogy. Teachers enrolled in a graduate-level play pedagogy course record and watch videos of their own students as a means to notice and reflect on the learning that happens during play. Using a co-constructed viewing protocol, teachers identify student strengths and consider their pedagogical responses. Video club provides a framework for teachers to critically reflect in action, return to the video to rewatch the children or themselves, and discuss their noticings with colleagues.
Critical reflection occurs when there is focused attention on identifying the ways in which actions perpetuate or challenge issues of inherent power in education. When the teacher's image of the child is held from a deficit position and influenced by hegemonic dimensions of practice, critical reflection is essential for naming and addressing power imbalances, biases, and practices that are harmful to children and become barriers to their thriving. The data comprise teacher reflections, analyzed using phenomenology, which seeks to understand and appreciate how individuals make sense of their experiences. Teacher reflections are read individually, and the researchers determine pools of meaning. Categories are identified by each researcher, after which commonalities are named through a recursive process of returning to the data until no more themes emerge and saturation is reached. Findings: The final analysis and interpretation of the data are forthcoming. However, emergent analysis of the teacher reflections reveals the ways in which the use of the video club grew teachers’ awareness of their image of the child. It shows video club to be a promising pedagogical tool when used with in-service teachers to prompt opportunities for play and to challenge deficit thinking about children and their abilities to thrive in learning. Keywords: asset-based teaching, critical reflection, image of the child, video club
Procedia PDF Downloads 105
71 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited
Authors: Kuhorfah Emmanuel Mawuli
Abstract:
Generally, for firms to enhance their operational efficiency in logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in the pricing decisions of firms, which suggests that cost efficiency in logistics can go a long way towards improving margins. Warehousing, a key part of logistics operations, has the prospect of influencing operational efficiency in logistics management as well as customer value, but this potential has often not been recognized. There is a paucity of research evaluating the efficiency of warehouses; indeed, limited research has been conducted to examine potential barriers to effective warehousing management, and consequently, little is known about how to address the obstacles associated with it. For warehousing management to become profitable, there is a need to integrate, balance, and manage the economic inputs and outputs of the entire warehouse operation, something that many firms tend to ignore. Management of warehousing is not solely related to storage functions. Instead, effective warehousing management requires such practices as the greatest possible mechanization and automation of operations, optimal use of the space and capacity of storage facilities, organization through a "continuous flow" of goods, a planned system of storage operations, and the safety of goods. For example, utilization of the warehouse surface area is a good way to evaluate the storing operation and the items picked per hour. In the setting of Mantrac Ghana, little knowledge regarding the management of the warehouses exists, and the researcher has personally observed many gaps in the management of the warehouse facilities in the case organization.
It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as the mode of data collection. Respondents mainly comprised warehouse facility managers in the studied company; a total of 10 participants were selected using a purposive sampling strategy. Results emanating from the study demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management are poor layout, poor picking optimization, labour costs, and inaccurate orders; policy implications of the study findings are finally outlined.
Keywords: assessing, warehousing, facility, management
Procedia PDF Downloads 65
70 Internet of Things-Based Smart Irrigation System
Authors: Ahmed Abdulfatah Yusuf, Collins Oduor Ondiek
Abstract:
The automation of farming activities can have a transformational impact on the agricultural sector, especially through emerging technologies such as the Internet of Things (IoT). The system uses water level sensors and soil moisture sensors that measure the water content of the soil; the values generated by the sensors enable the system to apply an appropriate quantity of water, avoiding over- or under-irrigation. Due to the increase in the world’s population, there is a need to increase food production. With this demand in place, it is difficult to increase crop yield using traditional manual approaches, which waste water and thereby affect crop production. Food insecurity has become a scourge greatly affecting developing countries, where agriculture is an essential part of human life and tends to be the mainstay of the economy. Thus, without the provision of adequate food supplies, the population of those living in poverty is likely to multiply. The project’s main objective is to design and develop an IoT microcontroller-based smart irrigation system. The specific research objectives are to identify the challenges of traditional irrigation approaches and to determine the benefits of IoT-based smart irrigation systems. The system includes an Arduino, a website, and a database that work simultaneously to collect and store the data. The system is designed to help attain Sustainable Development Goal 1 (SDG 1), which aims to end extreme poverty in all forms by 2030. The research design for this project is descriptive. Data was gathered through online questionnaires combining quantitative and qualitative questions in order to triangulate the data. Out of the 32 questionnaires sent, there were 32 responses, a 100% response rate.
In terms of sampling, the target group of this project is urban farmers, who account for about 25% of the population of Nairobi. From the findings of the research, it is evident that there is a need to move away from manual irrigation approaches, with their high wastage of water, to smart irrigation systems that offer a better way of conserving water while maintaining the quality and moisture of the soil. The research also found that urban farmers are willing to adopt this system to improve their farming practices. In the future, the system can be improved by incorporating additional features and deploying it over a larger geographical area.
Keywords: crop production, food security, smart irrigation system, sustainable development goal
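The abstract's core control idea, irrigating only when soil moisture drops below a set point, can be sketched in a few lines. The sketch below is in Python rather than Arduino C purely for illustration, and the moisture thresholds and hysteresis band are assumed values, not figures from the study:

```python
# Illustrative threshold logic for a sensor-driven irrigation controller.
# The thresholds and the hysteresis band are assumed values, not taken
# from the study; a real deployment would calibrate them per soil type.

DRY_THRESHOLD = 30.0   # % volumetric water content below which we irrigate
WET_THRESHOLD = 45.0   # % above which we stop, giving hysteresis

def pump_command(moisture_pct: float, pump_on: bool) -> bool:
    """Return the new pump state given the current soil moisture reading."""
    if moisture_pct < DRY_THRESHOLD:
        return True          # soil too dry: start/keep irrigating
    if moisture_pct > WET_THRESHOLD:
        return False         # soil wet enough: stop, avoiding over-irrigation
    return pump_on           # inside the band: keep the current state

# Simulated sensor readings over successive polling cycles
readings = [50, 40, 28, 33, 46, 44]
state = False
log = []
for r in readings:
    state = pump_command(r, state)
    log.append(state)
print(log)  # [False, False, True, True, False, False]
```

The hysteresis band is the design choice that matters here: switching on and off at a single threshold would make the pump chatter as readings fluctuate around it.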
Procedia PDF Downloads 151
69 Measuring Green Growth Indicators: Implication for Policy
Authors: Hanee Ryu
Abstract:
The administration of former Korean president Lee Myung-bak adopted "green growth" as a catchphrase from 2008. He declared "low-carbon, green growth" the nation's vision for the next decade, in line with the United Nations Framework Convention on Climate Change. The government designed an omnidirectional policy for low-carbon, green growth, concentrating the efforts of all departments. Structural change was expected, because the slogan became the identity of the government and was driven strongly across every department. Now that the administration has ended, the purpose of this paper is to quantify the policy effect and to compare Korea's values with those of other OECD countries. Major target values under the direct policy objectives were suggested, but they could not capture the entire landscape on which the policy makes changes. This paper figures out the policy impacts by comparing ex-ante values with ex-post values. Furthermore, each index of Korea’s low-carbon, green growth is compared with the values of other OECD countries. To measure the policy effect, indicators developed by international organizations are considered. The Environmental Sustainability Index (ESI) and the Environmental Performance Index (EPI) have been developed by Yale University’s Center for Environmental Law and Policy and Columbia University’s Center for International Earth Science Information Network, in collaboration with the World Economic Forum and the Joint Research Centre of the European Commission. They have been widely used to assess natural resource endowments, pollution levels, environmental management efforts, and a society’s capacity to improve its environmental performance over time. Recently, the OECD published its Green Growth Indicators for monitoring progress towards green growth, based on internationally comparable data.
The OECD built up a conceptual framework and selected indicators according to well-specified criteria: economic activities, the natural asset base, the environmental dimension of quality of life, and economic opportunities and policy responses. The framework considers the socio-economic context and reflects the characteristics of growth. Some of the selected indicators are used in this paper to measure the changes the green growth policies have induced. As a result, CO2 productivity and energy productivity show declining trends, which suggests that the policy-intended shift in industrial structure toward the carbon emission target had only a weak short-term effect. The increase in green technology patents might result from investment in the previous period. Official development aid, which can be mobilized immediately by political decision with no time lag, increased only in 2008-2009, meaning that international collaboration and investment in developing countries via ODA was not sustained beyond the initial stage of the administration. The green growth framework led the public to expect structural change, but it shows only sporadic effects. An organization is needed to manage it from a long-range perspective: energy, climate change, and green growth are not issues that can be handled within a single administration. A policy mechanism that consistently converts the cost problem into value creation should be developed.
Keywords: comparing ex-ante and ex-post indicators, green growth indicator, implication for green growth policy, measuring policy effect
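The ex-ante/ex-post comparison the paper describes reduces to computing an indicator in two periods and taking the change. As a minimal sketch, here is the CO2 productivity indicator (GDP per unit of CO2 emitted, one of the OECD Green Growth Indicators); all input figures are made-up placeholders, not Korean statistics:

```python
# Illustrative ex-ante vs ex-post comparison of one green growth indicator.
# All numbers are hypothetical placeholders, not actual Korean data.

def co2_productivity(gdp, co2_emissions):
    """GDP generated per unit of CO2 emitted (e.g. USD per tonne)."""
    return gdp / co2_emissions

def pct_change(ex_ante, ex_post):
    """Percentage change of an indicator between the two periods."""
    return 100.0 * (ex_post - ex_ante) / ex_ante

# Hypothetical figures: GDP in billion USD, emissions in Mt CO2
before = co2_productivity(gdp=900.0, co2_emissions=500.0)   # pre-policy period
after = co2_productivity(gdp=1000.0, co2_emissions=580.0)   # post-policy period
print(round(pct_change(before, after), 1))  # negative => productivity declined
```

Energy productivity (GDP per unit of energy consumed) follows the same shape with emissions replaced by energy use; a declining value in either, as the paper reports, means emissions or energy grew faster than output.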
Procedia PDF Downloads 448
68 Neurodiversity in Post Graduate Medical Education: A Rapid Solution to Faculty Development
Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel
Abstract:
Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability and diversity in medical education, and of its impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned about understanding neurodiversity and about identifying and supporting neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams for insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. The group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: The survey highlighted marked heterogeneity in the identification and assessment of neurodivergent trainees and in approaches to their support and management, and it revealed a 'deficit' approach to neurodiversity. It also demonstrated a paucity of educational and protocol resources for educators and supervisors supporting neurodivergent trainees. Discussions and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees was formalised using a thematic approach, guided by our neurodiversity-specific survey findings and taking a triple 'A' approach: awareness, assessment, and action.
This is further supported by video material incorporating stories from training, as well as mobile workshops that give trainers more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods toward a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodivergent conditions; this is further complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodivergent condition is suspected or diagnosed. 3. A carefully constituted and focussed Task and Finish group can rapidly identify national inconsistencies in neurodiversity and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift that accepts 'diversity' in all its manifestations, visible and hidden.
Keywords: neurodiversity, professional support, human considerations, workplace wellbeing
Procedia PDF Downloads 91
67 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definitions of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folksong in a tonal language, you piece together not only the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theory, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that lets me collect the data extracted from my mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a structured song (chorus-verse) on your computer and ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
66 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms
Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan
Abstract:
With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation has remained fundamental to the success of these platforms. A number of approaches can be taken to integrate liquid actuation on microfluidic platforms, and they usually fall into two primary categories: active microvalves and passive microvalves. Active microvalves require an externally induced change in a physical parameter for actuation to occur. Passive microvalves do not require external interaction: actuation depends on the valve's inherent physical parameters, which can be overcome through interaction with the sample. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can greatly enhance system reliability and performance, with both a novel active and a novel passive microvalve demonstrated. Covered within this scope are two alternative microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic-dissolvable-film active microvalve (PAM) strategy and a spray-on, sol-gel-based hydrophobic passive microvalve (HPM) approach. Both the PAM and HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure-sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable-film methods by introducing a more reliable and predictable liquid delivery mechanism to the microvalve site, thus significantly reducing premature activation; it has also shown excellent synchronicity when performed in multiplexed form. The HPM method utilises a new spray-on, low-curing-temperature (70°C) sol-gel material.
The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of the microvalve burst frequencies. It is hoped that these microvalving solutions, which can easily be added to centrifugal microfluidic platforms, will significantly improve automation reliability.
Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves
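The burst-frequency characterisation described above is usually modelled with the standard capillary burst-valve relation for a spinning disc: the valve opens when centrifugal pressure exceeds the hydrophobic capillary barrier. The sketch below implements that textbook relation, not the authors' own model, and every input value is an assumed example (water-like liquid, a very hydrophobic contact angle, a 100 μm channel):

```python
import math

# Standard capillary burst-valve relation for a lab-on-a-disc platform
# (not specific to the authors' sol-gel coating): the valve bursts when
# the centrifugal pressure rho * w^2 * r_mean * dr exceeds the
# hydrophobic capillary barrier |4 * sigma * cos(theta)| / d_h.

def burst_rpm(sigma, theta_deg, d_h, rho, r_mean, dr):
    """Burst rotation rate in RPM for a hydrophobic valve (theta > 90 deg)."""
    p_cap = abs(4.0 * sigma * math.cos(math.radians(theta_deg))) / d_h
    omega = math.sqrt(p_cap / (rho * r_mean * dr))  # rad/s at burst
    return omega * 60.0 / (2.0 * math.pi)

# Assumed example values: water (sigma ~ 0.072 N/m) on an ultra-hydrophobic
# coating (theta ~ 150 deg) in a 100 um channel, with the liquid plug
# sitting 25-30 mm from the disc centre.
rpm = burst_rpm(sigma=0.072, theta_deg=150, d_h=100e-6,
                rho=1000.0, r_mean=0.0275, dr=0.005)
print(round(rpm), "RPM")
```

The relation also shows why the paper varies the channel cross-section: a smaller hydraulic diameter raises the capillary barrier and hence the burst frequency, so consistency of d_h across the coating translates directly into consistency of the burst frequencies.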
Procedia PDF Downloads 188
65 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that highlights the use and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating that a student is being victimized. The program further illustrates how to recognize and respond to trauma, teaches the steps to take to report human trafficking, and shows how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code artificial intelligence (AI) automation, to create the research study, using natural language processing, a branch of artificial intelligence, to measure the effectiveness of the prevention education program. Applying the logic created for the study, the platform analyzed and categorized each story. If a story, taken directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied, and the system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results from the 30,000 stories gathered make it overwhelmingly clear that a significant majority of the participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach what they do daily.
In addition, approximately 30% of the stories involved comments by educators expressing that they wish they had had this knowledge sooner, as they can think of many students they would have been able to help. Objectives of research: to solve the problem of analyzing and accurately categorizing more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program, using AI and natural language processing. Methodologies used: in conjunction with our strategic partner, Levity, we created our own NLP analysis engine specific to our problem. Contributions to research: the intersection of AI and human rights, and how to utilize technology to combat human trafficking.
Keywords: AI, technology, human trafficking, prevention
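The labelling pipeline described above, applying outcome labels with a per-label confidence, can be sketched in miniature. In the toy version below, simple keyword rules stand in for the Levity NLP model; only the three label names come from the study, while the cue phrases and the confidence formula are illustrative inventions:

```python
# Toy sketch of the outcome-labelling idea. The label names mirror the
# study's three desired outcomes; the keyword cues and the confidence
# score are illustrative stand-ins for the real NLP model, not the
# study's methodology.

OUTCOME_KEYWORDS = {
    "Increased Awareness": ["aware", "did not realize", "eye-opening"],
    "Increased Knowledge": ["learned", "know how", "recognize the signs"],
    "Intended Behavior Change": ["will now", "plan to", "going to report"],
}

def label_story(story: str) -> dict:
    """Return outcome labels with a naive confidence for each label."""
    text = story.lower()
    labels = {}
    for outcome, cues in OUTCOME_KEYWORDS.items():
        hits = sum(cue in text for cue in cues)
        if hits:
            labels[outcome] = min(1.0, 0.5 + 0.25 * hits)  # crude confidence
    return labels

story = "I learned to recognize the signs and will now watch my students."
print(label_story(story))
```

A story can carry several labels at once, as in the study, and stories matching no cue are left unlabelled rather than forced into a category.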
Procedia PDF Downloads 59
64 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects
Authors: Muhammad Khairi bin Sulaiman
Abstract:
The utilization of Building Information Modelling (BIM) in the construction industry has given designers in the Architecture, Engineering and Construction (AEC) industry an opportunity to move beyond the conventional method of manual drafting to a way of working that creates alternative designs quickly and produces more accurate, reliable, and consistent outputs. Using BIM software, designers can create digital content that manipulates data through BIM's parametric model. More alternative designs can be created quickly, and design problems can be explored further, producing a better design faster than conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. Relative to the current issue, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, repetition, and modular construction and components. However, MC design involves a complex process of rules and dimensions, so a tool is needed to make the process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With such a tool, the acceptance and effective application of MC design should improve. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions.
The parametric modeling capabilities of BIM will also act as a visual tool that further enhances the automation of 3-dimensional space planning. (Method) The study will first analyze and explore the parametric modeling capabilities of rule-based BIM objects and then customize a reference grid within the rules and dimensioning of MC. The approach will enhance the architect's overall design process and enable architects to automate complex modeling that was nearly impossible before. A prototype using a residential quarter will be modeled, and a set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space planning configurations. The tool will expedite the design process and encourage the use of MC design in the construction industry.
Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning
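The core MC dimensioning rule that such a BIM tool would enforce, every dimension a whole multiple of the basic module, is simple to sketch. The 100 mm basic module (1M) is the usual MC convention; the 300 mm multimodule and the round-up policy below are assumptions for illustration, not rules taken from any specific BIM product:

```python
# Minimal sketch of modular coordination dimensioning: snapping free-form
# dimensions onto a modular grid. The 100 mm basic module (1M) follows
# the usual MC convention; the 3M multimodule and the round-up policy
# are assumptions for this illustration.

BASIC_MODULE_MM = 100   # 1M, the MC basic module
MULTIMODULE_MM = 300    # 3M, a common horizontal multimodule

def snap_to_module(length_mm: float, module_mm: int = BASIC_MODULE_MM) -> int:
    """Round a dimension up to the nearest whole module."""
    modules = -(-int(length_mm) // module_mm)   # ceiling division
    return modules * module_mm

# A free-form 3150 mm room dimension lands on different grid lines
# depending on the module chosen:
print(snap_to_module(3150))                  # 3200 on the 1M grid
print(snap_to_module(3150, MULTIMODULE_MM))  # 3300 on the 3M grid
```

In a BIM tool, the same snap would be applied to object placement parameters, so that walls, openings, and components always land on the reference modular grid the study describes.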
Procedia PDF Downloads 84
63 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) - to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations.
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
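The instruction-count advantage of the wider datapath comes down to limb counts: a 256-bit ECC field element must be split into word-sized limbs, and a full multi-precision operation needs at least one instruction per limb. The back-of-the-envelope sketch below shows only that arithmetic; the figures are not measurements from the coprocessor described in the abstract:

```python
import math

# Back-of-the-envelope illustration of why a wider RISC-V datapath cuts
# instruction counts for cryptographic big-integer work: a 256-bit field
# element (typical for ECC) is split into word-sized limbs, and a full
# multi-precision addition needs roughly one add-with-carry per limb.
# These are arithmetic estimates, not measurements from the coprocessor.

FIELD_BITS = 256  # e.g. an ECC prime field element

def limbs(word_bits: int, value_bits: int = FIELD_BITS) -> int:
    """Number of machine words needed to hold a value of value_bits bits."""
    return math.ceil(value_bits / word_bits)

for width in (32, 64, 128):
    print(f"{width}-bit core: {limbs(width)} limbs per 256-bit operand")
# 8 limbs at 32 bits, 4 at 64, 2 at 128 -> fewer instructions per field op
```

Multiplication scales even more favourably, roughly with the square of the limb count for schoolbook multiplication, which is consistent with the abstract's claim that the 128-bit architecture reduces total instruction counts for complex cryptographic tasks.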
Procedia PDF Downloads 69