Search results for: order picking problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19433


17693 Identification of Promising Infant Clusters to Obtain Improved Block Layout Designs

Authors: Mustahsan Mir, Ahmed Hassanin, Mohammed A. Al-Saleh

Abstract:

The layout optimization of building blocks of unequal areas has applications in many disciplines including VLSI floorplanning, macrocell placement, unequal-area facilities layout optimization, and plant or machine layout design. A number of heuristics and some analytical and hybrid techniques have been published to solve this problem. This paper presents an efficient high-quality building-block layout design technique especially suited for solving large-size problems. The higher efficiency and improved quality of optimized solutions are made possible by introducing the concept of Promising Infant Clusters in a constructive placement procedure. The results presented in the paper demonstrate the improved performance of the presented technique for benchmark problems in comparison with published heuristic, analytic, and hybrid techniques.

Keywords: block layout problem, building-block layout design, CAD, optimization, search techniques

Procedia PDF Downloads 386
17692 Foggy Image Restoration Using Neural Network

Authors: Khader S. Al-Aidmat, Venus W. Samawi

Abstract:

Blurred vision in a misty atmosphere is an essential problem that needs to be resolved. To solve it, we developed a technique to restore the original scene from its foggy, degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between a foggy scene and its corresponding original scene. Seven approaches are suggested, differing in the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each approach comes with its own BP-NN architecture, depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the features that best suit the BP-NN technique for restoring foggy images are identified. The system proved effective in restoring moderately foggy images.

Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration

Procedia PDF Downloads 382
17691 Technologies for Phosphorus Removal from Wastewater: Review

Authors: Thandie Veronicah Sima, Moatlhodi Wiseman Letshwenyo

Abstract:

Discharge of wastewater is one of the major sources of phosphorus entering streams, lakes and other water bodies, causing undesired environmental problems such as eutrophication. This condition not only puts the ecosystem at risk but also causes severe economic damage. Stringent laws have been developed globally by different bodies to control phosphorus concentrations in receiving environments. To satisfy these constraints, a high degree of tertiary treatment, or at least a significant reduction of phosphorus concentration, is obligatory. This comprehensive review summarizes phosphorus removal technologies, from the most commonly used conventional technologies, such as chemical precipitation through metal addition, membrane filtration, reverse osmosis and enhanced biological phosphorus removal using the activated sludge system, to passive systems such as constructed wetlands and filtration systems. Trends, perspectives and scientific procedures reported by different researchers are presented. This review critically evaluates the advantages and limitations of each of the technologies. Enhancement of passive systems using reactive media, such as industrial wastes, to provide additional uptake through adsorption or precipitation is also discussed in this article.

Keywords: adsorption, chemical precipitation, enhanced biological phosphorus removal, phosphorus removal

Procedia PDF Downloads 325
17690 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study

Authors: Si Mon Kueh, Tom J. Kazmierski

Abstract:

About 1% of the world's population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. To reduce the inconvenience and danger of epilepsy, different methods have been researched that use artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suited to the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.
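The abstract does not publish the NPE's logic, but the shift-and-add principle behind a bit-serial multiplier can be sketched in software. The function name and word width below are illustrative assumptions, not the authors' design:

```python
def bit_serial_mul(a, b, width=8):
    """Shift-and-add multiplication, consuming one bit of b per step,
    as a bit-serial hardware multiplier would do per clock cycle."""
    acc = 0
    for cycle in range(width):
        if (b >> cycle) & 1:          # current serial bit of b
            acc += a << cycle         # add the shifted multiplicand
    return acc & ((1 << (2 * width)) - 1)  # truncate to a 2*width-bit result
```

Each loop iteration corresponds to one cycle of the serial datapath; a hardware implementation would typically replace the accumulator addition with a 1-bit adder and shift registers, which is what keeps the logic-element count so low.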

Keywords: Artificial Neural Networks (ANN), bit-serial neural processor, FPGA, Neural Processing Element (NPE)

Procedia PDF Downloads 321
17689 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach

Authors: Alvaro Figueira, Bruno Cabral

Abstract:

Nowadays, students are commonly assessed through an online platform. Educators have stepped up from a period in which they endured the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters students' online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although these activities can generate a lot of feedback for students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the premise that obtaining this information earlier in the semester may give students and educators an opportunity to resolve an eventual problem with the student's current online actions towards the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files, in the form of records that mark the beginning of actions performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern.
Then, using the grades assigned to students in previous years, we built a learning dataset that is used to feed a machine learning meta-classifier. The produced classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on her current situation with past years as a perspective. Our system can be applied to online courses that use an online platform that stores user actions in a log file and that have access to other students' evaluations. The system is based on a data mining process over the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
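The first derived feature the abstract describes, the time each student spends on each activity, can be approximated from LMS logs by differencing consecutive event timestamps. The event format and function below are an illustrative assumption, not the authors' implementation:

```python
from collections import defaultdict

def time_per_activity(events, session_end):
    """events: list of (timestamp_seconds, activity) records, as an LMS
    logs the *start* of each action; each action's duration is the gap
    until the next action (or until session_end for the last one)."""
    events = sorted(events)
    totals = defaultdict(float)
    next_starts = [t for t, _ in events[1:]] + [session_end]
    for (t, activity), t_next in zip(events, next_starts):
        totals[activity] += t_next - t
    return dict(totals)

# Hypothetical session: a quiz, a forum visit, then back to the quiz.
log = [(0, "quiz"), (300, "forum"), (900, "quiz")]
print(time_per_activity(log, session_end=1200))
```

This matches the abstract's observation that the logs only mark the beginning of actions, so durations must be inferred; a real pipeline would also need a rule for idle gaps and session boundaries.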

Keywords: data mining, e-learning, grade prediction, machine learning, student learning path

Procedia PDF Downloads 122
17688 Obstacle Classification Method Based on 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

This paper proposes a method that uses only a LIDAR system to classify an obstacle and determine its type, by establishing a database for classifying obstacles based on LIDAR. The existing LIDAR system, in recognizing obstructions for an autonomous vehicle, has advantages in terms of accuracy and shorter recognition time. However, it was difficult to determine the type of obstacle, and therefore accurate path planning based on the obstacle type was not possible. To overcome this problem, a method of classifying obstacle type based on existing LIDAR, using the width of the obstacle material, was previously proposed. However, width measurement alone was not sufficient to improve accuracy. In this research, the width data are used for a first classification; a database of LIDAR intensity data for the four major obstacle materials found on the road is created; the LIDAR intensity data of actual obstacle materials are compared against it; and the obstacle type is determined by finding the entry with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that, although the data quality declined in comparison to 3D LIDAR, it was possible to classify obstacle materials using 2D LIDAR.
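The final matching step, picking the database material whose intensity profile is most similar to the measurement, can be sketched as a nearest-profile search. The similarity measure (negative Euclidean distance) and the profile values below are illustrative assumptions; the abstract does not specify the metric used:

```python
def classify_material(measured, database):
    """Return the database material whose intensity profile is closest
    to the measurement (highest similarity = smallest distance)."""
    def similarity(profile):
        return -sum((m - p) ** 2 for m, p in zip(measured, profile)) ** 0.5
    return max(database, key=lambda material: similarity(database[material]))

# Hypothetical mean-intensity profiles for four road-obstacle materials.
db = {
    "metal":    [0.90, 0.85, 0.88],
    "concrete": [0.55, 0.50, 0.52],
    "wood":     [0.40, 0.38, 0.42],
    "fabric":   [0.15, 0.12, 0.10],
}
print(classify_material([0.52, 0.51, 0.50], db))
```

A practical system would first gate candidates by the width-based first classification described in the abstract, then apply the intensity comparison only within that subset.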

Keywords: obstacle, classification, database, LIDAR, segmentation, intensity

Procedia PDF Downloads 349
17687 Performance Evaluation of Task Scheduling Algorithm on LCQ Network

Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad

Abstract:

Scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This multiprocessor is a hybrid network which combines the features of both linear architectures and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and an effort is made, through analysis of the results, to obtain the best solution for the given network in terms of residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated with the given dynamic algorithms.

Keywords: dynamic algorithm, load imbalance, mapping, task scheduling

Procedia PDF Downloads 450
17686 Upgrading of Problem-Based Learning with Educational Multimedia to the Undergraduate Students

Authors: Sharifa Alduraibi, Abir El Sadik, Ahmed Elzainy, Alaa Alduraibi, Ahmed Alsolai

Abstract:

Introduction: Problem-based learning (PBL) is an active, student-centered educational modality, driven by the students' interest, that requires continuous motivation to improve their engagement. The new era of professional information technology has facilitated the utilization of educational multimedia, such as videos, soundtracks, and photographs, to promote students' learning. The aim of the present study was to introduce multimedia-enriched PBL scenarios for the first time in the College of Medicine, Qassim University, as an incentive for better student engagement. In addition, students' performance and satisfaction were evaluated. Methodology: Two multimedia-enhanced PBL scenarios were implemented for third-year students in the urinary system block. Radiological images, plain CT scans and X-rays of the abdomen, and renal nuclear scans, correlated with their pathological gross photographs, were added to the scenarios. One week before the first sessions, pre-recorded orientation videos were submitted to PBL tutors to clarify the multimedia incorporated in the scenarios. Two other traditional PBL scenarios, devoid of multimedia demonstrating the pathological and radiological findings, were designed. Results and Discussion: The formative assessment results at the end of the two PBL modalities were compared. The comparison revealed a significant increase in students' engagement, critical thinking, and practical reasoning skills during the multimedia-enhanced sessions. A student perception survey showed great satisfaction with the new strategy. Conclusion: It could be concluded from the current work that multimedia creates a technology-based teaching strategy that inspires students toward self-directed thinking and promotes their overall achievement.

Keywords: multimedia, pathology and radiology images, problem-based learning, videos

Procedia PDF Downloads 157
17685 Surface Induced Alteration of Nanosized Amorphous Alumina

Authors: A. Katsman, L. Bloch, Y. Etinger, Y. Kauffmann, B. Pokroy

Abstract:

Various nanosized amorphous alumina thin films in the range of 2.4-63.1 nm were deposited onto amorphous carbon and amorphous Si3N4 membrane grids. Transmission electron microscopy (TEM), electron energy loss spectroscopy (EELS), X-ray photoelectron spectroscopy (XPS) and differential scanning calorimetry (DSC) techniques were used to probe the size effect on the short-range order and on the amorphous-to-crystalline phase transition temperature. It was found that the short-range order changes as a function of size: the fraction of tetrahedral Al sites is greater in thinner amorphous films. This result correlates with the change of amorphous alumina density with film thickness demonstrated by the reflectivity experiments: the thinner amorphous films have the lower density. These effects are discussed in terms of surface reconstruction of the amorphous alumina films. The average atomic binding energy in the thin-film layer decreases with decreasing thickness, while the average O-Al interatomic distance increases. The reconstruction of amorphous alumina is induced by the surface reconstruction, with the short-range order changing in dependence on the density. The decrease of surface energy during reconstruction is the driving force of the alumina reconstruction (density change), followed by a relaxation process (short-range order change). The amorphous-to-crystalline phase transition temperature measured by DSC rises with decreasing thickness, from 997.6 °C for 13.9 nm thick films to 1020.4 °C for 2.7 nm thick films. This effect is attributed to the different film densities: the formation of nanovoids preceding and accompanying the crystallization process influences the crystallization rate and, by these means, the temperature of the crystallization peak.

Keywords: amorphous alumina, density, short range order, size effect

Procedia PDF Downloads 466
17684 A New Approach to Achieve the Regime Equations in Sand-Bed Rivers

Authors: Farhad Imanshoar

Abstract:

The regime or equilibrium geometry of alluvial rivers remains a topic of fundamental scientific and engineering interest. There are several approaches to analyzing the problem, namely empirical formulas, semi-theoretical methods, and rational (extremal) procedures. However, none of them is widely accepted at present, due to a lack of knowledge of some physical processes associated with channel formation and to the simplification hypotheses imposed in order to reduce the large number of variables involved. The study presented in this paper shows a new approach to estimating the stable width and depth of sand-bed rivers by using a developed stream power equation (DSPE). First, a new procedure based on theoretical analysis, considering the DSPE and the ultimate sediment concentration, was developed. Then, experimental data for the regime condition in sand-bed rivers (flow depth, flow width, and sediment feed rate for several cases) were gathered. Finally, the results of this research (regime equations) are compared with the field data and with other regime equations. Good agreement was observed between the field data and the values resulting from the developed regime equation.

Keywords: regime equations, developed stream power equation, sand-bed rivers, semi-theoretical methods

Procedia PDF Downloads 268
17683 3D Interferometric Imaging Using Compressive Hardware Technique

Authors: Mor Diama L. O., Matthieu Davy, Laurent Ferro-Famil

Abstract:

In this article, inverse synthetic aperture radar (ISAR) is combined with compressive imaging techniques in order to perform 3D interferometric imaging. Interferometric ISAR (InISAR) imaging relies on a two-dimensional antenna array providing diversity in the elevation and azimuth directions. However, the signals measured over several antennas must be acquired by coherent receivers, resulting in costly and complex hardware. This paper proposes to use a chaotic cavity as a compressive device to encode the signals arising from several antennas into a single output port. These signals are then reconstructed by solving an inverse problem. Our approach is demonstrated experimentally with a 3-element L-shaped array connected to a metallic compressive enclosure. The interferometric phases estimated from a single broadband signal are used to jointly estimate the target's effective rotation rate and the height of its dominant scattering centers. Our experimental results show that the use of the compressive device does not adversely affect the performance of the imaging process. This study opens new perspectives for reducing the hardware complexity of high-resolution ISAR systems.
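The reconstruction step, recovering several antenna signals from one encoded measurement, amounts to solving a linear inverse problem y = Ax. As a minimal sketch (with an invented 3x2 real-valued mixing matrix standing in for the cavity's actual transfer function), an overdetermined system can be solved through the normal equations:

```python
def solve_normal_equations_2(A, y):
    """Least-squares solve of y ~ A x for a 2-unknown system via the
    normal equations (A^T A) x = A^T y, inverting the 2x2 matrix directly."""
    a = sum(row[0] * row[0] for row in A)   # (A^T A)[0][0]
    b = sum(row[0] * row[1] for row in A)   # (A^T A)[0][1] = [1][0]
    d = sum(row[1] * row[1] for row in A)   # (A^T A)[1][1]
    p = sum(row[0] * yi for row, yi in zip(A, y))  # (A^T y)[0]
    q = sum(row[1] * yi for row, yi in zip(A, y))  # (A^T y)[1]
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

A = [[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]]   # invented measurement operator
x_true = (1.0, 2.0)
y = [row[0] * x_true[0] + row[1] * x_true[1] for row in A]
print(solve_normal_equations_2(A, y))
```

The real system is complex-valued, far larger, and typically regularized; this sketch only illustrates the "measure once, invert a known operator" structure the abstract relies on.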

Keywords: interferometric imaging, inverse synthetic aperture radar, compressive device, computational imaging

Procedia PDF Downloads 160
17682 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative feature extraction method in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than N-gram based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
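Smith-Waterman local alignment itself is standard dynamic programming and can be sketched directly; the scoring parameters below (match +2, mismatch -1, gap -1) are illustrative defaults, not the values the authors used:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between strings a and b
    (Smith-Waterman dynamic programming, score matrix clamped at 0)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # local alignment: never negative
                          H[i - 1][j - 1] + sub,  # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best
```

Applied pairwise between documents (or between token sequences), such scores give the similarity signal that the paper exploits as an alternative to N-gram features.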

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 297
17681 Numerical Solution of a Mathematical Model of Vortex Using Projection Method: Applications to Tornado Dynamics

Authors: Jagdish Prasad Maurya, Sanjay Kumar Pandey

Abstract:

Inadequate understanding of the complex nature of the flow features in a tornado vortex is a major problem in modelling tornadoes. Tornadoes are violent atmospheric phenomena that appear all over the world. Modelling tornadoes aims to reduce the loss of human lives and the material damage caused by tornadoes. The dynamics of a tornado are investigated by a numerical technique, an improved version of the projection method. In this paper, the authors solve the problem for an axisymmetric tornado vortex by the said method, which uses a finite difference approach to obtain an accurate and stable solution. The conclusions drawn are that a large radial inflow velocity occurs near the ground, which increases the tangential velocity. This increased-velocity phenomenon occurs close to the boundary, and the absolute maximum wind is obtained near the vortex core. The results validate previous numerical and theoretical models.

Keywords: computational fluid dynamics, mathematical model, Navier-Stokes equations, tornado

Procedia PDF Downloads 353
17680 Mythical Geography, Collective Imaginary and Spiritual Patrimony in the Romanian Carpathians: A Tourist Image Component

Authors: Cosmin-Gabriel Porumb-Ghiurco, Dumitrana Fiț-Iordache, Szőke Árpád

Abstract:

The literature incorporating geographical or tourist-geographical themes and explicit references to the Carpathian area is extremely abundant. Through this paper, we attempt to "undermine" the traditional tourist-geographical approaches to the Carpathian Arch by targeting an aspect often regarded as marginal but which, if examined even only empirically, takes the form of a vast problem with a multidisciplinary vocation. We therefore propose a more extravagant yet pro-touristic approach to the Romanian Carpathian geo-space. Consequently, the explicit goal of this approach consists precisely in broadening the multidisciplinary, essentially geographic scope of the research, along with the vision and mental representation of the Carpathian area, by advancing a lever that would symbolize a different kind of unification between geography and tourism on a more intimate, subtle, mythological and archetypal level. The spiritual and mercantile dimensions of the tourism field in general, and of local Carpathian tourism in particular, can meld harmoniously in order to create a common territorial reality of reference and favorable perspectives for the consolidation of their symbiotic relationship.

Keywords: tourist image, mythical geography, collective imaginary, spiritual patrimony, Carpathians

Procedia PDF Downloads 92
17679 The Convergence of IoT and Machine Learning: A Survey of Real-time Stress Detection System

Authors: Shreyas Gambhirrao, Aditya Vichare, Aniket Tembhurne, Shahuraj Bhosale

Abstract:

In today's rapidly evolving environment, stress has emerged as a significant health concern across different age groups. Stress that isn't controlled, whether it comes from job responsibilities, health issues, or the never-ending news cycle, can have a negative effect on our well-being. The problem is further aggravated by our ongoing connection to technology. In this high-tech age, identifying and controlling stress is vital. To address this health issue, the study focuses on three key metrics for stress detection: body temperature, heart rate, and galvanic skin response (GSR). These parameters, along with a Support Vector Machine classifier, allow the system to categorize stress into three groups: 1) stressed, 2) not stressed, and 3) moderate stress. In the proposed system, a NodeMCU combined with dedicated sensors collects data in real time and rapidly categorizes individuals based on their stress levels. Real-time stress detection is made possible by this creative combination of hardware and software.
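The paper trains an SVM on the three sensor readings; as a dependency-free sketch of the same three-way decision, a nearest-centroid classifier over (temperature, heart rate, GSR) vectors can stand in. The centroid values are invented for illustration and carry no clinical meaning:

```python
# Hypothetical class centroids: (body temp in C, heart rate in bpm, GSR in uS)
CENTROIDS = {
    "not stressed":    (36.6,  70.0, 2.0),
    "moderate stress": (36.9,  85.0, 5.0),
    "stressed":        (37.2, 105.0, 9.0),
}

def classify_stress(temp, hr, gsr):
    """Assign the label of the nearest centroid (Euclidean distance);
    the paper itself uses an SVM for this classification step."""
    sample = (temp, hr, gsr)
    def dist(label):
        c = CENTROIDS[label]
        return sum((s - v) ** 2 for s, v in zip(sample, c)) ** 0.5
    return min(CENTROIDS, key=dist)

print(classify_stress(37.1, 102.0, 8.5))
```

In practice the features should be standardized before computing distances, since heart rate dominates the raw Euclidean metric; an SVM with a tuned kernel avoids that sensitivity, which is presumably why the authors chose it.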

Keywords: real time stress detection, NodeMCU, sensors, heart-rate, body temperature, galvanic skin response (GSR), support vector machine

Procedia PDF Downloads 72
17678 Assessment of Low Income Housing Delivery, Accessibility and Affordability Problem in Nigeria

Authors: Asimiyu Mohammed Jinadu

Abstract:

Housing is a basic necessity of life. Housing plays a central role in the life of living organisms, as it provides the basic platform for the life support systems in human settlements. It is considered a social service and a basic right. Despite the importance of housing, Nigeria as a nation is faced with a quantitative and qualitative shortfall in the number of housing units required to accommodate its citizens. This study examined the accessibility and affordability problems of low-income housing in Nigeria. It relied on secondary data obtained from the records of government ministries and agencies. Descriptive statistics were used in the analysis, and the information was presented in simple tables and charts. The findings show that over the years the government has provided serviced plots of land, owner-occupier houses and mortgage loans for the people. As of 2016, the Federal Housing Authority (FHA) had completed a total of 23,038 housing units, while another 14,488 units were ongoing under the Public Private Partnership scheme across the country. The study revealed that a total of 910,671 housing units were proposed by the government under the various low-income housing programmes between 1960 and 2017, but only 156,336 units were delivered within that period, representing a 17.17% success rate. Among other problems, the low-income group faced low access to, and unaffordability of, the few low-income housing units delivered in Nigeria. The study recommended that all abandoned housing projects be reviewed, rationalized, completed and made available to the targeted low-income people. Investment in micro housing finance, the design and implementation of pro-poor housing programmes, and massive investment in innovative slum upgrading programmes by both the government and the private sector are also recommended to ameliorate the housing problems of the low-income group in Nigeria.

Keywords: housing, low income group, problem, programme

Procedia PDF Downloads 256
17677 Automatic Diagnosis of Electrical Equipment Using Infrared Thermography

Authors: Y. Laib Dit Leksir, S. Bouhouche

Abstract:

Analysis and processing of the databases resulting from infrared thermal measurements made on electrical installations require the development of new tools in order to obtain correct information additional to that from visual inspections. Consequently, methods based on the capture of infrared digital images show great potential and are employed increasingly in various fields. However, there is an enormous need for the development of effective techniques to analyse these databases in order to extract relevant information on the state of the equipment. Our goal consists in introducing recent modeling techniques, based on new methods of image and signal processing, to develop mathematical models in this field. The aim of this work is to capture the anomalies existing in electrical equipment during an inspection of some machines using a FLIR A40 camera. We then use binarisation techniques to select the region of interest, and we compare these methods on the thermal images obtained in order to choose the best one.

Keywords: infrared thermography, defect detection, troubleshooting, electrical equipment

Procedia PDF Downloads 476
17676 Supersymmetry versus Compositeness: 2-Higgs Doublet Models Tell the Story

Authors: S. De Curtis, L. Delle Rose, S. Moretti, K. Yagyu

Abstract:

Supersymmetry and compositeness are the two prevalent paradigms providing both a solution to the hierarchy problem and motivation for a light Higgs boson state. An open door towards the solution is found in the context of 2-Higgs Doublet Models (2HDMs), which are necessary in supersymmetry and natural within compositeness in order to enable Electro-Weak Symmetry Breaking. In scenarios of compositeness, the two isospin doublets arise as pseudo Nambu-Goldstone bosons from the breaking of SO(6). By calculating the Higgs potential at one-loop level through the Coleman-Weinberg mechanism, from the explicit breaking of the global symmetry induced by the partial compositeness of fermions and gauge bosons, we derive the phenomenological properties of the Higgs states and highlight the main signatures of this Composite 2-Higgs Doublet Model at the Large Hadron Collider. These include modifications to the SM-like Higgs couplings as well as production and decay channels of the heavier Higgs bosons. We contrast the properties of this composite scenario with the well-known ones established in supersymmetry, the MSSM being the best-known example. We show how 2HDM spectra of masses and couplings accessible at the Large Hadron Collider may allow one to distinguish between the two paradigms.

Keywords: beyond the standard model, composite Higgs, supersymmetry, Two-Higgs Doublet Model

Procedia PDF Downloads 126
17675 A New Approach for Solving Fractional Coupled PDEs

Authors: Prashant Pandey

Abstract:

In the present article, an effective Laguerre collocation method is used to obtain the approximate solution of a system of coupled fractional-order non-linear reaction-advection-diffusion equations with prescribed initial and boundary conditions. In the proposed scheme, Laguerre polynomials are used together with an operational matrix and the collocation method to obtain approximate solutions of the coupled system, so that the proposed model is converted into a system of algebraic equations which can be solved by the Newton method. The solution profiles of the coupled system are presented graphically for different particular cases. A salient feature of the present article is the stability analysis of the proposed method, as well as the demonstration of a lower variation of solute concentration with respect to the column length in the fractional-order system compared to the integer-order system. To show the high efficiency, reliability, and accuracy of the proposed scheme, a comparison between the numerical results for Burgers' coupled system and its existing analytical result is reported. There is high compatibility and consistency between the approximate solution and the exact solution, to a high order of accuracy. The error analysis for each case, presented through tables and graphs, confirms the super-linear convergence rate of the proposed method.

Keywords: fractional coupled PDE, stability and convergence analysis, diffusion equation, Laguerre polynomials, spectral method

Procedia PDF Downloads 145
17674 The Applications and Effects of the Career Courses of Taiwanese College Students with LEGO® SERIOUS PLAY®

Authors: Payling Harn

Abstract:

LEGO® SERIOUS PLAY® is a facilitated workshop method for thinking and problem-solving. Participants build symbolic and metaphorical brick models in response to tasks given by the facilitator and present these models to other participants. LEGO® SERIOUS PLAY® applies the positive psychological mechanisms of flow and positive emotions to help participants perceive self-experience and unknown facts and to increase happiness in life through building bricks and narrating stories. At present, LEGO® SERIOUS PLAY® is often utilized to facilitate professional identity and strategy development and to assist workers in career development. The researcher applied LEGO® SERIOUS PLAY® to the career courses of college students in order to promote their career abilities. This study aimed to use the facilitative method of LEGO® SERIOUS PLAY® to develop career courses for college students and then explore their effects on Taiwanese college students' positive and negative emotions, career adaptability, and career sense of hope. The researcher regarded strength as the core concept and used the facilitative mode of LEGO® SERIOUS PLAY® to develop an eight-week career course, including 'emotion of college life', 'career highlights', 'career strengths', 'professional identity', 'business model', 'career coping', 'strength guiding principles', 'career visions', and 'career hope'. A problem-oriented teaching method was adopted to give tasks according to the weekly theme, and the facilitative mode of LEGO® SERIOUS PLAY® was used to guide participants to respond to the tasks by building bricks. Participants then conducted group discussions and reports and wrote weekly reflection journals. Participants were 24 second-year college students who attended the LEGO® SERIOUS PLAY® career course for 2 hours a week. The researcher used the 'Career Adaptability Scale' and 'Career Hope Scale' to conduct pre-tests and post-tests, administered one week before the course started and one day after it ended, respectively, and adopted repeated-measures one-way ANOVA for analyzing the data. The results revealed that the participants showed a significant immediate positive effect in career adaptability and career hope. The researcher hopes to construct a mode of LEGO® SERIOUS PLAY® career courses through this study and to make a substantial contribution to future career teaching and research on LEGO® SERIOUS PLAY®.

Keywords: LEGO® SERIOUS PLAY®, career courses, strength, positive and negative affect, career hope

Procedia PDF Downloads 253
17673 Optimal Investment and Consumption Decision for an Investor with Ornstein-Uhlenbeck Stochastic Interest Rate Model through Utility Maximization

Authors: Silas A. Ihedioha

Abstract:

In this work, we consider an investor's portfolio comprised of two assets: a risky stock whose price process is driven by geometric Brownian motion, and a risk-free asset with an Ornstein-Uhlenbeck stochastic interest rate of return, where consumption, taxes, transaction costs, and dividends are involved. This paper aims at optimizing the investor's expected utility of consumption and of terminal return on investment at the terminal time, under a power utility preference. Using the dynamic optimization procedure of the maximum principle, a second-order nonlinear partial differential equation (PDE), the Hamilton-Jacobi-Bellman (HJB) equation, was obtained, from which an ordinary differential equation (ODE) was derived via elimination of variables. The solution to the ODE gave the closed-form solution of the investor's problem. It was found that the optimal investment in the risky asset is horizon dependent and is a ratio of the total amount available for investment to the relative risk aversion coefficient.
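The mean-reverting rate dynamics at the heart of this model are easy to visualize numerically. The snippet below is an illustrative Euler-Maruyama discretization of dr = κ(θ − r)dt + σdW with made-up parameter values; it reproduces neither the paper's full model nor its closed-form policy.

```python
# Illustrative sketch: Euler-Maruyama simulation of an Ornstein-Uhlenbeck
# short-rate process dr = kappa*(theta - r)*dt + sigma*dW.  All parameter
# values are hypothetical.
import math
import random

random.seed(0)

kappa, theta, sigma = 2.0, 0.05, 0.02   # reversion speed, long-run mean, vol
dt, n_steps = 1e-3, 200_000
r = 0.10                                # start well above the long-run mean

path = []
for _ in range(n_steps):
    r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    path.append(r)

# After a burn-in, the path fluctuates around theta with stationary
# standard deviation sigma / sqrt(2 * kappa) ~ 0.01.
tail = path[n_steps // 2:]
long_run_mean = sum(tail) / len(tail)

# For comparison, the classical Merton problem with a *constant* rate r0 and
# CRRA (power) utility gives the constant risky fraction
# (mu - r0) / (gamma * sigma_S**2) -- the same structural "ratio over the
# relative risk aversion coefficient" form reported in the abstract.
```

The horizon dependence found in the paper enters because θ-reverting rates make the investment opportunity set time-varying, unlike the constant-rate Merton case noted in the final comment.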

Keywords: optimal, investment, Ornstein-Uhlenbeck, utility maximization, stochastic interest rate, maximum principle

Procedia PDF Downloads 225
17672 Personalized Email Marketing Strategy: A Reinforcement Learning Approach

Authors: Lei Zhang, Tingting Xu, Jun He, Zhenyu Yan

Abstract:

Email marketing is one of the most important segments of online marketing. It has proven to be among the most effective ways to acquire and retain customers, and email content is vital to customers. Different customers may have different familiarity with a product, so a successful marketing strategy must personalize email content based on individual customers' product affinity. In this study, we build our personalized email marketing strategy with three types of emails: nurture, promotion, and conversion. Each type of email has a different influence on customers. We investigate this difference by analyzing customers' open rates, click rates, and opt-out rates. Feature importance from response models is also analyzed. The goal of the marketing strategy is to improve the click rate on conversion-type emails. To build the personalized strategy, we formulate the problem as a reinforcement learning problem and adopt a Q-learning algorithm with variations. The simulation results show that our model-based strategy outperforms the current marketer's strategy.
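A toy version of the tabular Q-learning setup can make the formulation concrete. In the sketch below, states are coarse product-affinity levels, actions are the three email types, and rewards are hypothetical expected click rates; none of these numbers or dynamics come from the study, they only illustrate the mechanics.

```python
# Toy sketch of tabular Q-learning for email-type selection.  States, actions,
# reward values, and dynamics are all hypothetical illustrations.
import random

random.seed(42)

N_STATES = 3                     # affinity: 0 = low, 1 = medium, 2 = high
ACTIONS = ["nurture", "promotion", "conversion"]
CLICK = {"nurture":    [0.0, 0.0, 0.0],   # nurture builds affinity, no click
         "promotion":  [0.2, 0.2, 0.2],
         "conversion": [0.1, 0.4, 0.7]}   # conversion pays off at high affinity

def step(state, action):
    """Deterministic toy dynamics: nurturing raises affinity by one level."""
    reward = CLICK[action][state]
    next_state = min(state + 1, N_STATES - 1) if action == "nurture" else state
    return next_state, reward

alpha, gamma, eps = 0.5, 0.9, 0.3
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

for _ in range(5000):                      # episodes
    s = random.randrange(N_STATES)
    for _ in range(6):                     # a few emails per customer episode
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        s2, r = step(s, ACTIONS[a])
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
          for s in range(N_STATES)]
```

Under this toy reward structure the learned policy nurtures low-affinity customers and sends conversion emails to high-affinity ones, which mirrors the intuition behind personalizing email type by product affinity.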

Keywords: email marketing, email content, reinforcement learning, machine learning, Q-learning

Procedia PDF Downloads 194
17671 Modulation of the Interphase in a Glass Epoxy System: Influence of the Sizing Chemistry on Adhesion and Interfacial Properties

Authors: S. Assengone Otogo Be, A. Fahs, L. Belec, T. A. Nguyen Tien, G. Louarn, J-F. Chailan

Abstract:

Glass fiber-reinforced composite materials have gradually developed in all sectors, ranging from consumer products to aerospace applications. However, the weak point is most often the fiber/matrix interface, which can reduce the durability of the composite material. To solve this problem, it is essential to control the interphase and improve our understanding of the adhesion mechanism at the fiber/matrix interface. The interphase properties depend on the nature of the sizing applied to the surface of the glass fibers during their manufacture in order to protect them, facilitate their handling, and ensure fiber/matrix adhesion. The sizing composition, and in particular the nature of the coupling agent and the film-former, affects the mechanical properties and the durability of composites. The aim of our study is, therefore, to develop and study composite materials with simplified sizing systems in order to understand how the main constituents modify the mechanical properties and the durability of composites from the nanometric to the macroscopic scale. Two model systems were elaborated: an epoxy matrix reinforced with simplified-sized glass fibers, and an epoxy coating applied to glass substrates treated with the same sizings as the fibers. For the sizing composition, two configurations were chosen: the first possesses a chemical reactivity to link the glass and the matrix, while the second contains non-reactive agents. The chemistry of the sized glass substrates and fibers was analyzed by FT-IR and XPS spectroscopies, and the surface morphology was characterized by SEM and AFM microscopies. Observation of the sample surfaces reveals the presence of sizings whose morphology depends on their chemistry. The evaluation of the adhesion of coated substrates and composite materials shows good interfacial properties for the reactive configuration; the non-reactive configuration, however, exhibits adhesive rupture at the glass/epoxy interface for both systems. The interfaces and interphases between the matrix and the substrates are characterized at different scales, and correlations are made between the initial properties of the sizings and the mechanical performance of the model composites.

Keywords: adhesion, interface, interphase, composite materials, simplified sizing systems, surface properties

Procedia PDF Downloads 141
17670 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network

Authors: Ziying Wu, Danfeng Yan

Abstract:

Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server, meeting the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet services, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and pose a joint optimization problem of minimizing total system cost. To address this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that, compared with other computation offloading policies, our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization.
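As a back-of-the-envelope illustration of the delay/energy trade-off such an offloading policy optimizes, consider a weighted cost comparison between local and edge execution. The models and every parameter value below are hypothetical textbook-style choices, not the paper's system model or its JCOTM algorithm.

```python
# Illustrative weighted-cost offloading decision for a single task.
# All models and parameter values are hypothetical.

def local_cost(cycles, f_local, k=1e-27, w_t=0.5, w_e=0.5):
    delay = cycles / f_local                  # execution time, s
    energy = k * cycles * f_local ** 2        # J, CMOS dynamic-power style model
    return w_t * delay + w_e * energy

def edge_cost(cycles, data_bits, bandwidth, f_edge, p_tx=0.5, w_t=0.5, w_e=0.5):
    tx_delay = data_bits / bandwidth          # uplink transmission time, s
    delay = tx_delay + cycles / f_edge        # transmission + edge computing
    energy = p_tx * tx_delay                  # vehicle only spends energy on TX
    return w_t * delay + w_e * energy

def decide(cycles, data_bits, bandwidth, f_local, f_edge):
    c_local = local_cost(cycles, f_local)
    c_edge = edge_cost(cycles, data_bits, bandwidth, f_edge)
    return ("edge" if c_edge < c_local else "local"), c_local, c_edge

# A compute-heavy task (1 Gcycles, 1 Mbit of input) on a 1 GHz vehicle CPU
# versus a 10 GHz edge server over a 10 Mbps link:
choice, c_local, c_edge = decide(1e9, 1e6, 1e7, 1e9, 1e10)
```

The actual JCOTM problem is far harder than this single-task comparison because many vehicles contend for shared bandwidth and edge CPU cycles simultaneously, which is what motivates the deep reinforcement learning formulation.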

Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep Q-network

Procedia PDF Downloads 118
17669 Parameters Optimization of the Laminated Composite Plate for Sound Transmission Problem

Authors: Yu T. Tsai, Jin H. Huang

Abstract:

In this paper, the sound transmission loss (TL) of a laminated composite plate (LCP) with different material properties in each layer is investigated. A numerical method to obtain the TL of the LCP is proposed using elastic plate theory. A transfer matrix approach is newly presented for computational efficiency in handling the dynamic stiffness matrices (D-matrices) of the numerous layers of the LCP. Besides the numerical simulations for calculating the TL of the LCP, a material-properties inverse method is presented for designing a laminated composite plate analogous to a metallic plate with a specified TL. The results demonstrate that the proposed computational algorithm exhibits high efficiency, reaching the goal in a small number of iterations. This method can be effectively employed to design and develop tailor-made materials for various applications.
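The appeal of the transfer matrix approach is that each layer contributes one small matrix and the whole stack is just a matrix product. The sketch below shows this chaining with the classical normal-incidence fluid-layer model, which is much simpler than the paper's elastic plate theory and D-matrices; the material values are textbook-style illustrations.

```python
# Illustrative transfer-matrix chaining: one 2x2 matrix per layer, transmission
# read off the product.  This is the classical normal-incidence fluid-layer
# model, NOT the paper's elastic plate / dynamic stiffness formulation.
import cmath
import math

def layer_matrix(rho, c, d, f):
    """Pressure/velocity transfer matrix of one layer at frequency f (Hz)."""
    k = 2 * math.pi * f / c          # wavenumber in the layer
    Z = rho * c                      # characteristic impedance
    return [[cmath.cos(k * d), 1j * Z * cmath.sin(k * d)],
            [1j * cmath.sin(k * d) / Z, cmath.cos(k * d)]]

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def transmission_loss(layers, f, Z0=1.21 * 343.0):
    """TL in dB for a stack of (rho, c, d) layers between two air half-spaces."""
    T = [[1.0, 0.0], [0.0, 1.0]]
    for rho, c, d in layers:
        T = matmul(T, layer_matrix(rho, c, d, f))
    t = 2.0 / (T[0][0] + T[0][1] / Z0 + Z0 * T[1][0] + T[1][1])
    return -20.0 * math.log10(abs(t))

# A layer impedance-matched to air transmits perfectly (TL ~ 0 dB) ...
tl_matched = transmission_loss([(1.21, 343.0, 0.05)], 1000.0)
# ... while a 2 mm steel-like layer blocks most of the sound at 1 kHz.
tl_steel = transmission_loss([(7800.0, 5900.0, 0.002)], 1000.0)
```

For the elastic LCP, each layer's 2x2 acoustic matrix is replaced by the layer's dynamic stiffness matrix, but the same product-of-layer-matrices structure delivers the computational efficiency claimed in the abstract.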

Keywords: sound transmission loss, laminated composite plate, transfer matrix approach, inverse problem, elastic plate theory, material properties

Procedia PDF Downloads 388
17668 Legal Disputes of Disclosure and Transparency under Kuwaiti Capital Market Authority Law

Authors: Mohammad A. R. S. Almutairi

Abstract:

This study will provide the introduction that frames the core problem of legal disputes over disclosure and transparency under Kuwaiti Capital Market Authority Law No. 7 of 2010. It will also discuss the reasons for the emergence of corporate governance and its purposes in the Capital Market Authority Law in Kuwait. In addition, it will examine the legal disputes resulting from the unclear concepts of disclosure and interest and will discuss the main reasons in support of a possible solution. This study will argue why the Capital Market Authority Law in Kuwait needs a clear concept and a straightforward structure of disclosure under section 100, and will demonstrate why clear disclosure leads to a better application of the law. It will demonstrate fairness in applying the law with regard to punishment of individuals, companies, and the securities market. Furthermore, it will discuss the added confidence between investors and the stock market that a clear concept under section 100 would bring. Finally, it will summarize the problems that arise and possible solutions.

Keywords: corporate governance, disclosure, transparency, fairness

Procedia PDF Downloads 139
17667 Chemical Stability of Ceramic Crucibles to Molten Titanium

Authors: Jong-Min Park, Hyung-Ki Park, Seok Hong Min, Tae Kwon Ha

Abstract:

Titanium is widely used due to its high specific strength, good biocompatibility, and excellent corrosion resistance. In order to produce titanium powders, it is necessary to melt titanium, which is generally done by an induction heating method using an Al₂O₃ ceramic crucible. However, since titanium reacts chemically with Al₂O₃, it is difficult to melt titanium by the induction heating method using an Al₂O₃ crucible. To avoid this problem, we studied the chemical stability of various crucibles, namely Al₂O₃, MgO, ZrO₂, and Y₂O₃, against molten titanium. After titanium lumps (Grade 2, O(oxygen) < 0.25 wt%) were placed in each crucible, they were heated to 1800 ℃ at a heating rate of 5 ℃/min, held at 1800 ℃ for 30 min, and finally cooled to room temperature at a cooling rate of 5 ℃/min. All heat treatments were carried out in a high-purity Ar atmosphere. To evaluate the chemical stability, thermodynamic data such as the Ellingham diagram were utilized, and Vickers hardness tests, microstructure analysis, and EPMA quantitative analysis were performed. As a result, the Al₂O₃, MgO, and ZrO₂ crucibles reacted chemically with molten titanium, but the Y₂O₃ crucible rarely reacted with it.

Keywords: titanium, induction melting, crucible, chemical stability

Procedia PDF Downloads 301
17666 Parametric Study of the Structures: Influence of the Shells

Authors: Serikma Mourad, Mezidi Amar

Abstract:

The design of an earthquake-resistant structure is a complex problem, given the need to meet both the safety requirements imposed by regulations and the economic constraints imposed by the increasing cost of structures. The resistance of a building to horizontal actions is mainly ensured by a mixed bracing system; for a concrete building, this system consists of frames or shells (shear walls), or both at the same time. After the earthquake of Boumerdes (May 23, 2003) in Algeria, studies made by experts led to modifications of the Algerian Earthquake-resistant Regulation (AER 99). One of these modifications was to widen the use of shells in the bracing system, which created disagreement over the quantities, positions, and types of shells to adopt. In the present project, we examine the effect of varying the dimensions, the localization, and the end-rigidity conditions of the shells. The study is carried out on a building (ground floor plus five storeys, F+5) located in a zone of average seismicity. To do so, we perform a classical dynamic study of the structure using four alternatives for the shells, varying their lengths and number, in order to compare the cost of the structure for the four shell dispositions. A technical-economic study of the bracing system, using the different dispositions of shells, is then used to estimate the quantities of materials (concrete and steel) required.

Keywords: reinforced concrete, mixed brace system, dynamic analysis, beams, shells

Procedia PDF Downloads 325
17665 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services; instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. These have included the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the assumptions underpinning the proliferation of the PO approach over the last three decades, beginning by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems that were ultimately manifested in those same calls for service. The CP model draws on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment. In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost: those staff cannot be allocated to alternative duties while engaged in a problem-solving role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 483
17664 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and a single output can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model capability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
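The contrast the abstract draws can be sketched in a few lines. The example below fits a made-up static linear model y = a·x + b (a real dynamic-system identification would use lagged inputs and outputs as features instead): the normal equation solves the least-squares problem in closed form, while batch gradient descent reaches the same coefficients iteratively.

```python
# Illustrative comparison: normal-equation (closed-form) least squares versus
# batch gradient descent for fitting y = a*x + b on noise-free synthetic data,
# so both routes should recover a = 2, b = 1.

xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]          # ground truth: a = 2, b = 1
n = len(xs)

# --- Normal equation for simple linear regression --------------------------
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a_ne = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_ne = (sy - a_ne * sx) / n

# --- Batch gradient descent on mean squared error --------------------------
a_gd, b_gd, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    # gradients of (1/n) * sum((a*x + b - y)**2)
    ga = (2.0 / n) * sum((a_gd * x + b_gd - y) * x for x, y in zip(xs, ys))
    gb = (2.0 / n) * sum((a_gd * x + b_gd - y) for x, y in zip(xs, ys))
    a_gd -= lr * ga
    b_gd -= lr * gb
```

The closed form gets the answer in one step, which is consistent with the paper's finding that the normal equation is the most suitable choice for linear dynamic models; gradient descent earns its keep only when the model is non-linear or too large for a direct solve.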

Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent

Procedia PDF Downloads 127