Search results for: elliptic curve digital signature algorithm
237 Unmanned Aerial System Development for the Remote Reflectance Sensing Using Above-Water Radiometers
Authors: Sunghun Jung, Wonkook Kim
Abstract:
Because satellites and aircraft are difficult to deploy on demand, conventional ocean color remote sensing has the disadvantage that images of a desired place at a desired time are hard to obtain. This makes it difficult to capture anomalies such as red tide outbreaks, which require immediate observation, and to understand phenomena such as the resuspension-precipitation process of suspended solids and the spread of low-salinity water originating in coastal areas. For measuring the remote sensing reflectance of seawater, above-water radiometers (AWRs) have been used either by carrying portable AWRs on a ship or by installing them at fixed observation points such as the Ieodo ocean research station and the Socheongcho base. However, measuring the remote reflectance in various seawater environments at various times is costly, and it is not even possible to measure at the desired frequency in the desired sea area at the desired time. Stationary observation has the advantage that observation data is obtained continuously, but the disadvantage that data from various sea areas cannot be obtained. An unmanned aerial system (UAS) built around a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) can instantly capture various marine phenomena occurring on the coast, since it can move to, and hover at, a given location and acquire data of the desired form at high resolution. Remotely estimating seawater constituents requires installing an ultra-spectral sensor, and calculating the light reflected from the sea surface while accounting for the sun's incident light requires a total of three sensors installed on the UAV.
The remote sensing reflectance of seawater is the most basic optical property for remotely estimating color components in seawater; from it, the chlorophyll concentration, the suspended solids concentration, and the dissolved organic matter content can be estimated remotely. Estimating seawater properties from the remote sensing reflectance requires developing algorithms on accumulated seawater reflectance data under various seawater and atmospheric conditions. A UAS with three AWRs is developed for remote reflectance sensing at the sea surface. Throughout the paper, we explain the details of each UAS component, system operation scenarios, and simulation and experiment results. The UAS consists of a UAV, a solar tracker, a transmitter, a ground control station (GCS), three AWRs, and two gimbals.
Keywords: above-water radiometers (AWR), ground control station (GCS), unmanned aerial system (UAS), unmanned aerial vehicle (UAV)
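As a hedged illustration of why three radiometers are needed, the standard above-water approach (not detailed in the abstract) combines total radiance, sky radiance, and downwelling irradiance into the remote-sensing reflectance. The surface reflectance factor of roughly 0.028 and all the numbers below are assumptions for the sketch, not values from the paper:

```python
def remote_sensing_reflectance(lt, lsky, es, rho=0.028):
    """Estimate remote-sensing reflectance Rrs (sr^-1) from three sensors.

    lt   : total upwelling radiance seen by the sea-viewing radiometer
    lsky : sky radiance measured by the sky-viewing radiometer
    es   : downwelling irradiance from the irradiance sensor
    rho  : sea-surface reflectance factor (~0.028 in calm conditions; assumed)
    """
    if es <= 0:
        raise ValueError("downwelling irradiance must be positive")
    # Remove the surface-reflected sky light, then normalise by illumination
    return (lt - rho * lsky) / es

# Illustrative (made-up) radiometric readings for one wavelength band
rrs = remote_sensing_reflectance(lt=1.5, lsky=10.0, es=150.0)
```

Repeating this per wavelength band of an ultra-spectral sensor yields the reflectance spectrum from which constituents such as chlorophyll are estimated.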
Procedia PDF Downloads 163
236 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in measuring leaf areas is proportional to the leaf area itself; a multiplicative normal noise model for the observations is therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the observation times need to be the same across individuals. This makes it possible to discard image-analysis data that are not considered reliable enough, thereby providing low-bias leaf-area data in large quantity. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
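The hybrid Gibbs-Metropolis idea described above can be sketched in miniature. This is not the authors' Julia implementation: a simple Gaussian toy likelihood stands in for the nonlinear growth model, the prior on the population mean is taken flat for simplicity, and all numbers are illustrative.

```python
import math, random

random.seed(0)

def log_lik(theta, data, sigma_obs=1.0):
    # Gaussian stand-in for the (in reality nonlinear) individual growth model
    return sum(-0.5 * ((y - theta) / sigma_obs) ** 2 for y in data)

def hybrid_gibbs_metropolis(data_per_plant, n_iter=2000, sigma_pop=1.0, step=0.5):
    n = len(data_per_plant)
    thetas = [0.0] * n          # individual (plant-level) parameters
    mu = 0.0                    # population-level mean
    mu_trace = []
    for _ in range(n_iter):
        # Metropolis step for each individual parameter (no closed form here)
        for i, data in enumerate(data_per_plant):
            prop = thetas[i] + random.gauss(0.0, step)
            log_a = (log_lik(prop, data) - 0.5 * ((prop - mu) / sigma_pop) ** 2) \
                  - (log_lik(thetas[i], data) - 0.5 * ((thetas[i] - mu) / sigma_pop) ** 2)
            if math.log(random.random()) < log_a:
                thetas[i] = prop
        # Gibbs step: the population mean has a normal full conditional,
        # N(mean(thetas), sigma_pop^2 / n) under a flat prior on mu
        mu = random.gauss(sum(thetas) / n, sigma_pop / math.sqrt(n))
        mu_trace.append(mu)
    return mu_trace

# Simulated "plants" whose individual parameters centre near 5
plants = [[5.0 + random.gauss(0, 0.3) for _ in range(20)] for _ in range(8)]
trace = hybrid_gibbs_metropolis(plants)
posterior_mean = sum(trace[500:]) / len(trace[500:])
```

Discarding the first 500 iterations as burn-in, the posterior mean of the population parameter should recover the value the simulated individuals were centred on.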
Procedia PDF Downloads 303
235 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, the smooth double-roll crusher is used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria, which sometimes results in very long delivery times that handicap the brickyards, in particular in respecting delivery dates and honoring customers' orders. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base grey iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, and return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer Systems PV 8050 Series (Philips), except for carbon, for which an Elementrac CS-i carbon/sulphur analyser was used. Unetched microstructures were used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of phase constituents. The samples were observed under X100 magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera. An SEM equipped with EDS was used to characterize the phases present in the microstructure.
The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for test pieces in both the heat-treated and as-solidified conditions. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per the ASTM E8 standard. Two specimens were tested for each alloy, and from each rod a test piece was made for the tensile test. The results showed that, among the quenched and tempered alloys, the grey cast iron tempered at 400 °C (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu) had the best wear resistance, due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content of the cast irons improved wear resistance moderately. Combined addition of Cu and Cr increases the hardness and wear resistance of a quenched and tempered hypoeutectic grey cast iron.
Keywords: casting, cast iron, microstructure, heat treating
Procedia PDF Downloads 105
234 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter. However, they are expensive and require intricate modelling of the limb. Traditional modelling methods involve creating a plaster-of-Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 px x 1080 px, while the resolution of the depth frame is 512 px x 424 px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels to make sure data is not lost even though the resolution is lower. The resulting RGB-D frames are collected, and using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system.
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct for any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces the rough surface of the limb; this technique forms an approximation of the limb surface. The mesh is smoothened to obtain a smooth outer layer and thus an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast: it is transferred to a CAD/CAM design file to design the brace above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, would be quicker than traditional plaster-of-Paris cast modelling, and has a low overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for performing the modelling.
Keywords: 3D scanning, mesh generation, Microsoft Kinect, orthotics, registration
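The step that turns a depth frame into a point cloud can be sketched with a standard pinhole back-projection. This is a minimal illustration, not the authors' code: the intrinsics below are illustrative Kinect v2-like values (not calibrated ones), and the tiny "depth frame" is made up.

```python
def depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project a depth frame (metres) into a 3-D point cloud.

    depth : 2-D list indexed [row][col] of depth values in metres.
    fx, fy, cx, cy : pinhole intrinsics; the defaults are rough,
    uncalibrated values for a 512x424 Kinect v2 depth camera.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # skip invalid (zero) depth readings
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 "depth frame": three valid pixels one metre away, one invalid
cloud = depth_to_point_cloud([[1.0, 1.0], [1.0, 0.0]])
```

Each per-view cloud produced this way is then transformed into the common cube-based reference frame before merging and Delaunay triangulation.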
Procedia PDF Downloads 190
233 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques
Authors: Imed Feki, Faouzi Msahli
Abstract:
Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion, exploiting from experimental data the relationship between the physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined, using our proposed percolation technique, to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It automatically generates a threshold, which effectively reduces the human subjectivity and arbitrariness involved in manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, stonewashed denim, a well-known finished textile product whose sensory quality is usually considered among the most important criteria in jeans' evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant-feature selection method are shown by the variability in the number of physical features in the sets of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results for the selection of relevant physical features, the percolation technique is applied to combine the fuzzy global relevancy and OWA global relevancy criteria, in order to clearly distinguish the scores of the relevant physical features from those of the irrelevant ones.
Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique
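The OWA (Ordered Weighted Averaging) aggregation used above can be sketched in a few lines. The scores and weights below are illustrative, not the paper's data; the defining property is that the weights apply to the *sorted* scores rather than to particular evaluators.

```python
def owa(values, weights):
    """Ordered Weighted Averaging aggregation (Yager-style).

    The weights are applied to the values sorted in descending order,
    which is what distinguishes OWA from a plain weighted mean: with
    weights (1, 0, ..., 0) it becomes max, with (0, ..., 0, 1) min.
    """
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match values in length and sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Aggregating three evaluators' relevancy scores for one physical feature,
# with weights that emphasise the higher scores (optimistic aggregation)
score = owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2])
```

Running such an aggregation over every physical feature yields one combined ranking list per sensory descriptor, which the percolation technique then thresholds.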
Procedia PDF Downloads 605
232 The Association between C-Reactive Protein and Hypertension with Different US Participants Ethnicity-Findings from National Health and Nutrition Examination Survey 1999-2010
Authors: Ghada Abo-Zaid
Abstract:
The main objective of this study was to examine the association between elevated CRP levels and the incidence of hypertension before and after adjusting for age, BMI, gender, SES, smoking, diabetes, LDL cholesterol and HDL cholesterol, and to determine whether the associations differ by race. Method: Cross-sectional data for participants aged 17 to 74 years included in the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010 were analysed. CRP level was classified into three categories (> 3 mg/L, between 1 mg/L and 3 mg/L, and < 1 mg/L). Blood pressure was categorized using the JNC 7 algorithm: hypertension was defined as either systolic blood pressure (SBP) of 140 mmHg or more, diastolic blood pressure (DBP) of 90 mmHg or greater, or a self-reported prior diagnosis by a physician. Pre-hypertension was defined as 139 >= SBP >= 120 or 89 >= DBP >= 80. A multinomial regression model was used to measure the association between CRP level and hypertension. Results: In univariable models, CRP concentrations > 3 mg/L were associated with a 73% greater risk of incident hypertension compared with CRP concentrations < 1 mg/L (hypertension: odds ratio [OR] = 1.73; 95% confidence interval [CI], 1.50-1.99). Ethnic comparisons showed that Mexican Americans had the highest risk of incident hypertension (OR = 2.39; 95% CI, 2.21-2.58). This risk became statistically insignificant, however, either after controlling for other variables (hypertension: OR = 0.75; 95% CI, 0.52-1.08) or when categorized by race [Mexican American: OR = 1.58; 95% CI, 0.58-4.26; Other Hispanic: OR = 0.87; 95% CI, 0.19-4.42; Non-Hispanic White: OR = 0.90; 95% CI, 0.50-1.59; Non-Hispanic Black: OR = 0.44; 95% CI, 0.22-0.87].
The same results were found for pre-hypertension, with Non-Hispanic Blacks showing the highest significant risk of pre-hypertension (OR = 1.60; 95% CI, 1.26-2.03). When CRP concentrations were between 1.0 and 3.0 mg/L, prehypertension was associated with a higher likelihood of elevated CRP in unadjusted models (OR = 1.37; 95% CI, 1.15-1.62). The same relationship held for Non-Hispanic Whites, Non-Hispanic Blacks, and other races (Non-Hispanic White: OR = 1.24; 95% CI, 1.03-1.48; Non-Hispanic Black: OR = 1.60; 95% CI, 1.27-2.03; other race: OR = 2.50; 95% CI, 1.32-4.74), while the association was insignificant for Mexican Americans and other Hispanics. In the adjusted model, the relationship between CRP and prehypertension was no longer present. Similarly, hypertension was not independently associated with elevated CRP, and the results were the same after grouping by race or adjusting for the confounding variables. The same results were obtained when SBP or DBP was treated as a continuous measure. Conclusions: This study confirmed an association between hypertension, prehypertension and elevated CRP levels; however, this association was no longer present after adjusting for other variables. Ethnic group differences were statistically significant in the univariable models but disappeared after controlling for other variables.
Keywords: CRP, hypertension, ethnicity, NHANES, blood pressure
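The odds ratios and confidence intervals reported above come from adjusted multinomial models; as a simpler, hedged illustration of how such OR/CI pairs are computed, here is the unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 exposure-outcome table. The counts are invented for the example, not NHANES data.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table,
    e.g. hypertension (cases) vs. not, by CRP > 3 mg/L (exposure)."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only (hypothetical, not from the study)
or_, lo, hi = odds_ratio_ci(300, 700, 200, 800)
```

A multinomial model with covariates, as used in the paper, generalises this by estimating adjusted log-odds for each outcome category simultaneously.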
Procedia PDF Downloads 413
231 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network
Authors: Gulfam Haider, Sana Danish
Abstract:
Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, the Rectified Linear Unit (ReLU). ReLU, known for its favorable properties in prior research, is incorporated with the aim of bolstering the effectiveness of the optimizers under scrutiny. Specifically, we evaluate these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments is conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adadelta, Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis comprises a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks.
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state of the art in image classification with convolutional neural networks.
Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent
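The update rules of three of the compared optimizers can be illustrated on a toy one-dimensional quadratic, minimising f(x) = (x - 3)^2. This is a sketch of the algorithms themselves, not the paper's CIFAR-10 experiments; all hyperparameters are illustrative defaults.

```python
import math

def grad(x):
    # Gradient of the toy objective f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

def sgd(x, lr=0.1, steps=100):
    # Plain stochastic gradient descent (here full-batch, so just GD)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def rmsprop(x, lr=0.1, beta=0.9, eps=1e-8, steps=200):
    # Divide the step by a running RMS of past gradients
    s = 0.0
    for _ in range(steps):
        g = grad(x)
        s = beta * s + (1 - beta) * g * g
        x -= lr * g / (math.sqrt(s) + eps)
    return x

def adam(x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    # First and second moment estimates with bias correction
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x
```

On this smooth convex problem plain gradient descent converges geometrically, while the adaptive methods take roughly constant-size normalised steps and hover near the minimum, which hints at why their relative behaviour on real CNN loss surfaces is worth the empirical comparison the paper performs.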
Procedia PDF Downloads 125
230 Boost for Online Language Course through Peer Evaluation
Authors: Kirsi Korkealehto
Abstract:
The purpose of this research was to investigate how the peer-evaluation concept was perceived by language teachers developing online language courses. The online language courses in question were developed in language teacher teams within the nationwide KiVAKO-project funded by the Finnish Ministry of Education and Culture. The participants of the project were 86 language teachers from 26 higher education institutions in Finland. The KiVAKO-project aims to strengthen the language capital of higher education institutions by building a nationwide online language course offering on a shared platform. All higher education students can study the courses regardless of their home institutions. The project covers the following languages: Chinese, Estonian, Finnish Sign Language, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish at CEFR levels A1-C1. The courses were piloted in the autumn term of 2019, and an online peer-evaluation session was organised for all participating teachers in spring 2020. The peer evaluation utilised the quality criteria for online implementations developed earlier within the eAMK-project, which was also funded by the Finnish Ministry of Education and Culture with the aim of improving the digital and pedagogical competences of higher education teachers. In the online peer-evaluation session, the teachers were divided into Zoom breakout rooms, in each of which two pilot courses were presented dialogically by their teachers. The other language teachers provided feedback on each course on the basis of the quality criteria, after which good practices and ideas were gathered into an online document. The breakout rooms were facilitated by one teacher, who was briefed and provided with a slide set prior to the online session. After the peer-evaluation sessions, the language teachers were asked to respond to an online feedback questionnaire.
The questionnaire included three multiple-choice questions using Likert-scale ratings and two open-ended questions. The questionnaire was answered immediately after the sessions: the link and a QR code to it were on the last slide, and responses were collected on site. The data comprise the questionnaire responses and the researcher's observations during the sessions. The data were analysed with a qualitative content analysis method using the Atlas.ti programme, while the Likert-scale answers provided results per se. The observations were used as complementary data to support the primary data. The findings indicate that the work in the breakout rooms was successful and the workshops proceeded smoothly. The workshops were perceived as beneficial in terms of improving the piloted courses and developing the participants' own work as teachers. Further, the language teachers stated that the collegial discussions and the sharing of ideas were fruitful. Suggested improvements to the workshops were to allow more time for free discussion and an opportunity to familiarize oneself with the quality criteria and the presented language courses beforehand. The quality criteria were considered to provide a suitable frame for self- and peer evaluation.
Keywords: higher education, language learning, online learning, peer evaluation
Procedia PDF Downloads 126
229 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration
Authors: Damtew Samson Zerihun
Abstract:
This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity in the presence of human intervention and machine failures, using a systematic approach that increases system availability and exploits additional knowledge about the machines so as to improve human-machine collaboration (HMC). It also highlights the implemented PLC safety algorithm, applying this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model providing a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the hierarchical safety control concept is proposed. This offers a generic description of human-machine collaboration based on finite state machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extensibility, the hierarchical safety control scheme allows upholding productivity while maintaining safety, with reduced engineering effort compared to existing solutions. The approach begins with partitioning the area around the Integrated Manufacturing System (IMS) into zones, defined by operator tasks and the risk assessment, that describe the location of the human operator and thus identify the related potential hazards and trigger the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe human-machine collaboration and increasing productivity.
In a sample scenario, it is shown that a productivity increase on the order of 2.5% is already possible with a hierarchical safety control; consequently, under the given assumptions, a total of 213 € could be saved for each intervention compared to a protective stop reaction. The loss is thereby reduced by 22.8% if the occasional hazard can be refined in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover, which can save millions per year. Moreover, the paper presents the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI and safety devices in addition to the main components of the IMS.
Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logic controller
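The zone-based FSM idea can be sketched as follows. This is a minimal illustration of the hierarchy (least restrictive safety function first), not the authors' PLC algorithm; the zone and state names are assumptions made for the example.

```python
from enum import Enum

class Zone(Enum):
    OUTSIDE = 0        # no operator detected near the machine
    REDUCED_SPEED = 1  # operator in the outer monitored zone
    STOP = 2           # operator inside the hazardous zone

class SafetyController:
    """Minimal finite-state sketch of a hierarchical safety controller:
    the operator's detected zone selects the *least restrictive* safety
    function that still mitigates the hazard, so production continues at
    safe reduced speed instead of always triggering a protective stop."""

    def __init__(self):
        self.state = "RUN_FULL_SPEED"

    def on_zone(self, zone):
        if zone is Zone.STOP:
            self.state = "SAFE_STANDSTILL"       # protective stop
        elif zone is Zone.REDUCED_SPEED:
            self.state = "SAFE_REDUCED_SPEED"    # keep producing, slower
        else:
            self.state = "RUN_FULL_SPEED"        # normal operation
        return self.state

ctrl = SafetyController()
states = [ctrl.on_zone(z)
          for z in (Zone.OUTSIDE, Zone.REDUCED_SPEED, Zone.STOP, Zone.OUTSIDE)]
```

The productivity gain quoted above comes precisely from the middle state: interventions that would otherwise force a full stop are instead handled at safe reduced speed whenever the operator's zone permits it.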
Procedia PDF Downloads 369
228 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton
Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla
Abstract:
An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, compatibility of the architectural design with human motion, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs, which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a four-bar parallelogram linkage and a serial linkage was chosen: the four-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree of freedom (DOF) required for lowering and raising the arm, while the single linkage with a torsional spring provides the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during rehabilitation of a partially healthy person. Primary functional movements (activities of daily life, ADL) are routine activities that people perform every day, such as cleaning, dressing, and feeding. We focus on the feeding process to make people independent with respect to feeding tasks. The tasks target post-surgery patients under rehabilitation with less than 40% weakness. A challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises.
The task-based synthesis procedure framework is discussed for the proposed architecture. The results include a simulation of the architectural concept tracking human-arm movements while displaying the kinematic and static study parameters for a standard human weight. D-H parameters are used for the kinematic modelling of the hybrid mechanism, and the model is used in performing task-based optimal synthesis with an evolutionary algorithm.
Keywords: passive mechanism, task-based synthesis, emulating human motion, exoskeleton
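The D-H modelling step mentioned above can be sketched with the standard Denavit-Hartenberg link transform. This is a generic illustration, not the paper's model: the two-link planar chain and the link lengths below are assumptions chosen to mimic a shoulder-plus-elbow arm.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 row-major matrix
    (theta: joint angle, d: link offset, a: link length, alpha: link twist)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    # Chain two homogeneous transforms
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Illustrative two-link planar arm: "shoulder" at 90 deg with a 0.30 m link,
# "elbow" at -90 deg with a 0.25 m link (hypothetical lengths)
T = mat_mul(dh_transform(math.pi / 2, 0.0, 0.30, 0.0),
            dh_transform(-math.pi / 2, 0.0, 0.25, 0.0))
end_effector = (T[0][3], T[1][3])
```

Chaining one such transform per joint gives the forward kinematics that a task-based synthesis loop evaluates against the recorded feeding trajectories.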
Procedia PDF Downloads 137
227 Motivation and Multiglossia: Exploring the Diversity of Interests, Attitudes, and Engagement of Arabic Learners
Authors: Anna-Maria Ramezanzadeh
Abstract:
Demand for the Arabic language is growing worldwide, driven by increased interest in the multifarious purposes the language serves, both for heritage learners and for those studying Arabic as a foreign language. The diglossic, or indeed multiglossic, nature of the language as used in Arabic-speaking communities, however, is seldom represented in the content of classroom courses. This disjoint between the nature of provision and students' expectations can severely impact their engagement with course material and their motivation to commence or continue learning the language. The nature of motivation and its relationship to multiglossia is sparsely explored in the current literature on Arabic. The theoretical framework proposed here aims to address this gap by presenting a model and instruments for measuring Arabic learners' motivation in relation to the multiple strands of the language. It adopts and develops the L2 Motivational Self System (L2MSS), originally proposed by Zoltán Dörnyei, which measures motivation as the desire to reduce the discrepancy between learners' current and future self-concepts in terms of the second language (L2). The tripartite structure incorporates measures of the current L2 self, the future L2 self (consisting of an ideal L2 self and an ought-to L2 self), and the L2 learning experience. The strength of the self-concepts is measured across three different domains of Arabic: Classical, Modern Standard, and Colloquial. The focus on learners' self-concepts allows for an exploration of the effect of multiple factors on motivation towards Arabic, including religion. The relationship between Islam and Arabic is often cited as a prominent reason behind some students' desire to learn the language; exactly how and why this factor features in learners' L2 self-concepts has not yet been explored. Specially designed surveys and interview protocols are proposed to facilitate the exploration of these constructs.
The L2 Learning Experience component of the model is operationalised as learners’ task-based engagement. Engagement is conceptualised as multi-dimensional and malleable. In this model, situation-specific measures of the cognitive, behavioural, and affective components of engagement are collected via specially designed repeated post-task self-report surveys, administered on personal digital assistants over multiple Arabic lessons. Tasks are categorised according to language learning skill. Given the domain-specific uses of the different varieties of Arabic, the relationship between learners’ engagement with different types of tasks and their overall motivational profiles will be examined to determine the extent of the interaction between the two constructs. A framework for this data analysis is proposed and hypotheses are discussed. The unique combination of situation-specific measures of engagement and a person-oriented approach to measuring motivation allows for a macro- and micro-analysis of the interaction between learners and the Arabic learning process. By combining cross-sectional and longitudinal elements with a mixed-methods design, the proposed model offers the potential for capturing a comprehensive and detailed picture of the motivation and engagement of Arabic learners. The application of this framework has a number of potential pedagogical and research implications, which will also be discussed. Keywords: Arabic, diglossia, engagement, motivation, multiglossia, sociolinguistics
Procedia PDF Downloads 166
226 Mathematical Toolbox for Editing Equations and Geometrical Diagrams and Graphs
Authors: Ayola D. N. Jayamaha, Gihan V. Dias, Surangika Ranathunga
Abstract:
There are currently a lot of educational tools designed for mathematics. Open-source packages such as GeoGebra and Octave are bulky in their architecture, and software such as MATLAB provides far more functionality than is needed. Many computer-aided online grading and assessment tools require an equation and diagram editor to be integrated into their software, yet no existing editor caters to all their needs in editing equations, geometrical diagrams and graphs. Existing software for editing equations includes Alfred’s Equation Editor, Codecogs, DragMath, Maple, MathDox, MathJax, MathMagic, MathFlow, Math-o-mir, Microsoft Equation Editor, MiraiMath, OpenOffice, WIRIS Editor and MyScript. These range from commercial to open source; some support handwriting recognition or mobile apps, render MathML/LaTeX, or are Flash/web-based with JavaScript display engines. Diagram editors include GeoKone.NET, Tabulae, Cinderella 1.4, MyScript, Dia, Draw2D touch, Gliffy, GeoGebra, Flowchart, Jgraph, JointJS, JPainter online diagram editor and 2D sketcher. All of these are open source except MyScript, and they can be used for editing mathematical diagrams. However, they do not fully cater to the needs of a typical computer-aided assessment tool or educational platform for mathematics. The solution presented here is a web-based, lightweight, easy-to-implement-and-integrate HTML5 canvas editor that renders on all modern web browsers. The scope of the project is an editor that covers the equations, mathematical diagrams and drawings found on the O/L Mathematics Exam Papers in Sri Lanka. Using the tool, students can enter any equation into the system, which can be part of an online remote learning platform. Users can also create and edit geometrical drawings and graphs, and carry out the geometrical constructions that require only compass and ruler, from the editing interface provided by the software. 
The special feature of this software is geometrical constructions. It allows users to create constructions such as angle bisectors, perpendicular lines, 60° angles and perpendicular bisectors. The tool correctly imitates the functioning of rulers and compasses to create the required construction. Users are therefore able to do geometrical drawings on the computer successfully, yielding a digital format of the drawing for further processing. Secondly, users can create and edit Venn diagrams, colour them and label them. In addition, students can draw probability tree diagrams and compound probability outcome grids, and label and mark regions within the grids. Thirdly, students can draw first- and second-order graphs: they mark points on a graph paper and the system connects the dots to draw the graph. Further, students are able to draw standard shapes such as circles and rectangles by selecting points on a grid or entering the parametric values. Keywords: geometrical drawings, html5 canvas, mathematical equations, toolbox
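Internally, compass-and-ruler constructions of this kind reduce to simple vector geometry. The following sketch (hypothetical helper names, not the editor's actual API) shows how a perpendicular bisector and an angle bisector might be computed:

```python
import math

def perpendicular_bisector(p, q):
    """Return a point on the perpendicular bisector of segment pq
    and the line's direction vector (the segment's normal)."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2   # midpoint of the segment
    dx, dy = q[0] - p[0], q[1] - p[1]               # segment direction
    return (mx, my), (-dy, dx)                      # direction rotated 90 degrees

def angle_bisector(vertex, a, b):
    """Unit direction vector of the bisector of angle a-vertex-b."""
    ua = (a[0] - vertex[0], a[1] - vertex[1])
    ub = (b[0] - vertex[0], b[1] - vertex[1])
    na, nb = math.hypot(*ua), math.hypot(*ub)
    d = (ua[0] / na + ub[0] / nb, ua[1] / na + ub[1] / nb)  # sum of unit vectors
    n = math.hypot(*d)
    return (d[0] / n, d[1] / n)

mid, norm = perpendicular_bisector((0, 0), (4, 0))
# mid == (2.0, 0.0), norm == (0, 4): the vertical line x = 2
```

The same midpoint-and-normal representation also supports snapping and rendering on the canvas.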
Procedia PDF Downloads 377
225 From Shelf to Shell - The Corporate Form in the Era of Over-Regulation
Authors: Chrysthia Papacleovoulou
Abstract:
The era of de-regulation, offshore and tax-haven jurisdictions, and shelf companies has come to an end, even as the usage of complex corporate structures involving trust instruments, special purpose vehicles, holding-subsidiary chains in offshore jurisdictions, and advantageous tax treaties soars. States which raced to introduce corporate-friendly legislation, tax incentives, and creative international trust law in order to attract greater FDI are now faced with regulatory challenges and are forced to revisit the corporate form and its tax treatment. The fiduciary services industry, which dominated over the last three decades, is now striving to keep up with the new regulatory framework resulting from a number of European and international legislative measures. This article considers the challenges to the company and the corporate form posed by the legislative measures on tax planning and tax avoidance: CRS reporting, FATCA, CFC rules, the OECD’s BEPS, the EU Commission's new transparency rules for intermediaries (extending to the tax advisors, accountants, banks and lawyers who design and promote tax planning schemes for their clients), new EU rules to block artificial tax arrangements and new transparency requirements for financial accounts, tax rulings and multinationals’ activities (DAC 6), the G20's decision on a global 15% minimum corporate tax, and banking regulation. As a result, states find themselves in a race of over-regulation and compliance. These legislative measures constitute a global upside-down tax harmonisation. Through the adoption of the OECD’s BEPS, states agreed to an international collaboration to end tax avoidance and reform international taxation rules. 
Whilst the idea was to ensure that multinationals would pay their fair share of tax everywhere they operate, an indirect result of the aforementioned regulatory measures was to attack private clients: individuals who, over the past three decades, used the international tax system and jurisdictions such as the Marshall Islands, the Cayman Islands, the British Virgin Islands, Bermuda, the Seychelles, St. Vincent, Jersey, Guernsey, Liechtenstein, Monaco, Cyprus, and Malta, to name but a few, to engage in legitimate tax planning and tax avoidance. Companies can no longer maintain bank accounts without satisfying a real-substance test. States override the incorporation doctrine and apply a real-seat or real-substance test in taxing companies and their activities, targeting even the beneficial owners personally with tax liability. Tax authorities in civil law jurisdictions lift the corporate veil through public UBO and trust registries. As a result, the corporate form and the doctrine of limited liability are challenged at their core. Lastly, this article identifies the development of new instruments, such as funds and private placement insurance policies, and the trend of digital nomad workers. The baffling question is whether industry and states can meet somewhere in the middle and exit this over-regulation frenzy. Keywords: company, regulation, tax, corporate structure, trust vehicles, real seat
Procedia PDF Downloads 139
224 Study of a Decentralized Electricity Market on Awaji Island
Authors: Arkadiusz P. Wójcik, Tetsuya Sato, Shin-Ichiro Shima, Mateusz Malanowski
Abstract:
Over the last decades, new technologies have significantly changed the way information is transmitted and stored, and renewable energy sources have become prevalent and affordable. Cooperation between the information and communication technology industry and the renewable energy industry makes it possible to create a next-generation, decentralized power grid. In this context, the study seeks to identify the wider benefits to the local Japanese economy of the development of a decentralised electricity market. Our general approach aims to integrate an economic analysis (monetary appraisal of costs and benefits to society) with externalities that are not quantifiable in monetary terms (e.g., social impact, environmental impact). The study also highlights opportunities and sets out recommendations for the citizens of the island and the local government. The simulation is the scientific basis for the economic impact analysis. Various types of energy sources have been taken into account: residential wind farms, residential wind turbines, solar farms, residential solar panels and private solar farms. Analysis of local geographic and economic conditions allowed a customized business model to be created. Farmers on Awaji Island often rotate their crops: during each cycle, one part of the field rests and replenishes its nutrients, and in the next year another part of the field rests. Portable solar panels could be freely set up in the resting part and, at the end of the crop cycle, moved to the next resting part. Because of the spacious fields, 500 square meters of portable solar panels per household were proposed and simulated. The devised simulation shows that the return on investment for solar panels on the island could reach up to 37.21%. Supposing that about 20% of households install solar panels, they could produce 49.11% of the electric energy consumed by households on the island. 
The analysis shows that the rest of the energy supply can be produced by the currently existing large solar farm and two wind farms, meeting 97.59% of household electricity demand on the island. Although there are more than 7,000 agricultural fields on the island, young people tend to avoid agricultural work and prefer to move from the island to big cities, live there in small apartments and work late into the night. The business model proposed in this study could increase a farmer’s monthly income by ¥200,000 - ¥300,000 (1,600 - 2,400 euro). Young people could work less and have a higher standard of living than in a city. Creation of a decentralized electricity market can unlock significant benefits in other industries (e.g., electric vehicles), providing a welcome boost to economic growth, jobs and quality of life. Keywords: digital twin, MATLAB, model-based systems engineering, Simulink, smart grid, systems engineering
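The ROI figure above comes from the authors' simulation; the basic shape of such an estimate can be sketched as below. Every parameter value here is an illustrative placeholder, not an input or result from the study:

```python
# Illustrative sketch of a household solar ROI estimate.
# All figures are hypothetical placeholders, not the study's inputs.
PANEL_AREA_M2 = 500              # portable panel area per household (as in the study)
YIELD_KWH_PER_M2_YEAR = 150      # assumed annual yield per square meter
FEED_IN_PRICE_YEN = 20           # assumed price per kWh sold to the grid
INSTALL_COST_YEN = 40_000_000    # assumed total investment

annual_energy = PANEL_AREA_M2 * YIELD_KWH_PER_M2_YEAR   # kWh per year
annual_revenue = annual_energy * FEED_IN_PRICE_YEN      # yen per year
roi_percent = 100 * annual_revenue / INSTALL_COST_YEN   # simple annual ROI, %
```

A full simulation would additionally model panel degradation, crop-cycle relocation costs, and demand matching, which is where the study's much higher figure would come from.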
Procedia PDF Downloads 121
223 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces
Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani
Abstract:
A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component level. To address this shortfall, recent research combines integration readiness level (IRL) with technology readiness level (TRL) to form a system readiness level (SRL). SRL can be enriched with more robust quantitative methods to provide the program manager a useful tool prior to committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra, and proposes a modification of the desirable SRL mathematical properties such that a tightly integrated subsystem (with a multitude of interfaces) can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager before committing to expensive technology development. This research also ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect actuality if that component is not critical or tightly coupled. Integration connections can therefore be weighted according to importance, and readiness levels are modified to a cardinal scale (based on an analytic hierarchy process). An integration arc’s importance depends on the connected nodes and on the additional integration arcs connected to those nodes. Lack of integration is represented not by zero but by a perfect integration maturity value; naturally, the importance (or weight) of such an arc would be zero. 
To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find various clusters or communities that can be optimized for the most representative subsystem SRL. This novel calculation is then benchmarked through simulation and against past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapid prototyping and fielding). The model remains a relatively simple, accessible tool, but one at higher fidelity and validated with past data, for the program manager to use in deciding major defense acquisition program milestones. Keywords: readiness, maturity, system, integration
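The min-based (tropical) aggregation the abstract critiques, and a weighted variant in the spirit of the proposed modification, can be sketched in a few lines. The weighting scheme below is illustrative only, not the paper's exact formulation:

```python
def srl_tropical(levels):
    """Plain tropical aggregation: the least mature element caps the system SRL."""
    return min(levels)

def srl_weighted(levels, weights, scale_max=9):
    """Illustrative weighted variant: an immature but loosely coupled element
    (low weight in [0, 1]) drags the system level down less.  Each level is
    interpolated toward the top of the scale by (1 - weight) before taking min."""
    adjusted = [lvl + (1 - w) * (scale_max - lvl) for lvl, w in zip(levels, weights)]
    return min(adjusted)

levels = [9, 8, 2]            # TRL/IRL values on the usual 1-9 scale
weights = [1.0, 1.0, 0.1]     # the immature element is barely coupled

srl_tropical(levels)          # -> 2: one weak component dominates the whole system
srl_weighted(levels, weights) # -> 8.0: the weak but uncoupled element matters less
```

This reproduces the property the abstract asks for: a critical, tightly coupled interface (weight near 1) still caps the SRL, while a peripheral one does not.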
Procedia PDF Downloads 92
222 The Evolution of Moral Politics: Analysis on Moral Foundations of Korean Parties
Authors: Changdong Oh
Abstract:
With the arrival of post-industrial society, social scientists have been giving attention to the question of which factors shape the cleavages between political parties. In particular, there is a heated controversy over whether and how social and cultural values influence the identities of parties and voting behavior. Drawing on Moral Foundations Theory (MFT), which approached similar issues by considering the effect of five moral foundations on people’s political decision-making, this study investigates the role of moral rhetoric in the evolution of Korean political parties. The researcher collected official announcements released by the two major parties (the Democratic Party of Korea and the Saenuri Party) from 2007 to 2016, and analyzed the data using the Word2Vec algorithm and the Moral Foundations Dictionary. The five moral decision modules of MFT, comprising care and fairness (individualistic morality) and loyalty, authority and sanctity (group-based, Durkheimian morality), can be represented in the vector space constructed from the party announcement data. By comparing each party vector with the five morality vectors, one can see how the political parties have actively used each of the five moral foundations to express themselves and the opposition. Results show that the conservative party tends to actively draw on collective morality such as loyalty, authority, and purity to differentiate itself; notably, this moral differentiation strategy is most prevalent when criticizing the opposition party. In contrast, the liberal party tends to be concerned with individualistic morality such as fairness. This result indicates that a moral cleavage does exist between parties in South Korea. Furthermore, the individualistic moral gap between the two parties has eased over time, which seems to be due to the conservative party’s discussion of economic democratization that emerged after 2012, but the community-related moral gaps have widened. 
These results imply that past political cleavages related to economic interests are diminishing, replaced by cultural and social values associated with communitarian morality. However, since the conservative party’s differentiation strategy is largely tied to negative campaigns, it is doubtful whether such moral differentiation among political parties can contribute to voters’ long-term party identification, and further research is needed to determine whether it is sustainable. Despite its limitations, this study makes it possible to track and identify the moral changes of the party system through automated text analysis. More generally, it could contribute to the analysis of various texts associated with the moral foundations and to finding distributed representations of moral and ethical values. Keywords: moral foundations theory, moral politics, party system, Word2Vec
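The comparison between a party vector and a morality vector described above is, at its core, a cosine similarity between embeddings. A minimal sketch, with tiny hypothetical vectors standing in for the actual Word2Vec embeddings (which would typically have hundreds of dimensions and be averaged over announcement words and dictionary words):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d embeddings standing in for the real Word2Vec vectors:
party_vec = np.array([0.9, 0.1, 0.8, 0.2])    # e.g. averaged announcement words
loyalty_vec = np.array([1.0, 0.0, 1.0, 0.0])  # e.g. averaged dictionary words
fairness_vec = np.array([0.0, 1.0, 0.0, 1.0])

cosine(party_vec, loyalty_vec)   # high: this party's rhetoric leans on loyalty
cosine(party_vec, fairness_vec)  # low: little overlap with fairness vocabulary
```

Tracking these similarities year by year for each party is what would reveal the widening or narrowing moral gaps the study reports.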
Procedia PDF Downloads 362
221 Affects Associations Analysis in Emergency Situations
Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko
Abstract:
Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls, and also attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information that offers a clue to the speaker’s emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item being a single previously annotated piece of information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g., Laplace, conviction, interest factor, cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules, and we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are strong rules including neutral or even positive emotions. 
Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, analysis of a rule’s possible context may be a clue to the situation a caller is in. Keywords: data mining, emergency phone calls, emotional profiles, rules
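To make the support/confidence machinery concrete, here is a tiny Apriori-style sketch on a toy corpus of annotated calls (the data and thresholds are illustrative; the study used the full Apriori algorithm with candidate pruning over itemsets of arbitrary size):

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.3, min_conf=0.7):
    """Tiny Apriori-style sketch: frequent itemsets up to size 2,
    then rules X -> Y scored by support and confidence."""
    n = len(transactions)
    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    frequent = [frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support]
    pairs = [a | b for a, b in combinations(frequent, 2)
             if support(a | b) >= min_support]

    rules = []
    for pair in pairs:
        for x in pair:
            X, Y = frozenset([x]), pair - frozenset([x])
            conf = support(pair) / support(X)   # P(Y | X)
            if conf >= min_conf:
                rules.append((set(X), set(Y), support(pair), conf))
    return rules

calls = [{"sadness", "anxiety"}, {"sadness", "anxiety", "stress"},
         {"anger", "stress"}, {"sadness", "anxiety"}]
apriori_rules(calls)
# includes ({'sadness'} -> {'anxiety'}, support 0.75, confidence 1.0)
```

The anti-monotonicity of support (a superset can never be more frequent than its subsets) is what lets full Apriori prune the exponential candidate space.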
Procedia PDF Downloads 408
220 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on supervised learning (SL), which suffers from various limitations. Deep reinforcement learning (DRL) addresses these drawbacks of SL by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions with each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent environment, as a partially observed Markov decision process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. 
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making. Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
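The kind of environment interface such an agent interacts with can be sketched gym-style, with continuous actions as target portfolio weights and reward as return net of transaction costs. All names and numbers below are illustrative, not the paper's implementation (which additionally augments the state with technical indicators and news sentiment):

```python
import numpy as np

class PortfolioEnv:
    """Toy portfolio environment: continuous actions are target weights
    over n assets; reward is portfolio return net of transaction costs."""
    def __init__(self, prices, cost_rate=0.001):
        self.prices = np.asarray(prices, dtype=float)  # shape (T, n_assets)
        self.cost_rate = cost_rate
        self.reset()

    def reset(self):
        self.t = 0
        self.weights = np.zeros(self.prices.shape[1])  # start fully in cash
        return self.prices[0]

    def step(self, action):
        target = np.clip(action, 0, 1)
        if target.sum() > 0:
            target = target / target.sum()             # normalize to weights
        cost = self.cost_rate * np.abs(target - self.weights).sum()
        returns = self.prices[self.t + 1] / self.prices[self.t] - 1
        reward = float(target @ returns - cost)        # net portfolio return
        self.weights = target
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self.prices[self.t], reward, done

env = PortfolioEnv([[100, 50], [101, 49]])
obs, reward, done = env.step(np.array([1.0, 0.0]))  # go all-in on asset 0
```

A TD3 agent would then learn a policy mapping the (augmented) observation to the continuous weight vector, with the transaction-cost term discouraging excessive re-allocation.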
Procedia PDF Downloads 178
219 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF), and the correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimates from PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR with dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT. Keywords: coronary flow reserve, dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin
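The flow-reserve calculation itself is a simple per-territory ratio; a sketch using the territory names from the abstract (the MBF values below are illustrative, not patient data):

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """CFR = stress MBF / rest MBF per vascular territory.
    Inputs are dicts of MBF values in ml/min/g."""
    return {t: stress_mbf[t] / rest_mbf[t] for t in stress_mbf}

# Illustrative territorial MBF values (ml/min/g), not measured data:
rest = {"LAD": 0.8, "RCA": 0.9, "LCX": 0.85}
stress = {"LAD": 2.4, "RCA": 1.35, "LCX": 2.55}

coronary_flow_reserve(stress, rest)
# roughly {'LAD': 3.0, 'RCA': 1.5, 'LCX': 3.0}; a blunted ratio
# (commonly below about 2) would flag a territory for attention
```

In the study, the stress and rest MBF values entering this ratio come from kinetic modeling of the dynamically reconstructed SPECT frames.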
Procedia PDF Downloads 150
218 Solutions for Food-Safe 3D Printing
Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov
Abstract:
Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and has displaced conventional technology in uses from prototyping to producing end-user parts and products. 3D printing involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. It is an important task to analyze the growth of bacteria on the 3D-printed surface and in the gaps between the layers. By default, a 3D-printed object is not food safe after longer usage and direct contact with food (even when printed with food-safe filaments), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety standpoint, since a 3D-printed object in the food industry may have direct contact with food; the main purpose here is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). A further aim was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see whether they can be cleaned with boiling or a similar high-temperature treatment. 
This work proved that all three of the methods mentioned can improve the food safety of a 3D-printed object, but the size of the effect varies. The best result was obtained by coating with epoxy resin: the object became cleanable like any other injection-molded plastic object with a smooth surface. Very good results were also obtained by boiling the objects, and it is encouraging that nowadays more and more special filaments carry a food-safe certificate and can withstand boiling temperatures. Using antibacterial filaments reduced bacterial colonies to one fifth; the biggest advantage of this method is that it does not require any post-processing, as the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences’ Doctoral School of Food Science for its support in this study. Keywords: food safety, 3D printing, filaments, microbial, temperature
Procedia PDF Downloads 142
217 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over recent years, with rapid increases in data availability and computing power, machine learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of machine learning in commercial banking has been limited by a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although machine learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because machine learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and performance of machine learning techniques, a hybrid model was developed at Dun & Bradstreet that blends machine learning algorithms with traditional approaches such as scorecards. The hybrid model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of machine learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of machine learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. 
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the hybrid model tries to match the score distribution generated by a machine learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed hybrid model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and machine learning models. The results of the analysis show that the hybrid model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from machine learning models into traditional scorecards. It is also observed that in some scenarios the hybrid model can be almost as predictive as the machine learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the hybrid model, machine learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes. Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
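For orientation, the classic per-bin WoE and the kind of ML-score-based estimate the abstract alludes to can be sketched as follows. The score-matching formula is one plausible reading (bin log-odds minus overall log-odds from predicted probabilities), not Dun & Bradstreet's actual method:

```python
import math

def woe_traditional(goods_in_bin, bads_in_bin, total_goods, total_bads):
    """Classic WoE: log of the bin's share of goods over its share of bads.
    Undefined for bins with zero goods or zero bads (the sparse-bin problem)."""
    return math.log((goods_in_bin / total_goods) / (bads_in_bin / total_bads))

def woe_from_scores(bin_scores, overall_bad_rate):
    """Illustrative estimate from ML-predicted bad probabilities:
    the bin's good-to-bad log-odds minus the overall log-odds.
    Usable even when a bin contains no observed bads."""
    p = sum(bin_scores) / len(bin_scores)        # mean predicted bad rate in bin
    logodds = lambda q: math.log((1 - q) / q)    # good-to-bad log odds
    return logodds(p) - logodds(overall_bad_rate)

woe_traditional(90, 10, 900, 300)   # positive: the bin is good-heavy
woe_from_scores([0.05, 0.05], 0.25) # positive: scores say bin is safer than average
```

The score-based variant is what allows a bin with only a handful of observations (where raw good/bad counts would be degenerate) to still receive a stable WoE estimate.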
Procedia PDF Downloads 134
216 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints. The paper aims to demonstrate how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. 
The findings of this research will contribute to a modern, complex approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives designers powerful tools to explore diverse design possibilities, significantly improving the quality of the investment during its entire lifecycle.
Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization
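The variant-generation step described above can be sketched as a simple parameter sweep with constraint filtering. The parameters (floor count, footprint width, orientation), the zoning/budget constraints, and the scoring rule below are hypothetical placeholders for illustration, not the framework actually used in the study.

```python
from itertools import product

# Hypothetical parametric model of a residential block: enumerate variants
# over a few parameters and keep only those meeting legal/cost constraints.
def generate_variants(max_height_m=25.0, budget=2_000_000):
    variants = []
    for floors, width_m, orientation in product(range(3, 11), (10, 12, 15), (0, 90, 180, 270)):
        height = floors * 3.0                       # assumed storey height of 3 m
        cost = 50_000 + floors * width_m**2 * 450   # assumed fixed + per-m2 cost
        if height > max_height_m or cost > budget:  # zoning height limit and budget
            continue
        solar = 1.0 if orientation in (0, 180) else 0.8   # toy orientation score
        score = solar * floors * width_m**2 / cost        # floor area per unit cost
        variants.append({"floors": floors, "width": width_m,
                         "orientation": orientation, "score": score})
    return sorted(variants, key=lambda v: v["score"], reverse=True)
```

In a real workflow the scoring rule would be replaced by the study's quality-of-life and cost criteria, and the feasible set would feed the iterative refinement described above.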
Procedia PDF Downloads 62
215 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning
Authors: Thomas James Bell III
Abstract:
Pedagogy is thought of as one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds challenges and promises to enhance the learning environment if implemented to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the concept that education is merely about students learning things; it is more about the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student. The teacher is no longer viewed as the sage on the stage but as a guide on the side. Instead, this approach views students as creators of knowledge, not empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and which areas need improvement. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that accommodates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rate results of 124 students in six sections of an Agile scrum course. 
The students will be separated into two groups; the first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer) of self-led and self-empowered students. The teams will conduct several event meetings that include sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a compare-means t-test to compare the mean exam pass rate of one group with that of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. But in light of the fourth industrial revolution, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education
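The planned comparison of group means can be sketched as a two-sample t statistic computed with the standard library; Welch's form is used here as one common choice, and the sample pass rates below are invented placeholders, not the study's data.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom.

    A one-tailed test then compares t against the critical value for df.
    """
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    se2 = va / na + vb / nb                       # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented section pass rates for the two groups, for illustration only.
instructor_led = [0.80, 0.90, 0.70, 0.85]
self_led = [0.60, 0.65, 0.70, 0.55]
t, df = welch_t(instructor_led, self_led)
```

Under the null hypothesis of zero population difference, a one-tailed decision compares `t` with the critical value of the t distribution at `df` degrees of freedom.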
Procedia PDF Downloads 142
214 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets will flow freely inside the scrubber, and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model, where all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase, and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, which are affecting the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have all been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases using source terms, through an approach in which the two software packages are coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results. 
The CFD model needed to be fast to evaluate in order to apply this optimization routine, for which approximately 1000 simulations were needed. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows for exploiting the benefits of both software programs.
Keywords: desulfurization, discrete phase, scrubber, wall film
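A derivative-free pattern search of the kind used for the parameter tuning can be sketched as follows. This is a minimal coordinate-search variant (Hooke-Jeeves style) under assumed defaults; the study's actual algorithm, parameters, and objective are not given in the abstract.

```python
def pattern_search(f, x0, step=0.5, tol=1e-4):
    """Derivative-free coordinate pattern search: probe each coordinate
    in both directions, accept improving moves, halve the step when stuck."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # accept any improving move
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2                # shrink the pattern when no move helps
    return x, fx
```

In the tuning context, `f` would be the mismatch between simulated and measured scrubber performance, and each evaluation of `f` would be one CFD run, which is why a fast model is essential.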
Procedia PDF Downloads 264
213 The Influence of Leadership Styles on Organizational Performance and Innovation: Empirical Study in Information Technology Sector in Spain
Authors: Richard Mababu Mukiur
Abstract:
Leadership is an important driver that plays a key role in the success and development of organizations, particularly in the current context of digital transformation, high competitiveness, and globalization. Leaders are persons who hold a dominant and privileged position within an organization, field, or sector of activity and are able to manage, motivate, and exercise a high degree of influence over others in order to achieve institutional goals. They achieve the commitment and engagement of others to embrace change and to make good decisions. Leadership studies in higher education institutions have examined how effective leaders run their organizations and have sought approaches that best fit the organizational context for better management, transformation, and improvement. Moreover, recent studies have highlighted the impact of leadership styles on organizational performance and innovation capacity, since some styles give better results than others. Effective leadership is part of a learning process that takes place through day-to-day tasks, responsibilities, and experiences that influence organizational performance, innovation, and the engagement of employees. The adoption of appropriate leadership styles can improve organizational results and encourage the learning process, team skills and performance, and employees' motivation and engagement. In the case of the Information Technology sector, leadership styles are particularly crucial since this sector is leading relevant changes and transformations in the knowledge society. In this context, the main objective of this study is to analyze managers' leadership styles and their relation to organizational performance and innovation, which may be mediated by the learning organization process and demographic variables. 
Therefore, it was hypothesized that transformational and transactional leadership would be the main styles adopted in the Information Technology sector and would influence organizational performance and innovation capacity. A sample of 540 participants from the Information Technology sector was determined in order to achieve the objective of this study. The Multifactor Leadership Questionnaire was administered as the principal instrument, along with the Scale of Innovation and the Learning Organization Questionnaire. Correlations and multiple regression analysis were used as the main techniques of data analysis. The findings indicate that leadership styles have a relevant impact on organizational performance and innovation capacity. Transformational and transactional leadership are the predominant styles in the Information Technology sector. The effective leadership style tends to be characterized by the capacity for generating and sharing knowledge that improves organizational performance and innovation capacity. Managers are adopting and adapting leadership styles that respond to the new organizational, social, and cultural challenges and realities of contemporary society. Managers who encourage innovation, foster the learning process, and share experience are valuable to the organization since they contribute to its development and transformation. Learning process capacity and demographic variables (age, gender, and job tenure) mediate the relationship between leadership styles, innovation capacity, and organizational performance. Transformational and transactional leadership tend to enhance organizational performance due to their significant impact on team-building, employees' engagement, and satisfaction. Some practical implications and future lines of research have been proposed.
Keywords: leadership styles, transformational leadership, organisational performance, organisational innovation
Procedia PDF Downloads 218
212 Gear Fault Diagnosis Based on Optimal Morlet Wavelet Filter and Autocorrelation Enhancement
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Condition monitoring is used to increase machinery availability and machinery performance, whilst reducing consequential damage, increasing machine life, reducing spare parts inventories, and reducing breakdown maintenance. An efficient condition monitoring system provides early warning of faults by predicting them at an early stage. When a localized fault occurs in gears, the vibration signals always exhibit non-stationary behavior. The periodic impulsive feature of the vibration signal appears in the time domain and the corresponding gear mesh frequency (GMF) emerges in the frequency domain. However, one limitation of frequency-domain analysis is its inability to handle non-stationary waveform signals, which are very common when machinery faults occur. Particularly at the early stage of gear failure, the GMF contains very little energy and is often overwhelmed by noise and higher-level macro-structural vibrations. An effective signal processing method would be necessary to remove such corrupting noise and interference. In this paper, a new hybrid method based on optimal Morlet wavelet filter and autocorrelation enhancement is presented. First, to eliminate the frequency associated with interferential vibrations, the vibration signal is filtered with a band-pass filter determined by a Morlet wavelet whose parameters are selected or optimized based on maximum Kurtosis. Then, to further reduce the residual in-band noise and highlight the periodic impulsive feature, an autocorrelation enhancement algorithm is applied to the filtered signal. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, the output dynamometers induce a load on the output joint shaft flanges. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. 
The gearbox used for experimental measurements is of the type most commonly used in modern small to mid-sized passenger cars with transversely mounted powertrain and front wheel drive: a five-speed gearbox with final drive gear and front wheel differential. The results obtained from practical experiments prove that the proposed method is very effective for gear fault diagnosis.
Keywords: wavelet analysis, pitted gear, autocorrelation, gear fault diagnosis
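Two building blocks of the method can be sketched in plain Python: kurtosis, which drives the selection of the Morlet filter band, and the normalized autocorrelation, which enhances the periodic impulsive feature. The Morlet wavelet filtering step itself is omitted here, so this is only an illustrative sketch of the two statistics, not the full pipeline.

```python
def kurtosis(x):
    """Fourth standardized moment; impulsive (faulty) signals score higher."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return (sum((v - m) ** 4 for v in x) / n) / s2 ** 2

def autocorr(x, max_lag):
    """Autocorrelation normalized by the lag-0 value; periodic impulses
    in the filtered signal show up as peaks at the impact period."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return [sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / c0
            for k in range(max_lag + 1)]
```

In the method described above, the Morlet filter parameters would be chosen to maximize `kurtosis` of the filtered signal, and `autocorr` would then be applied to that filtered signal to suppress residual in-band noise.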
Procedia PDF Downloads 388
211 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology involves assembling the electrode/separator/electrode stack by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are only used for the electrode process. These processes are suited to large-scale production of batteries and perfectly adapted to many application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped, efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new design of lithium-ion battery: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly and fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors), or RFID (communication function), on a common substrate to produce fully integrated, thin, and flexible new devices. To fulfill those specifications, a high-resolution printing process has been selected: aerosol jet printing. In order to fit this process's parameters, we worked on nanomaterial formulations for the current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, then flash sintering was applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. 
Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed. For anodes, LTO and graphite have been shown to be good candidates for the fully printed battery. The electrochemical performance of these materials has been evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. First, we will synthesize and formulate new specific nanomaterials based on metal oxides. Then a fully printed device will be produced and its electrochemical performance will be evaluated.
Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
Procedia PDF Downloads 251
210 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects
Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang
Abstract:
As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented to utilize 3D objects for conflict management, whereas for civil engineering structures, the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, for which cost and resources are linked along with process simulation in 4D objects, are commonly used, but they do not provide a decision-making function for process management problems that occur on site because they mostly focus on the visual representation of current status for process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation that minimizes process conflict, a construction duration reduction simulation according to execution progress status, optimized process plan simulation according to project cost change by year, and optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to the usability of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object acquired by an on-site web camera and a VR concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field in order to increase the usability of the 4D object. 
Because yearly project costs change frequently in civil engineering construction, the annual process plan should be recomposed appropriately according to project cost decreases or increases compared with the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state by finding a process optimized for the changed project cost without changing the construction duration, through a technique such as a genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within the possible scope of moving resources, by considering those resources that can be moved under a given field condition, instead of using a simple resource change simulation by schedule. The introduction of an active BIM function is expected to increase the field utilization of conventional nD objects.
Keywords: 4D, 5D, 6D, active BIM
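A genetic-algorithm search of the kind mentioned for re-planning under a changed budget can be sketched as a toy activity-selection problem: choose which optional activities to fund so that total value is maximized while the yearly cost stays within the revised budget. The activities, costs, values, and GA settings below are all hypothetical; the study's actual encoding and objective are not given in the abstract.

```python
import random

COSTS = [120, 200, 150, 350, 100, 220]   # hypothetical yearly activity costs
VALUES = [60, 100, 70, 180, 40, 110]     # hypothetical value of each activity
BUDGET = 600                             # revised yearly project budget

def fitness(bits):
    cost = sum(c for c, b in zip(COSTS, bits) if b)
    value = sum(v for v, b in zip(VALUES, bits) if b)
    return value if cost <= BUDGET else 0      # infeasible plans score zero

def ga(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    n = len(COSTS)
    # Seed with single-activity plans so the search starts from feasible points.
    pop = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    pop += [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size - n)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n)
            child[i] ^= rng.random() < 0.1      # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

A real 5D application would replace the bit-string with an encoding of process alternatives and add a constraint that the construction duration stays unchanged.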
Procedia PDF Downloads 276
209 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal
Authors: C. Bateira, J. Fernandes, A. Costa
Abstract:
The Douro Demarcated Region (DDR) is a Porto wine production region. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes, organized into agricultural terraces, which have experienced an intense and deep transformation in order to implement the mechanization of the work. The old terrace system, based on vertical stone wall support structures, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas by the Multiple Flow Direction method (MFD). A terraced area can only be analysed by these models if we have very detailed information representative of the terrain morphology, since the slope angle and the contributing areas depend on it. We achieved that using digital elevation models (DEM) with high resolution (pixels of 40 cm side), resulting from a set of photographs taken by a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM. This is supported by the statement that the interflow, although not coincident with the superficial flow, has important similarities with it. 
Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the base to model the real internal flow. Thus, we assumed that the contributing area of 1 m resolution modelled by MFD is representative of the internal flow of the area. In order to solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken by a flight at 5 km altitude. Using this map combination, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PVC) of 0.0004, and a TPR/FPR ratio of 2.06.
Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards
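The validation metrics reported above follow directly from the counts of the contingency matrix; a minimal sketch is below. The cell counts used in the usage comment are invented for illustration (chosen so that the TPR and FPR match the reported 0.97 and 0.47), not the study's actual counts.

```python
def contingency_metrics(tp, fp, fn, tn):
    """Rates derived from a binary contingency matrix, as used to
    validate a predicted instability map against an inventory."""
    return {
        "TPR": tp / (tp + fn),                    # true positive rate (sensitivity)
        "FPR": fp / (fp + tn),                    # false positive rate
        "ACC": (tp + tn) / (tp + fp + fn + tn),   # overall accuracy
        "PVC": tp / (tp + fp),                    # precision (positive predictive value)
    }

# Invented counts for illustration: 97 hits, 47 false alarms, 3 misses, 53 correct rejections.
m = contingency_metrics(tp=97, fp=47, fn=3, tn=53)
```

The very low precision reported in the study (PVC of 0.0004) reflects the strong class imbalance of landslide inventories: unstable cells are rare, so even a modest FPR produces many false alarms per true hit.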
Procedia PDF Downloads 177
208 Global Supply Chain Tuning: Role of National Culture
Authors: Aleksandr S. Demin, Anastasiia V. Ivanova
Abstract:
Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. The article presented aims to incorporate the peculiarities of national culture and the characteristics of the supply chain using the quantitative values of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research conducted is based on secondary data in the field of cross-country comparison achieved by Prof. Hofstede and received in the GLOBE project. The data mentioned are used to design different aspects of the supply chain both on the cross-functional and inter-organizational levels. The connection between a range of principles in general (role assignment, customer service prioritization, coordination of supply chain partners) and in comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is demonstrated through economic and mathematical models, mainly linear programming models. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and the national culture characteristics allows a transnational corporation to form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer's interest in the place of operation, while the second one takes into account the position of the transnational corporation and its previous experience in order to accord both organizational and national cultures. 
The effect of integration practice on the achievement of a specific supply chain goal in a specific location is advised to be assessed via the type of correlation (positive, negative, none) and the value of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods offered are simulated; that is why the numerical measurements should be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for the engineering of global supply chains in a transnational corporation and for further activities in doing business in the international arena. In addition, the methods, tools, and approaches suggested can be used by top managers searching for new ways of competitiveness and can be suitable for all staff members who are keen on the national culture traits topic. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide the mathematical and economic basis to find a comprehensive solution.
Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management
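Linear programming models of the kind referred to can be illustrated with a tiny two-variable example solved by vertex enumeration. The objective and constraint coefficients below are invented (e.g., two distribution channels with capacity limits); real models would have many variables and use a dedicated LP solver.

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c.x subject to a.x <= b and x >= 0 for two variables,
    by enumerating constraint-intersection vertices and keeping the
    feasible one with the best objective value."""
    cons = constraints + [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]  # x >= 0, y >= 0
    best = None
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                               # parallel constraints: no vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det        # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + 1e-9 for a, b in cons):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# Toy model: maximize 3x + 2y subject to x + y <= 4 (total capacity), x <= 2.
val, (x, y) = solve_lp_2d((3.0, 2.0), [((1.0, 1.0), 4.0), ((1.0, 0.0), 2.0)])
```

This exploits the fact that an LP optimum, when one exists, lies at a vertex of the feasible polygon; the national-culture indices from the comparative-management data would enter such models as coefficients or constraint bounds.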
Procedia PDF Downloads 106