Search results for: proposed drought severity index
2389 An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures
Authors: Julien Deborde, Thomas Milcent, Stéphane Glockner, Pierre Lubin
Abstract:
A fully Eulerian method is developed to solve the problem of fluid-elastic structure interactions based on a 1-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the Neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. In order to avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, which is a means of reducing the impact of waves on the coastline. So far, to our knowledge, only simulations with rigid or one-dimensional elastic structures have been studied in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
Keywords: damping wave, Eulerian formulation, finite volume, fluid structure interaction, hyperelastic material
Procedia PDF Downloads 323
2388 Single Atom Manipulation with 4 Scanning Tunneling Microscope Technique
Authors: Jianshu Yang, Delphine Sordes, Marek Kolmer, Christian Joachim
Abstract:
Nanoelectronics, for example calculating circuits that integrate molecule-scale logic gates and atomic-scale circuits, has recently been constructed and investigated. A major challenge is the characterization of their functional properties because of the problem of connecting the atomic scale to the micrometer scale. New experimental instruments and new processes have therefore been proposed. To allow precise measurement at the atomic scale while connecting to micrometer-scale electrical integration controllers, instrumentation keeps being improved. Our new machine, a low-temperature high-vacuum four-scanner scanning tunneling microscope, a custom instrument constructed by Omicron GmbH, is expected to scale characterization down to the atomic level. Here, we present our first verified results on the performance of this new instrument. The sample we selected is the Au(111) surface. The measurements were taken at 4.2 K. The atomic-resolution surface structure was observed with each of the four scanners, with a noise level better than 3 pm. With the tip-sample distance calibrated by I-z spectra, the sample conductance was derived from atomically localized I-V spectra. Furthermore, the surface conductance was measured using two methods: (1) by landing two STM tips on the surface with the sample floating; and (2) with the sample floating and one of the landed tips grounded. In addition, single-atom manipulation was achieved with a modified tip design, which is comparable to a conventional LT-STM.
Keywords: low temperature ultra-high vacuum four scanning tunneling microscope, nanoelectronics, point contact, single atom manipulation, tunneling resistance
Procedia PDF Downloads 280
2387 Efficient Estimation for the Cox Proportional Hazards Cure Model
Authors: Khandoker Akib Mohammad
Abstract:
While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest, and they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and the latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood
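The mixture cure structure described above can be made concrete with a short numerical sketch. The Python code below evaluates the population survival function S_pop(t | x, z) = 1 - pi(z) + pi(z) * S_u(t | x), with a logistic incidence part and a Cox PH latency part; the baseline survival, covariates and coefficients are illustrative values, not estimates from the melanoma data or the SMCURE package.

```python
import numpy as np

def population_survival(t, z, x, beta_inc, beta_lat, s0):
    """Mixture cure model: S_pop(t) = 1 - pi(z) + pi(z) * S_u(t | x).

    pi(z)      : probability of being uncured, logistic in covariates z
    S_u(t | x) : Cox PH survival of the uncured, S0(t) ** exp(x' beta_lat)
    s0         : callable returning the baseline survival S0(t)
    """
    pi = 1.0 / (1.0 + np.exp(-(z @ beta_inc)))   # incidence (logistic) part
    s_u = s0(t) ** np.exp(x @ beta_lat)          # latency (Cox PH) part
    return 1.0 - pi + pi * s_u

# Toy check with an exponential baseline and made-up coefficients
s0 = lambda t: np.exp(-0.1 * t)
z = np.array([1.0, 0.5])     # intercept + one incidence covariate
x = np.array([0.5])          # one latency covariate
print(population_survival(5.0, z, x,
                          beta_inc=np.array([-0.2, 0.8]),
                          beta_lat=np.array([0.3]), s0=s0))
```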
Procedia PDF Downloads 144
2386 Practical Method for Failure Prediction of Mg Alloy Sheets during Warm Forming Processes
Authors: Sang-Woo Kim, Young-Seon Lee
Abstract:
An important concern in metal forming, even at elevated temperatures, is whether a desired deformation can be accomplished without any failure of the material. A detailed understanding of the critical condition for crack initiation provides not only the workability limit of a material but also a guideline for process design. This paper describes the utilization of ductile fracture criteria in conjunction with the finite element method (FEM) for predicting the onset of fracture in warm metal working processes of magnesium alloy sheets. Critical damage values for various ductile fracture criteria were determined from uniaxial tensile tests and were expressed as functions of strain rate and temperature. In order to find the best criterion for failure prediction, Erichsen cupping tests under isothermal conditions and FE simulations combined with ductile fracture criteria were carried out. Based on the plastic deformation histories obtained from the FE analyses of the Erichsen cupping tests and the critical damage value curves, the initiation time and location of fracture were predicted under a bi-axial tensile condition. The results were compared with experimental results and the best criterion was recommended. In addition, the proposed methodology was used to predict the onset of fracture in non-isothermal deep drawing processes using an irregularly shaped blank, and the results were verified experimentally.
Keywords: magnesium, AZ31 alloy, ductile fracture, FEM, sheet forming, Erichsen cupping test
Procedia PDF Downloads 373
2385 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we propose an approach for detecting the behavior of the viewers of a TV program in a non-controlled environment. The experiment we propose is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). Twenty-three participants were observed while watching their TV programs during three phases: before, during and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer Theory (DST) in two phases. The first phase is to dynamically approximate the mass functions using an approach based on the correlation coefficient. The second phase is to calculate the approximate mass functions. To approximate the mass functions, two approaches were tested: the first approach was to divide each feature's data space into cells, each one having a specific probability distribution over the behaviors. The probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors through the use of classifier algorithms and to add uncertainty to the prediction based on the uncertainty of the model. Results showed that mixing the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95% and 96% for the first, second and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with usual connected objects, taking into account the various uncertainties that can be generated.
Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment
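For readers unfamiliar with the evidence-combination step, the following minimal Python sketch applies Dempster's rule of combination to two mass functions defined over a toy frame of discernment ("watching" vs "away"); the sensor names and mass values are hypothetical and are not taken from the study.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Toy example: two connected objects assigning belief over viewing behaviors
WATCHING, AWAY = frozenset({"watching"}), frozenset({"away"})
EITHER = WATCHING | AWAY
m_phone = {WATCHING: 0.6, AWAY: 0.1, EITHER: 0.3}
m_watch = {WATCHING: 0.5, AWAY: 0.2, EITHER: 0.3}
print(dempster_combine(m_phone, m_watch))
```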
Procedia PDF Downloads 229
2384 A Multi-Science Study of Modern Synergetic War and Its Information Security Component
Authors: Alexander G. Yushchenko
Abstract:
From a multi-science point of view, we analyze threats to security resulting from the globalization of the international information space and from the information and communication aggression of Russia. A definition of Ruschism is formulated as an ideology supporting aggressive actions of modern Russia against the Euro-Atlantic community. Stages of the hybrid war Russia is leading against Ukraine are described, including the elements of subversive activity of the special services, the activation of the military phase and the gradual shift of the focus of confrontation to the realm of information and communication technologies. We reveal the emergence of a threat to democratic states resulting from the destabilizing impact of a target state's mass media and social networks being exploited by Russian secret services under a freedom-of-speech disguise. Thus, we underline the vulnerability of the cyber and information security of the network society with regard to hybrid war. We propose to define the latter as a synergetic war. Our analysis is supported by long-term qualitative monitoring of the representation of top state officials on popular TV channels and Facebook. From the memetics point of view, we have detected a destructive psycho-information technology used by the Kremlin, a kind of information catastrophe, the essence of which is explained in detail. In the conclusion, a comprehensive plan for the information protection of the public consciousness and mentality of Euro-Atlantic citizens from the aggression of the enemy is proposed.
Keywords: cyber and information security, hybrid war, psycho-information technology, synergetic war, Ruschism
Procedia PDF Downloads 134
2383 Phosphate Sludge Ceramics: Effects of Firing Cycle Parameters on Technological Properties and Ceramic Suitability
Authors: Mohamed Loutou, Mohamed Hajjaji, Mohamed Ait Babram, Mohammed Mansori, Rachid Hakkou, Claude Favotto
Abstract:
More than 26.4 million tons of phosphates were produced by the phosphate industries in Morocco (2010), generating huge amounts of sludge by flocculation during ore beneficiation. This sludge is stored at the end of the process in open-air ponds. Its accumulation and storage may have impacts at several scales, such as on groundwater and on human beings. For this purpose, an efficient way to use it in the field of ceramics is proposed. The as-received sludge and a clay-rich sediment have been studied in terms of their chemical, mineralogical and microstructural characteristics using various analytical methods. Several formulations were prepared by mixing the sludge with the binder and shaping them in the form of granules. After being dried at 105 °C, the samples were heated in the range of 900-1200 °C. As well as the ceramic properties (firing shrinkage, water absorption, total porosity and compressive strength), the microstructure has been investigated using X-ray diffraction, scanning electron microscopy and Fourier transform infrared spectroscopy. The relations between the properties and the operating factors were formulated using the design of experiments (DOE). Gehlenite was the only neo-formed phase in the sintered samples. SEM micrographs revealed the presence of nanometric stains. Based on the RSM results, all factors had positive effects on firing shrinkage, compressive strength and total porosity. However, they manifested opposite effects on density and water absorption.
Keywords: phosphate sludge, clay, ceramic properties, granule
Procedia PDF Downloads 505
2382 Systematic Examination of Methods Supporting the Social Innovation Process
Authors: Mariann Veresne Somosi, Zoltan Nagy, Krisztina Varga
Abstract:
Innovation is the key element of economic development and a key factor in social processes. Technical innovations can be identified as prerequisites and causes of social change and cannot be created without the renewal of society. The study of social innovation can be characterised as one of the significant research areas of our day. The study's aim is to identify the process of social innovation, which can be defined by input, transformation, and output factors. This approach divides the social innovation process into three parts: situation analysis, implementation, and follow-up. The methods associated with each stage of the process are illustrated by the chronological line of social innovation. In this study, we have sought to present methodologies that support long- and short-term decision-making, that are easy to apply, that have different complementary content, and that are well visualised for different user groups. When applying the methods, the reference objects are different: county, district, settlement, or specific organisation. The solution proposed by the study supports the development of a methodological combination adapted to different situations. Having reviewed metric and conceptualisation issues, we wanted to develop a methodological combination, along with a change management logic, suitable for structured support to the generation of social innovation in the case of a locality or a specific organisation. In addition to a theoretical summary, in the second part of the study we give a non-exhaustive picture of the two counties located in the north-eastern part of Hungary through specific analyses and case descriptions.
Keywords: factors of social innovation, methodological combination, social innovation process, supporting decision-making
Procedia PDF Downloads 155
2381 Hydrodynamic and Sediment Transport Analysis of Computational Fluid Dynamics Designed Flow Regulating Liner (Smart Ditch)
Authors: Saman Mostafazadeh-Fard, Zohrab Samani, Kenneth Suazo
Abstract:
Agricultural ditch liners are used to prevent soil erosion and reduce seepage losses. This paper introduces an approach to validate the computational fluid dynamics (CFD) platform FLOW-3D code and its use to design a flow-regulating corrugated agricultural ditch liner system (Smart Ditch (SM)). Hydrodynamic and sediment transport analyses were performed on the proposed liner flow using the CFD platform FLOW-3D code. The code's hydrodynamic and scour and sediment transport models were calibrated and validated using lab data with accuracies of 94% and 95%, respectively. The code was then used to measure the hydrodynamic parameters of sublayer turbulent intensity, kinetic energy, dissipation, and packed sediment mass normalized with respect to sublayer flow velocity. Sublayer turbulent intensity, kinetic energy, and dissipation in the SM flow were significantly higher than in the CR flow. An alternative corrugated liner was also designed, and sediment transport was measured and compared to the SM and CR flows. Normalized packed sediment mass with respect to average sublayer flow velocity was 27.8% lower in the alternative flow compared to the SM flow. The CFD platform FLOW-3D code could effectively be used to design corrugated ditch liner systems and perform hydrodynamic and sediment transport analysis under various corrugation designs.
Keywords: CFD, hydrodynamic, sediment transport, ditch, liner design
Procedia PDF Downloads 122
2380 Applying Serious Game Design Frameworks to Existing Games for Integration of Custom Learning Objectives
Authors: Jonathan D. Moore, Mark G. Reith, David S. Long
Abstract:
Serious games (SGs) have been shown to be an effective teaching tool in many contexts. Because of the success of SGs, several design frameworks have been created to expedite the process of making original serious games to teach specific learning objectives (LOs). Even with these frameworks, the time required to create a custom SG from conception to implementation can range from months to years. Furthermore, it is even more difficult to design a game framework that allows an instructor to create customized game variants supporting multiple LOs within the same field. This paper proposes a refactoring methodology to apply the theoretical principles from well-established design frameworks to a pre-existing serious game. The expected result is a generalized game that can be quickly customized to teach LOs not originally targeted by the game. This methodology begins by describing the general components in a game, then uses a combination of two SG design frameworks to extract the teaching elements present in the game. The identified teaching elements are then used as the theoretical basis to determine the range of LOs that can be taught by the game. This paper evaluates the proposed methodology by presenting a case study of refactoring the serious game Battlespace Next (BSN) to teach joint military capabilities. The range of LOs that can be taught by the generalized BSN is identified, and examples of creating custom LOs are given. Survey results from users of the generalized game are also provided. Lastly, the expected impact of this work is discussed and a road map for future work and evaluation is presented.
Keywords: serious games, learning objectives, game design, learning theory, game framework
Procedia PDF Downloads 115
2379 Mobile Schooling for the Most Vulnerable Children on the Street: An Innovation
Authors: Md. Shakhawat Ullah Chowdhury
Abstract:
The mobile school is an innovative methodology in non-formal education that uses theatre for education to increase access to appropriate basic education for children during conflict. The continuous exposure to harsh environments and the nature of the lifestyles of children in conflict make them vulnerable. However, the mobile school initiative takes into consideration the mobile lifestyle of children in conflict. Schools are provided in the pocket areas of the street children, with portable chalkboards, tins of books and materials, as communities move. Teaching is multi-grade to ensure all children in the community benefit. The established mobile schools focus on basic literacy and numeracy skills in keeping with the traditions of the communities. The school teachers are selected by the community and trained by a theatre activist. These teachers continue to live and move with the community and provide continuous education for children in conflict. The model proposes holistic teamwork to deliver education-focused services to the street children's pocket areas, with the team itself being mobile. The team consists of three members: an educator (theatre worker), a psychological counsellor and a paramedic. The mobile team is responsible for educating street children and also performs dramas specially produced on the basis of the national curriculum and awareness issues for street children. Children enjoy the plays and learn life skills as well as basic literacy and numeracy skills, which may be a pillar of humanitarian aid during conflict.
Keywords: vulnerable, children in conflict, mobile schooling, child-friendly
Procedia PDF Downloads 433
2378 Enhanced Magnetoelastic Response near Morphotropic Phase Boundary in Ferromagnetic Materials: Experimental and Theoretical Analysis
Authors: Murtaza Adil, Sen Yang, Zhou Chao, Song Xiaoping
Abstract:
The morphotropic phase boundary (MPB) has recently attracted constant interest in ferromagnetic systems for obtaining an enhanced, large magnetoelastic response. In the present study, the structural and magnetoelastic properties of the MPB-involved ferromagnetic Tb1-xGdxFe2 (0≤x≤1) system have been investigated. The change of the easy magnetic direction from <111> to <100> with increasing x up to the MPB composition of x=0.9 is detected by step-scanned [440] synchrotron X-ray diffraction reflections. The Gd substitution for Tb changes the composition for the anisotropy compensation near the MPB composition of x=0.9, which was confirmed by the analysis of detailed scanned XRD, magnetization curves and the calculation of the first anisotropy constant K1. The spin configuration diagram accompanied by the different crystal structures for Tb1-xGdxFe2 was designed. The calculated first anisotropy constant K1 shows a minimum value at the MPB composition of x=0.9. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant, │λS∕K1│, appears at the MPB composition, which makes it a potential material for magnetostrictive applications. Based on the experimental results, a theoretical approach was also proposed to show that the facilitated magnetization rotation and enhanced magnetoelastic effect near the MPB composition are a consequence of the anisotropic flattening of the free energy of the ferromagnetic crystal. Our work indicates the universal existence of the MPB in ferromagnetic materials, which is important for the substantial improvement of magnetic and magnetostrictive properties and may provide a new route to develop advanced functional materials.
Keywords: free energy, magnetic anisotropy, magnetostriction, morphotropic phase boundary (MPB)
Procedia PDF Downloads 275
2377 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters
Authors: Satish Kumar Peddapelli
Abstract:
This paper presents advances in pulse width modulation techniques, which refer to methods of carrying information on a train of pulses, with the information encoded in the width of the pulses. Pulse width modulation is used to control the inverter output voltage. This is done by exercising the control within the inverter itself, by adjusting the ON and OFF periods of the inverter. With a fixed DC input voltage, we obtain an AC output voltage. In variable-speed AC motors, the AC output voltage from a constant DC voltage is obtained by using an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, different circuit configurations, namely multilevel inverters, have become popular and have received considerable interest from researchers. A fast Space-Vector Pulse Width Modulation (SVPWM) method for a five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters. In turn, each of these six space vector diagrams of a three-level inverter is decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining necessary procedures for the three-level SVPWM are done as for a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time. It can also be applied to multilevel inverters above the five-level. The experimental setup for a three-level diode-clamped inverter was developed using a TMS320LF2407 DSP controller, and the experimental results are analysed.
Keywords: five-level inverter, space vector pulse width modulation, diode clamped inverter, electrical engineering
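Since the five-level method ultimately reduces each sub-diagram to a conventional two-level SVPWM problem, the sketch below illustrates that final step: sector identification and dwell-time calculation for a two-level inverter, using the standard textbook formulas. It is a simplified Python illustration; the reference voltage, DC-link voltage and switching period are arbitrary example values, not parameters of the experimental setup.

```python
import numpy as np

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Two-level SVPWM: sector number and dwell times for a reference vector.

    v_ref : magnitude of the reference voltage vector (V)
    theta : angle of the reference vector in rad, 0 <= theta < 2*pi
    v_dc  : DC-link voltage (V)
    t_s   : switching period (s)
    """
    sector = int(theta // (np.pi / 3)) + 1          # sectors 1..6
    theta_local = theta - (sector - 1) * np.pi / 3  # angle inside the sector
    k = np.sqrt(3) * t_s * v_ref / v_dc
    t1 = k * np.sin(np.pi / 3 - theta_local)        # first adjacent active vector
    t2 = k * np.sin(theta_local)                    # second adjacent active vector
    t0 = t_s - t1 - t2                              # zero-vector time
    return sector, t1, t2, t0

# Example: 100 V reference at 40 degrees, 400 V DC link, 100 us switching period
print(svpwm_dwell_times(100.0, np.deg2rad(40.0), 400.0, 100e-6))
```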
Procedia PDF Downloads 388
2376 Analytical Study: An M-Learning App Reflecting the Factors Affecting Student’s Adoption of M-Learning
Authors: Ahmad Khachan, Ahmet Ozmen
Abstract:
This study aims to introduce a mobile bite-sized learning concept: a mobile application with social network motivation factors that will encourage students to practice critical thinking, improve analytical skills and learn knowledge sharing. We do not aim to propose another e-learning or distance-learning tool like Moodle and Edmodo; instead, we introduce a mobile learning tool called the Interactive M-learning Application. The tool reconstructs and strengthens the bonds between educators and learners and provides a foundation for integrating mobile devices into education. The application allows learners to stay connected all the time, share ideas, ask questions and learn from each other. It is built on Android, since Android has the largest platform share in the world and dominated the market with a 74.45% share in 2018. We have chosen the Google Firebase server for hosting because of its flexibility, ease of hosting and real-time update capabilities. The proposed m-learning tool was offered to four groups of university students in different majors. An improvement in the relations between the students, the teachers and the academic institution was obvious. Students' performance improved considerably, along with their analytical and critical skills, and they showed a willingness to adopt mobile learning in class. We also compared our app with another tool in the same class for clarity and reliability of the results. The students' mobile devices were used in this experimental study to provide a diversity of devices and platform versions.
Keywords: education, engineering, interactive software, undergraduate education
Procedia PDF Downloads 155
2375 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have now become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in the domain-specific knowledge graph. Further, owing to several limitations and deficiencies, various domain-specific entity and relationship recognition approaches are far from perfect. Specifically, named entity recognition in the Chinese domain is a critical task for natural language processing applications. However, a bottleneck problem with Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distantly supervised named entity recognition framework is proposed. The framework is divided into two stages: first, the distantly supervised corpus is generated based on the entity linking model of a graph attention neural network; second, the generated corpus is used as the input of the distantly supervised named entity recognition model, which is trained to obtain named entities. The linking model is verified on the CCKS2019 entity linking corpus, and the F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is more suitable for distantly supervised named entity recognition tasks. Finally, the framework is applied in the computer field, and the results show that it can obtain domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
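A minimal illustration of the distant-supervision idea, generating a labelled corpus by matching a knowledge-base dictionary against raw text, is sketched below in Python. It is a simplification of the first stage only (the entity linking model based on a graph attention network is not reproduced), and the mini knowledge base and sentence are hypothetical.

```python
import re

def distant_label(sentence, kb_entities):
    """Produce character-level BIO tags for a sentence by matching entity
    mentions from a knowledge-base dictionary (a crude distant-supervision step)."""
    tags = ["O"] * len(sentence)
    # Match longer entity names first so shorter mentions do not clobber them.
    for name, etype in sorted(kb_entities.items(), key=lambda kv: -len(kv[0])):
        for m in re.finditer(re.escape(name), sentence):
            s, e = m.span()
            if all(t == "O" for t in tags[s:e]):   # keep earlier (longer) matches
                tags[s] = f"B-{etype}"
                for i in range(s + 1, e):
                    tags[i] = f"I-{etype}"
    return list(zip(sentence, tags))

# Hypothetical mini knowledge base for the computer domain
kb = {"图数据库": "TECH", "知识图谱": "TECH"}
print(distant_label("知识图谱通常存储在图数据库中", kb))
```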
Procedia PDF Downloads 95
2374 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera. It predicts how far each pixel in the 2D image is from the capturing point. There are some important Monocular Depth Estimation (MDE) studies that are based on Vision Transformers (ViT). We benchmark three major studies. The first work aims to build a simple and powerful foundation model that deals with any image under any condition. The second work proposes a method that mixes multiple datasets during training together with a robust training objective. The third work combines generalization performance with state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Thanks to this technology, our dataset images are not blurred and poorly detailed like normal thermal images. On the other hand, they are not captured under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which has not been done before. Additionally, we propose an image enhancement deep learning model for thermal data. This model helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
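Benchmarking monocular depth estimators is usually reported with a small set of standard error metrics. The Python sketch below computes AbsRel, RMSE and the delta threshold accuracies on valid pixels; it is a generic illustration of such metrics, and the synthetic depths are placeholders rather than values from the MSX dataset.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Standard monocular-depth benchmarking metrics on valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask].astype(np.float64), gt[mask].astype(np.float64)
    abs_rel = np.mean(np.abs(pred - gt) / gt)              # mean absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))               # root mean squared error
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    delta1 = np.mean(ratio < 1.25)                          # threshold accuracy
    delta2 = np.mean(ratio < 1.25 ** 2)
    return {"AbsRel": abs_rel, "RMSE": rmse, "d<1.25": delta1, "d<1.25^2": delta2}

# Toy check with synthetic depths (metres)
gt = np.random.uniform(1.0, 10.0, size=(4, 4))
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)
print(depth_metrics(pred, gt))
```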
Procedia PDF Downloads 32
2373 An Application of Fuzzy Analytical Network Process to Select a New Production Base: An AEC Perspective
Authors: Walailak Atthirawong
Abstract:
By the end of 2015, the Association of Southeast Asian Nations (ASEAN) countries proclaimed their transformation into the next stage of an economic era by having a single market and production base called the ASEAN Economic Community (AEC). One objective of the AEC is to establish ASEAN as a single market and one production base, making ASEAN a highly competitive economic region with new mechanisms. As a result, it will open more opportunities to enterprises in both trade and investment, offering a competitive market of US$2.6 trillion and over 622 million people. Location decisions play a key role in achieving corporate competitiveness. Hence, it may be necessary for enterprises to redesign their supply chains by establishing a new production base with low labor costs, high labor skills and abundant available labor. This strategy will help companies, especially in the apparel industry, to maintain a competitive position in the global market. Therefore, in this paper a generic model for the location selection decision of the Thai apparel industry using the Fuzzy Analytical Network Process (FANP) is proposed. Myanmar, Vietnam and Cambodia are considered as alternative locations, based on interviews with experts in this industry who have planned to enlarge their businesses in AEC countries. The contribution of this paper lies in proposing an approach and model that is more practical and trustworthy for top management in making a decision on location selection.
Keywords: apparel industry, ASEAN Economic Community (AEC), Fuzzy Analytical Network Process (FANP), location decision
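As a rough illustration of how fuzzy pairwise judgments can be turned into criterion weights, the Python sketch below defuzzifies triangular fuzzy comparisons with the graded-mean formula and derives a geometric-mean priority vector. This is a simplified stand-in for the full FANP supermatrix computation, and the criteria and judgment values are invented for illustration, not taken from the expert interviews.

```python
import numpy as np

# Triangular fuzzy numbers (l, m, u) for pairwise criterion comparisons.
# Values below are illustrative only (labor cost vs labor skill vs availability).
tfn = {
    (0, 1): (1, 2, 3),      # labor cost vs labor skill
    (0, 2): (2, 3, 4),      # labor cost vs availability
    (1, 2): (1, 2, 3),      # labor skill vs availability
}

n = 3
M = np.ones((n, n, 3))                      # fuzzy comparison matrix
for (i, j), (l, m, u) in tfn.items():
    M[i, j] = (l, m, u)
    M[j, i] = (1 / u, 1 / m, 1 / l)         # fuzzy reciprocal

# Defuzzify with the graded-mean formula (l + 4m + u) / 6, then take the
# geometric-mean priority vector as a crisp approximation of the weights.
crisp = (M[..., 0] + 4 * M[..., 1] + M[..., 2]) / 6.0
gm = np.prod(crisp, axis=1) ** (1.0 / n)
weights = gm / gm.sum()
print(dict(zip(["labor cost", "labor skill", "availability"], weights.round(3))))
```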
Procedia PDF Downloads 236
2372 Factors that Predict Pre-Service Teachers' Decision to Integrate E-Learning: A Structural Equation Modeling (SEM) Approach
Authors: Mohd Khairezan Rahmat
Abstract:
Driven by the goal of becoming a developed country by the year 2020, the Malaysian government has been proactive in strengthening the integration of ICT into the national educational system. Teacher-education programs have the responsibility of preparing the nation's future teachers by instilling in them the desire, confidence, and ability to fully utilize the potential of ICT in their instruction process. In an effort to fulfill this responsibility, teacher-education programs are beginning to create alternative means for preparing cutting-edge teachers. One of these alternatives is the students' learning portal. In line with this mission, this study investigates the Faculty of Education, University Teknologi MARA (UiTM) pre-service teachers' perceptions of usefulness, attitudes, and abilities toward the usage of the university learning portal, known as iLearn. The study also aimed to predict factors that might hinder the pre-service teachers' decision to use iLearn as their learning platform. Structural Equation Modeling (SEM) was employed to analyze the survey data. The findings suggest that pre-service teachers' successful integration of iLearn was highly influenced by their perception of the usefulness of the system. The findings also suggest that the more familiar the pre-service teachers are with iLearn, the more likely they are to use the system. In light of similar studies, the present findings highlight the importance of understanding users' perceptions of any proposed technology.
Keywords: e-learning, prediction factors, pre-service teacher, structural equation modeling (SEM)
Procedia PDF Downloads 339
2371 A Data Driven Methodological Approach to Economic Pre-Evaluation of Reuse Projects of Ancient Urban Centers
Authors: Pietro D'Ambrosio, Roberta D'Ambrosio
Abstract:
The upgrading of the architectural and urban heritage of historic urban centers almost always involves planning for the reuse and refunctionalization of the structures. Such interventions have complexities linked to the need to take into account the urban and social context in which the structure is inserted, as well as its intrinsic characteristics, such as historical and artistic value. To these, of course, we have to add the need to make a preliminary estimate of recovery costs and, more generally, to assess the economic and financial sustainability of the whole re-socialization project. Particular difficulties are encountered during the pre-assessment of costs, since it is often impossible to perform analytical surveys and structural tests, owing both to structural conditions and to obvious cost and time constraints. The methodology proposed in this work, based on a multidisciplinary and data-driven approach, is aimed at obtaining, at very low cost, reasonable economic evaluations of the interventions to be carried out. In addition, the specific features of the approach used, derived from the predictive analysis techniques typically applied in complex IT domains (big data analytics), allow the evaluation process to indirectly produce a shared database that can be used on a generalized basis to estimate other such projects. This makes the methodology particularly suitable in cases where massive intervention across entire areas of historic city centers is expected. The methodology has been partially tested during a study aimed at assessing the feasibility of a project for the reuse of the monumental complex of San Massimo, located in the historic center of Salerno, and is being further investigated.
Keywords: evaluation, methodology, restoration, reuse
Procedia PDF Downloads 187
2370 Immunoinformatic Design and Evaluation of an Epitope-Based Tetravalent Vaccine against Human Hand, Foot, and Mouth Disease
Authors: Aliyu Maje Bello, Yaowaluck Maprang Roshorm
Abstract:
Hand, foot, and mouth disease (HFMD) is a highly contagious viral infection affecting mostly infants and children. Although Enterovirus A71 (EV71) is usually the major causative agent of HFMD, other enteroviruses such as coxsackievirus A16, A10, and A6 have also been found in some of the recent outbreaks. The commercially available vaccines have demonstrated their effectiveness against EV71 infection only and offer no protection against other enteroviruses. To address the limitation of the monovalent EV71 vaccine, the present study designed a tetravalent vaccine against the four major enteroviruses causing HFMD and primarily evaluated the designed vaccine using an immunoinformatics approach. The immunogen was designed to contain the EV71 VP1 protein and multiple reported epitopes from all four distinct enteroviruses and was thus designated a tetravalent vaccine. The 3D structure of the designed tetravalent vaccine was modeled, refined, and validated. Epitope screening showed the presence of B-cell, CTL, CD4 T cell, and IFN epitopes with broad applicability among the Asian population. Docking analysis confirmed stable and strong binding interactions between the immunogen and the immune receptor, the B-cell receptor (BCR). In silico cloning and immune simulation analyses indicated high efficiency and sufficient expression of the vaccine candidate in humans. Overall, the promising results obtained from the in silico studies of the proposed tetravalent vaccine make it a potential candidate worth further experimental validation.
Keywords: enteroviruses, coxsackieviruses, hand foot and mouth disease, immunoinformatics, tetravalent vaccine
Procedia PDF Downloads 72
2369 Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques
Authors: Tomas Trainys, Algimantas Venckauskas
Abstract:
The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and give the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and prevents possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, namely convolutional neural network (CNN) and support vector machine (SVM) methods, for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method
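The feature-level fusion step can be pictured with a small Python sketch: feature vectors extracted from two representations of the same finger-vein modality are concatenated and fed to an SVM classifier. The random feature matrices stand in for real preprocessed vein features, and the scikit-learn SVM is used here only as a generic classifier, not as the authors' exact pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrices from two representations (instances) of the
# same finger-vein modality, e.g. two acquisitions per subject.
rng = np.random.default_rng(0)
n_samples = 200
feats_a = rng.normal(size=(n_samples, 64))    # features from representation A
feats_b = rng.normal(size=(n_samples, 64))    # features from representation B
labels = rng.integers(0, 10, size=n_samples)  # subject identities

# Feature-level fusion: concatenate the per-representation feature vectors.
fused = np.hstack([feats_a, feats_b])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(fused[:150], labels[:150])
print("toy accuracy:", clf.score(fused[150:], labels[150:]))
```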
Procedia PDF Downloads 150
2368 Signals Affecting Crowdfunding Success for Australian Social Enterprises
Authors: Mai Yen Nhi Doan, Viet Le, Chamindika Weerakoon
Abstract:
Social enterprises have emerged as sustainable organisations that deliver social achievement along with long-term financial advancement. However, recorded financial barriers have urged social enterprises to turn to other financing methods due to their misaligned ideology with traditional financing capitalists, and crowdfunding can be a promising alternative. Previous studies of crowdfunding have inadequately addressed crowdfunding for social enterprises, with conflicting results due to the analysis of signals in isolation rather than in combination, using data from platforms that do not support social enterprises. Extending signalling theory, this study suggests that crowdfunding success results from the collaboration between costly and costless signals. The proposed conceptual framework describes the interaction between the costly signals of "organisational information", "social entrepreneur's credibility" and "third-party endorsement" and costless signals, namely various sub-signals under the "campaign preparedness" signal, in achieving crowdfunding success. Using Qualitative Comparative Analysis, this study examined 45 crowdfunding campaigns run by Australian social enterprises on StartSomeGood and Chuffed. The analysis found that different combinations of costly and costless signals can lead to crowdfunding success, allowing social enterprises to adopt combinations of signals suited to their context. The costless signal of campaign preparedness is fundamental for success, though different costless sub-signals under campaign preparedness can interact with different costly signals for the desired outcome. The third-party endorsement signal was found to be the necessary signal for crowdfunding success for Australian social enterprises.
Keywords: crowdfunding, qualitative comparative analysis (QCA), signalling theory, social enterprises
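At the core of Qualitative Comparative Analysis are set-theoretic consistency and coverage measures. The Python sketch below computes them for one hypothetical signal combination X against the outcome Y using fuzzy-set membership scores; the membership values are invented and do not come from the 45 campaigns analysed.

```python
import pandas as pd

# Toy fuzzy-set membership scores for a handful of hypothetical campaigns:
# X is a candidate signal combination, Y is crowdfunding success.
data = pd.DataFrame({
    "X": [0.9, 0.7, 0.4, 0.8, 0.2],
    "Y": [0.8, 0.9, 0.3, 0.7, 0.4],
})

def consistency(x, y):
    """Degree to which X is a subset of Y (sufficiency consistency)."""
    return pd.concat([x, y], axis=1).min(axis=1).sum() / x.sum()

def coverage(x, y):
    """Share of Y's membership explained by X."""
    return pd.concat([x, y], axis=1).min(axis=1).sum() / y.sum()

print("consistency:", round(consistency(data["X"], data["Y"]), 3))
print("coverage:", round(coverage(data["X"], data["Y"]), 3))
```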
Procedia PDF Downloads 103
2367 Detection and Classification of Mammogram Images Using Principle Component Analysis and Lazy Classifiers
Authors: Rajkumar Kolangarakandy
Abstract:
Feature extraction and selection form the primary part of any mammogram classification algorithm. The choice of features, attributes or measurements has an important influence on any classification system. Discrete Wavelet Transformation (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after the decomposition of mammogram images using wavelet transformations have a high dimension. Even though the features are high in dimension, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transformation coefficients and lazy classifiers is proposed. The classification is accomplished on two different levels. In the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. In the second level, all the abnormal mammogram ROIs are further classified into benign and malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimensionality reduction.
Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL
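A minimal version of the described pipeline, wavelet decomposition, PCA reduction, then an instance-based (lazy) classifier, is sketched below in Python, assuming PyWavelets and scikit-learn. k-nearest neighbours stands in for the Weka lazy classifiers (KStar, IBL, LWL), and the random arrays are placeholders for real mammogram ROIs.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(roi, wavelet="db4", level=2):
    """Flattened approximation coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

# Synthetic stand-in for mammogram ROIs (64x64) with binary labels.
rng = np.random.default_rng(1)
rois = rng.normal(size=(120, 64, 64))
labels = rng.integers(0, 2, size=120)

X = np.array([dwt_features(r) for r in rois])
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
model.fit(X[:90], labels[:90])
print("toy accuracy:", model.score(X[90:], labels[90:]))
```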
Procedia PDF Downloads 335
2366 The Effects of Self-Efficacy on Challenge and Threat States
Authors: Nadine Sammy, Mark Wilson, Samuel Vine
Abstract:
The Theory of Challenge and Threat States in Athletes (TCTSA) states that self-efficacy is an antecedent of challenge and threat. These states result from conscious and unconscious evaluations of situational demands and personal resources and are represented by both cognitive and physiological markers. Challenge is considered a more adaptive stress response as it is associated with a more efficient cardiovascular profile, as well as better performance and attention effects compared with threat. Self-efficacy is proposed to influence challenge/threat because an individual’s belief that they have the skills necessary to execute the courses of action required to succeed contributes to a perception that they can cope with the demands of the situation. This study experimentally examined the effects of self-efficacy on cardiovascular responses (challenge and threat), demand and resource evaluations, performance and attention under pressurised conditions. Forty-five university students were randomly assigned to either a control (n=15), low self-efficacy (n=15) or high self-efficacy (n=15) group and completed baseline and pressurised golf putting tasks. Self-efficacy was manipulated using false feedback adapted from previous studies. Measures of self-efficacy, cardiovascular reactivity, demand and resource evaluations, task performance and attention were recorded. The high self-efficacy group displayed more favourable cardiovascular reactivity, indicative of a challenge state, compared with the low self-efficacy group. The former group also reported high resource evaluations, but no task performance or attention effects were detected. These findings demonstrate that levels of self-efficacy influence cardiovascular reactivity and perceptions of resources under pressurised conditions.
Keywords: cardiovascular, challenge, performance, threat
Procedia PDF Downloads 232
2365 Design, Fabrication, and Study of Droplet Tube Based Triboelectric Nanogenerators
Authors: Yana Xiao
Abstract:
The invention of triboelectric nanogenerators (TENGs) provides an effective approach to sustainable power generation. Liquid-solid interface-based TENGs have been researched, by virtue of their lower friction, for harvesting energy from raindrops, rivers, and oceans in the form of water flows. However, TENGs based on droplets have rarely been investigated. In this study, we have proposed a new kind of droplet tube-based TENG (DT-TENG) with free-standing and reformative grating electrodes. Both straight and curved DT-TENGs were designed, fabricated, and evaluated, including a straight-tube TENG with 27 electrodes and curved-tube TENGs with a 25 cm radius of curvature, at inclinations of 30°, 45° and 60°, respectively. Different materials and hydrophobicity treatments for the tubes have also been studied, together with a discussion of the mechanism and applications of DT-TENGs. As different types of liquid give different energy performance, this kind of DT-TENG can potentially be used in laboratories to identify liquids or solvents. In addition, a smart fishing float is contrived, which can recognize different levels of movement speed brought about by different weights and generate corresponding electric signals to remind the angler. The electricity generation performance when using a PVC helix tube around a cylinder is similar to that of the straight configuration at an inclination of 45° in this experiment. This new structure changes the direction of a water drop or flow without losing kinetic energy, which makes it possible to utilize the helix-tube TENG to harvest energy from different building morphologies.
Keywords: triboelectric nanogenerator, energy harvest, liquid tribomaterial, structure innovation
Procedia PDF Downloads 90
2364 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades
Authors: E. Tandis, E. Assareh
Abstract:
The design of the optimal shape of MW wind turbine blades is achieved in a number of cases through evolutionary algorithms associated with mathematical modeling (Blade Element Momentum Theory). Evolutionary algorithms, among the optimization methods, enjoy many advantages, particularly in stability. However, they usually need a large number of function evaluations. Since there are a large number of local extremes, the optimization method has to find the global extreme accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA). This algorithm is meant to design the optimal shape of MW wind turbine blades. The current method employs crossover and neighborhood searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide a method with good performance in accuracy and convergence speed. Different blade designs, twenty-one to be exact, were considered based on the chord length, twist angle and tip speed ratio using GA results. They were compared with BA and GBBA optimum design results targeting the power coefficient and solidity. The results suggest that the final shape, obtained by the proposed hybrid algorithm, performs better compared to either BA or GA. Furthermore, the accuracy and convergence speed increase when the GBBA is employed.
Keywords: Blade Design, Optimization, Genetic Algorithm, Bees Algorithm, Genetic-Based Bees Algorithm, Large Wind Turbine
Procedia PDF Downloads 316
2363 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data. Therefore, multi-source data linking is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large. This issue is considered a major challenge of multi-source data. An ensemble is a versatile machine learning model in which learning techniques can work in parallel with big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
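One common way to build a clustering ensemble, used here as a simplified stand-in for the FOMOCE model, is to accumulate base clusterings from each source into a co-association matrix and extract a consensus partition from it. The Python sketch below, assuming scikit-learn and SciPy, works on synthetic data; it does not implement the fuzzy multi-objective or dark-knowledge components.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Two hypothetical "sources": different feature subsets of the same objects.
X, _ = make_blobs(n_samples=150, centers=3, n_features=6, random_state=0)
sources = [X[:, :3], X[:, 3:]]

# Base clusterings from each source (varying k to diversify the ensemble).
base_labels = []
for src in sources:
    for k in (2, 3, 4):
        base_labels.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(src))

# Co-association matrix: fraction of base clusterings that co-cluster each pair.
n = X.shape[0]
coassoc = np.zeros((n, n))
for lab in base_labels:
    coassoc += (lab[:, None] == lab[None, :]).astype(float)
coassoc /= len(base_labels)

# Consensus clustering on the co-association matrix turned into a distance.
dist = 1.0 - coassoc
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
final_labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(final_labels))
```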
Procedia PDF Downloads 189
2362 A Gradient Orientation Based Efficient Linear Interpolation Method
Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar
Abstract:
This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-resolution video/image to a high-resolution video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides good edge preservation at very low complexity, so that real-time processing of video frames can be made possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculation of slopes using the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing
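The gist of edge-directed upscaling can be sketched in a few lines of Python (assuming NumPy and SciPy): a bilinear baseline is corrected at strong-edge pixels by averaging along the direction perpendicular to the Prewitt gradient. This is a simplified illustration of the idea, not the paper's exact algorithm, and the threshold and toy image are arbitrary.

```python
import numpy as np
from scipy.ndimage import prewitt, zoom

def edge_aware_upscale(img, edge_thresh=0.1):
    """2x upscaling sketch: bilinear everywhere, then re-estimate pixels that lie
    on strong edges by averaging along the local edge orientation."""
    up = zoom(img, 2, order=1)                     # plain bilinear baseline
    gx, gy = prewitt(up, axis=1), prewitt(up, axis=0)
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)                     # gradient orientation
    out = up.copy()
    for y in range(1, up.shape[0] - 1):
        for x in range(1, up.shape[1] - 1):
            if mag[y, x] < edge_thresh * mag.max():
                continue                           # uniform region: keep bilinear
            # Edge direction is perpendicular to the gradient; average along it.
            dx, dy = -np.sin(theta[y, x]), np.cos(theta[y, x])
            xi, yi = int(round(x + dx)), int(round(y + dy))
            xj, yj = int(round(x - dx)), int(round(y - dy))
            out[y, x] = 0.5 * (up[yj, xj] + up[yi, xi])
    return out

img = np.outer(np.linspace(0, 1, 32), np.ones(32))  # toy ramp image
print(edge_aware_upscale(img).shape)                 # (64, 64)
```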
Procedia PDF Downloads 261
2361 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?
Authors: Ruth Hegarty, Noel Connaughton
Abstract:
Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to only provide information on the issues that are really critical in order to achieve the organisation's goals for sustainability and manage its impact on environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world's stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international sustainability leader corporates to identify key sustainability issues. The research methodology involves performing a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps in the information disclosed in corporate sustainability reports versus the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date: Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified and ranked. Data on the identification of key stakeholders and the degree and nature of engagement for determining issues and their weightings are also lacking. Generally, little to no data is provided on the algorithms used to score an issue. Research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency
Procedia PDF Downloads 307
2360 Optimisation of Pin Fin Heat Sink Using Taguchi Method
Authors: N. K. Chougule, G. V. Parishwad
Abstract:
The pin fin heat sink is a novel heat transfer device that transfers a large amount of heat with very small temperature differences, and it also possesses largely uniform cooling characteristics. Pin fins are widely used as elements that provide increased cooling for electronic devices. Increasing demands regarding the performance of such devices can be observed due to the increasing heat production density of electronic components. For this reason, extensive work is being carried out to select and optimize pin fin elements for increased heat transfer. In this paper, the effects of the design parameters and the optimum design parameters for a pin-fin heat sink (PFHS) under a multi-jet impingement case, with respect to thermal performance characteristics, have been investigated using the Taguchi methodology based on the L9 orthogonal array. Various design parameters, such as the pin-fin array size, the gap between the nozzle exit and the impingement target surface (Z/d), and the air velocity, are explored by numerical experiment. The average convective heat transfer coefficient is considered as the thermal performance characteristic. The analysis of variance (ANOVA) is applied to find the effect of each design parameter on the thermal performance characteristic. The results of the confirmation test with the optimal combination of design parameter levels clearly show that this approach is effective in optimizing the PFHS with respect to the thermal performance characteristic. The analysis of the Taguchi method reveals that all the parameters mentioned above have equal contributions to the performance of heat sink efficiency. Experimental results are provided to validate the suitability of the proposed approach.
Keywords: Pin Fin Heat Sink (PFHS), Taguchi method, CFD, thermal performance
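The Taguchi analysis described above boils down to computing a signal-to-noise ratio per run of the L9 array and comparing mean S/N ratios per factor level. The Python sketch below does this for a larger-the-better characteristic with made-up responses; the factor levels and heat transfer coefficients are illustrative only, not the study's data.

```python
import numpy as np
import pandas as pd

# Illustrative L9-style layout: three factors at three levels and a made-up
# response (average convective heat transfer coefficient, W/m^2K).
l9 = pd.DataFrame({
    "array_size": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "Z_over_d":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "velocity":   [1, 2, 3, 2, 3, 1, 3, 1, 2],
    "h":          [210, 260, 300, 250, 320, 230, 330, 240, 290],
})

# Larger-the-better signal-to-noise ratio: S/N = -10 * log10(mean(1 / y^2)).
l9["sn"] = -10 * np.log10(1.0 / l9["h"] ** 2)

# Main effects: mean S/N ratio at each level of each factor; the level with the
# highest mean is the predicted optimum, and the range ranks factor influence.
for factor in ["array_size", "Z_over_d", "velocity"]:
    means = l9.groupby(factor)["sn"].mean()
    print(factor, means.round(2).to_dict(),
          "range:", round(means.max() - means.min(), 2))
```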
Procedia PDF Downloads 249