Search results for: analytical tools
5297 Towards the Modeling of Lost Core Viability in High-Pressure Die Casting: A Fluid-Structure Interaction Model with 2-Phase Flow Fluid Model
Authors: Sebastian Kohlstädt, Michael Vynnycky, Stephan Goeke, Jan Jäckel, Andreas Gebauer-Teichmann
Abstract:
This paper summarizes the progress of recent computational fluid dynamics research toward the modeling of lost core viability in high-pressure die casting. High-pressure die casting is a process widely employed in the automotive and neighboring industries due to its advantages in casting quality and cost efficiency. Its degrees of freedom are, however, somewhat limited, as it has so far been difficult to use lost cores in the process. This is now changing, and the deployment of lost cores is considered a future growth opportunity for high-pressure die casting companies. Using this technology is nevertheless difficult: the strength of the core material (chiefly salt) is limited, and experiments have shown that the cores will not hold under all circumstances and process designs. For this purpose, the publicly available CFD library foam-extend (OpenFOAM) is used, and two additional fluid models for incompressible and compressible two-phase flow are implemented as fluid solver models in the FSI library, using the volume-of-fluid (VOF) methodology. The necessity of the fluid-structure interaction (FSI) approach is demonstrated on a simple CFD model geometry. The model is benchmarked against analytical models and experimental data. Sufficient agreement is found with the analytical models and good agreement with the experimental data. An outlook on future developments concludes the paper.
Keywords: CFD, fluid-structure interaction, high-pressure die casting, multiphase flow
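As a brief illustration of the volume-of-fluid methodology named in this abstract (the generic textbook formulation, not necessarily the exact equations implemented in foam-extend), a phase fraction α is advected with the flow and defines the mixture properties:

```latex
\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) = 0,
\qquad
\rho = \alpha \rho_1 + (1-\alpha)\rho_2,
\qquad
\mu = \alpha \mu_1 + (1-\alpha)\mu_2,
```

where α = 1 in the melt, α = 0 in the gas phase, and the single momentum equation is solved with the mixture density ρ and viscosity μ.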
Procedia PDF Downloads 332
5296 Analysis of a Faience Enema Found in the Assasif Tomb No. -28- of the Vizier Amenhotep Huy: Contributions to the Study of the Mummification Ritual Practiced in the Theban Necropolis
Authors: Alberto Abello Moreno-Cid
Abstract:
Mummification was the process through which immortality was granted to the deceased, so it was of extreme importance to the Egyptians. The techniques of embalming evolved over the centuries, and specialists created increasingly sophisticated tools. However, due to its eminently religious nature, knowledge about everything related to this practice was jealously guarded, and the testimonies that have survived to our time are scarce. For this reason, embalming instruments found in archaeological excavations are uncommon. The tomb of the Vizier Amenhotep Huy (AT No. -28-), located in the el-Assasif necropolis, which has been excavated since 2009 by the team of the Institute of Ancient Egyptian Studies, has been the scene of discoveries of this type that evidence the existence of mummification practices in this place after the New Kingdom. Clysters, or enemas, are the fundamental tools in the second type of mummification described by the historian Herodotus, used to introduce caustic solutions into the body of the deceased. Nevertheless, such objects have been found in only three locations: the tomb of Ankh-Hor in Luxor, where a copper enema belonging to the prophet of Ammon Uah-ib-Ra came to light; the excavation of the tomb of Menekh-ib-Nekau in Abusir, where another copper example was found; and the excavations in the Bucheum, where two more artifacts were discovered, also made of copper but of different shapes and sizes. These last two were used for the mummification of sacred animals, which is why they vary significantly. Therefore, the object found in tomb No. -28- is the first of these peculiar tools known to be made of faience and the oldest known to date, dated to the Third Intermediate Period (circa 1070-650 B.C.). This paper bases its investigation on the study of those parallels, the material, the current archaeological context, and a full analysis and reconstruction of the object in question.
The key point is the use of faience in the production of this item: creating a device intended for constant use from faience seems at first illogical compared to the other examples, which are made of copper. Faience around the area of Deir el-Bahari had a strong religious component, associated with solar myths and principles of resurrection, connected to the Osirian character of the mummification procedure. The study makes it possible to refute some premises held to be unalterable in Egyptology, verifying the use of this sort of piece, understanding how it was used, and showing that this type of mummification was also applied to the highest social stratum, for which the tools were designed with exceptional quality and religious symbolism.
Keywords: clyster, el-Assasif, embalming, faience enema, mummification, Theban necropolis
Procedia PDF Downloads 110
5295 AI-Assisted Business Chinese Writing: Comparing the Textual Performances Between Independent Writing and Collaborative Writing
Authors: Stephanie Liu Lu
Abstract:
With the proliferation of artificial intelligence tools in the field of education, it is crucial to explore their impact on language learning outcomes. This paper examines the use of AI tools, such as ChatGPT, in practical writing within business Chinese teaching to investigate how AI can enhance practical writing skills and teaching effectiveness. The study involved third and fourth-year university students majoring in accounting and finance from a university in Hong Kong within the context of a business correspondence writing class. Students were randomly assigned to a control group, who completed business letter writing independently, and an experimental group, who completed the writing with the assistance of AI. In the latter, the AI-assisted business letters were initially drafted by the students issuing commands and interacting with the AI tool, followed by the students' revisions of the draft. The paper assesses the performance of both groups in terms of grammatical expression, communicative effect, and situational awareness. Additionally, the study collected dialogue texts from interactions between students and the AI tool to explore factors that affect text generation and the potential impact of AI on enhancing students' communicative and identity awareness. By collecting and comparing textual performances, it was found that students assisted by AI showed better situational awareness, as well as more skilled organization and grammar. However, the research also revealed that AI-generated articles frequently lacked a proper balance of identity and writing purpose due to limitations in students' communicative awareness and expression during the instruction and interaction process. Furthermore, the revision of drafts also tested the students' linguistic foundation, logical thinking abilities, and practical workplace experience. 
Therefore, integrating AI tools and related teaching into the curriculum is key to the future of business Chinese teaching.
Keywords: AI-assistance, business Chinese, textual analysis, language education
Procedia PDF Downloads 57
5294 Theoretical Modeling of Self-Healing Polymers Crosslinked by Dynamic Bonds
Authors: Qiming Wang
Abstract:
Dynamic polymer networks (DPNs) crosslinked by dynamic bonds have received intensive attention because of their special crack-healing capability. Diverse DPNs have been synthesized using a number of dynamic bonds, including dynamic covalent bonds, hydrogen bonds, ionic bonds, metal-ligand coordination, hydrophobic interactions, and others. Despite the promising success in polymer synthesis, the fundamental understanding of their self-healing mechanics is still at the very beginning. In particular, a general analytical model of the interfacial self-healing behaviors of DPNs has not been established. Here, we develop polymer-network-based analytical theories that can mechanistically model the constitutive behaviors and interfacial self-healing behaviors of DPNs. We consider the DPN to be composed of interpenetrating networks crosslinked by dynamic bonds that obey force-dependent chemical kinetics, with network chains following inhomogeneous chain-length distributions. During the self-healing process, the dynamic polymer chains diffuse across the interface to re-form the dynamic bonds, a process modeled by a diffusion-reaction theory. The theories can predict the stress-stretch behaviors of original and self-healed DPNs, as well as the healing strength as a function of healing time. We show that the theoretically predicted healing behaviors can consistently match the documented experimental results of DPNs with various dynamic bonds, including dynamic covalent bonds (diarylbibenzofuranone and olefin metathesis), hydrogen bonds, and ionic bonds. We expect our model to be a powerful tool for the self-healing community to invent, design, understand, and optimize self-healing DPNs with various dynamic bonds.
Keywords: self-healing polymers, dynamic covalent bonds, hydrogen bonds, ionic bonds
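The "force-dependent chemical kinetics" invoked in this abstract is commonly written as a Bell-type law; as a hedged illustration (the authors' exact rate expression may differ), the dissociation rate of a dynamic bond carrying a force f is often taken as

```latex
k_d(f) = k_d^{0}\,\exp\!\left(\frac{f\,\Delta x}{k_B T}\right),
```

where k_d^0 is the force-free dissociation rate, Δx an activation length of the bond, k_B Boltzmann's constant, and T the temperature; applied force thus exponentially accelerates bond rupture, while re-association during healing is governed by the diffusion-reaction step described above.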
Procedia PDF Downloads 187
5293 Development of a Standardization Methodology Assessing the Comfort Performance for Hanok
Authors: Mi-Hyang Lee, Seung-Hoon Han
Abstract:
Korean traditional residences have been built for thousands of years with deep design considerations reflecting social, cultural, and environmental values, but their meaning is vanishing under today's different lifestyles. It is necessary, therefore, to grasp the meaning of the Korean traditional building called Hanok and to help Korean people understand its real advantages. The purpose of this study is to propose a standardization methodology for evaluating the comfort features of Korean traditional houses. This paper also attempts to build an official standard evaluation system and to integrate the aesthetic and psychological values induced by Hanok. Its comfort performance values can be divided into two broad categories, physical and psychological, and fourteen methods have been defined as Korean Standards (KS). For this research, field survey data from representative Hanok types were collected for each method. This study also contains a qualitative in-depth analysis of the Hanok comfort index by professionals using AHP (Analytical Hierarchy Process) and has examined the effect of the methods. As a result, this paper defines which methods provide trustworthy outcomes and how to evaluate the strengths of Hanok in terms of spatial comfort using the suggested procedures for the spatial configuration of traditional dwellings. This study finally proposes an integrated standardization methodology for assessing the comfort performance of Korean traditional residences, and it is expected that it can be used to evaluate residents' living conditions and interior environments, especially those structured with wood materials like Hanok.
Keywords: Hanok, comfort performance, human condition, analytical hierarchy process
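The AHP step mentioned in this abstract can be sketched in a few lines: pairwise comparisons between criteria are reduced to a priority vector (the principal eigenvector) plus a consistency check. The comfort criteria and the pairwise judgments below are illustrative placeholders, not the study's actual data.

```python
import numpy as np

def ahp_weights(pairwise, tol=1e-10, max_iter=1000):
    """Return the AHP priority vector (principal eigenvector) and the
    consistency ratio (CR) of a reciprocal pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(max_iter):          # power iteration
        w_next = A @ w
        w_next /= w_next.sum()
        if np.abs(w_next - w).max() < tol:
            w = w_next
            break
        w = w_next
    lam_max = (A @ w / w).mean()       # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index (subset)
    return w, ci / ri

# Example: three hypothetical comfort criteria (thermal, acoustic,
# daylight), judged pairwise on Saaty's 1-9 scale.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the usual threshold for accepting the expert judgments; here the matrix is nearly consistent and thermal comfort receives the largest weight.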
Procedia PDF Downloads 157
5292 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, affinity, or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years, there has been increasing interest in carrying one of the major mathematical tools for signal and image analysis, Partial Differential Equations (PDEs) and variational methods, over to graphs. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operators, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations of both the infinity-Laplacian and p-Laplacian equations.
Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems
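For readers unfamiliar with the operator named in this abstract, a standard identity in the continuous setting (the graph version studied in the paper is its discrete analogue) shows how the normalized p-Laplacian interpolates between the usual Laplacian and the infinity Laplacian, which is the root of its tug-of-war interpretation:

```latex
\Delta_p^{N} u
:= \frac{1}{p}\,|\nabla u|^{2-p}\,\operatorname{div}\!\left(|\nabla u|^{p-2}\nabla u\right)
= \frac{1}{p}\,\Delta u + \frac{p-2}{p}\,\Delta_\infty^{N} u,
\qquad
\Delta_\infty^{N} u = \frac{\langle D^2 u\,\nabla u,\, \nabla u\rangle}{|\nabla u|^{2}} .
```

For p = 2 this reduces to the classical Laplacian, and as p → ∞ it tends to the normalized infinity Laplacian; the coefficients 1/p and (p-2)/p correspond to the random-walk and tug-of-war components of the game.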
Procedia PDF Downloads 512
5291 D6tions: A Serious Game to Learn Software Engineering Process and Design
Authors: Hector G. Perez-Gonzalez, Miriam Vazquez-Escalante, Sandra E. Nava-Muñoz, Francisco E. Martinez-Perez, Alberto S. Nunez-Varela
Abstract:
The software engineering teaching process has been the subject of many studies. To improve this process, researchers have proposed, on the one hand, merely illustrative techniques in the classroom, such as topic presentations and dynamics between students, and, on the other, attempts to involve students in real projects with companies and institutions to expose them to real software development problems. Simulators and serious games have been used as auxiliary tools to introduce students to topics that are too abstract when presented in the traditional way. Most of these tools cover a limited area of the huge software engineering scope. To address this problem, we have developed D6tions, an educational serious game that simulates the software engineering process and is designed to let students experience the different stages a software engineer (playing the role of project leader, developer, or designer) goes through while participating in a software project. We describe previous approaches to this problem, how D6tions was designed, its rules and directions, and the results we obtained from undergraduate students playing the game.
Keywords: serious games, software engineering, software engineering education, software engineering teaching process
Procedia PDF Downloads 493
5290 Optimization of Lean Methodologies in the Textile Industry Using Design of Experiments
Authors: Ahmad Yame, Ahad Ali, Badih Jawad, Daw Al-Werfalli Mohamed Nasser, Sabah Abro
Abstract:
Industries in general generate a lot of waste. The wool textile company in Baniwalid, Libya, has many complex problems that have led to enormous waste due to the lack of lean strategies, expertise, technical support, and commitment. To successfully address waste at the wool textile company, this study attempts to develop a methodical approach that integrates lean manufacturing tools to optimize performance characteristics such as lead time and delivery. This methodology utilizes Value Stream Mapping (VSM) techniques to identify the process variables that affect production. Once these variables are identified, Design of Experiments (DOE) methodology is used to determine the significantly influential process variables; these variables are then controlled and set at their optimal values to achieve optimal levels of productivity, quality, agility, efficiency, and delivery, and the outputs of the simulation model are analyzed for different lean configurations. The goal of this research is to investigate how the tools of lean manufacturing can be adapted from the discrete to the continuous manufacturing environment and to evaluate their benefits at a specific industrial site.
Keywords: lean manufacturing, DOE, value stream mapping, textiles
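The DOE step described in this abstract can be sketched with a two-level full factorial design and main-effects estimation. The three factors and the lead-time responses below are illustrative placeholders, not the study's actual variables or data.

```python
import itertools

def full_factorial(n_factors):
    """All 2^n runs, with each factor coded at -1/+1 levels."""
    return list(itertools.product([-1, 1], repeat=n_factors))

def main_effects(design, responses):
    """Effect of factor j = mean(response at +1) - mean(response at -1)."""
    n = len(design[0])
    effects = []
    for j in range(n):
        hi = [r for run, r in zip(design, responses) if run[j] == 1]
        lo = [r for run, r in zip(design, responses) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Three hypothetical factors (e.g. cycle time, batch size, operator
# training) give a 2^3 design with 8 runs.
design = full_factorial(3)
# Illustrative lead-time responses (hours) for the 8 runs.
responses = [40, 34, 38, 32, 31, 25, 29, 23]
effects = main_effects(design, responses)
```

A large-magnitude effect marks a significantly influential variable; in this toy data the first factor reduces lead time the most, so it would be the one to control and set at its optimal level.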
Procedia PDF Downloads 455
5289 A Framework for Blockchain Vulnerability Detection and Cybersecurity Education
Authors: Hongmei Chi
Abstract:
The Blockchain has become a necessity for many different societal industries and ordinary lives, including cryptocurrency technology, supply chain, health care, public safety, education, etc. Therefore, training our future blockchain developers to recognize blockchain programming vulnerabilities, and teaching I.T. students cybersecurity, are in high demand. In this work, we propose a framework including learning modules and hands-on labs to guide future I.T. professionals towards developing secure blockchain programming habits and mitigating source code vulnerabilities at the early stages of the software development lifecycle, following the concept of the Secure Software Development Life Cycle (SSDLC). In this research, our goal is to make blockchain programmers and I.T. students aware of the vulnerabilities of blockchains. In summary, we develop a framework that will (1) improve students' skills and awareness of blockchain source code vulnerabilities, detection tools, and mitigation techniques, (2) integrate concepts of blockchain vulnerabilities for I.T. students, and (3) improve future I.T. workers' ability to master the concepts of blockchain attacks.
Keywords: software vulnerability detection, hands-on lab, static analysis tools, vulnerabilities, blockchain, active learning
Procedia PDF Downloads 99
5288 Three-Dimensional Finite Element Analysis of Geogrid-Reinforced Piled Embankments on Soft Clay
Authors: Mahmoud Y. Shokry, Rami M. El-Sherbiny
Abstract:
This paper aims to highlight the role of some parameters that may have a noticeable impact on the numerical analysis/design of embankments. It presents the results of a three-dimensional (3-D) finite element analysis of a monitored earth embankment constructed on a soft clay formation stabilized by cast-in-situ piles, using the software PLAXIS 3D. A comparison between the predicted and monitored responses is presented to assess the adequacy of the adopted numerical model. The model was used in the targeted parametric study. Moreover, a comparison was performed between the results of the 3-D analyses and analytical solutions. This paper concludes that using mono pile caps decreased both total and differential settlement and increased the efficiency of the piled embankment system. The study of geogrids revealed that they can contribute to decreasing the settlement and to maximizing the part of the embankment load transferred to the piles. Moreover, it was found that increasing the stiffness of the geogrids provides higher tensile forces and hence influences the embankment load carried by the piles more effectively than using multiple layers of low-stiffness geogrid. The efficiency of the piled embankment system was also found to be greater with high embankments than with low-height embankments. The comparison between the numerical 3-D model and the theoretical design methods revealed that many analytical solutions are conservative and less accurate than the 3-D finite element numerical models.
Keywords: efficiency, embankment, geogrids, soft clay
Procedia PDF Downloads 323
5287 The Effects of Information Technology in Urban Health
Authors: Safdari Reza, Zahmatkeshan Maryam, Goli Arji
Abstract:
Background and Aim: Urban health is one of the challenges of the 21st century. Rapid growth and expanding urbanization have implications for health. In this regard, information technology can remove a large number of modern cities' problems. Therefore, the present article aims to study modern information technologies in the development of urban health. Materials and Methods: This is a review article based on library research and Internet searches on reliable websites such as Science Direct, Magiran, Springer, and advanced searches in Google. Some 164 domestic and foreign texts on such topics as the application of ICT tools, including cell phones and wireless tools, GIS, and RFID, in the field of urban health were studied in 2011. Finally, 30 sources were used. Conclusion: Information and communication technologies play an important role in improving people's health and enhancing the quality of their lives. Effective utilization of information and communication technologies requires the identification of opportunities and constraints, and the formulation of appropriate planning principles with regard to social and economic factors, together with preparing the technological, communication and telecommunications, legal, and administrative infrastructures.
Keywords: urban health, information technology, information & communication technology
Procedia PDF Downloads 463
5286 Quality Tools for Shaping Quality of Learning and Teaching in Education and Training
Authors: Renga Rao Krishnamoorthy, Raihan Tahir
Abstract:
The quality of classroom learning and teaching delivery has been and will continue to be debated at various levels worldwide. The regional cooperation programme to improve the quality and labour market orientation of Technical and Vocational Education and Training (RECOTVET), 'Deutsche Gesellschaft für Internationale Zusammenarbeit' (GIZ), in line with the Sustainable Development Goals (SDGs), has taken the initiative in the development of quality TVET in the ASEAN region by developing the Quality Toolbox for Better TVET Delivery (Quality Toolbox). This initiative aims to provide quick and practical materials to trainers, instructors, and personnel involved in education and training at an institute to shape the quality of classroom learning and teaching. The Quality Toolbox for Better TVET Delivery was developed in three stages: literature review and development, validation, and finalization. Thematic areas in the Quality Toolbox were derived from the collective input of concerns and challenges raised in experts' workshops through moderated sessions involving representatives of TVET institutes from 9 ASEAN Member States (AMS). The sessions were facilitated by professional moderators and international experts. TVET practitioners representing the AMS further analysed and discussed the structure of the Quality Toolbox and the content of the thematic areas and outlined a set of specific requirements and recommendations. An application exercise of the Quality Toolbox was carried out by TVET institutes among the AMS. Experience-sharing sessions from participating ASEAN countries were conducted virtually.
The findings revealed that TVET institutes use two types of approaches in shaping the quality of learning and teaching, inductive and deductive; that shaping quality in learning and teaching is a non-linear process; and finally, that Q-tools can be adopted and adapted to shape the quality of learning and teaching at TVET institutes in the following ways: improvement of institutional quality, improvement of teaching quality, and improvement of the organisation of learning and teaching for students and trainers. The Quality Toolbox has good potential to be used at education and training institutes to shape quality in learning and teaching.
Keywords: AMS, GIZ, RECOTVET, quality tools
Procedia PDF Downloads 129
5285 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India
Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel
Abstract:
Planning and drawing are important aspects of civil engineering. Computer-based urban models are used to test theories about spatial location and interaction between land uses and related activities. The planner's primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface for urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m² between north latitudes 23°9'5''N and 23°10'55''N and east longitudes 72°42'2''E and 72°42'16''E. To develop 3D city models of GIFT city, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect Ground Control Points (GCPs) in the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected in the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data were used to generate the terrain of GIFT city. The Google Satellite Map was placed in the background to get the exact location of the GIFT city. Various plugins and tools in QGIS were used to convert the raster layers of the base map of GIFT city into a 3D model. The fly-through tool was used for capturing and viewing the entire area of the city in 3D.
This paper discusses all techniques and their usefulness in 3D city model creation from the GCP, base map, SRTM and QGIS.
Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM
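The GCP-based registration step described in this abstract amounts to fitting a coordinate transform from control points; QGIS's georeferencer offers several transform types, and the simplest, a 2-D affine fit by least squares, can be sketched as follows. The pixel and UTM coordinate values below are illustrative, not the survey's actual data.

```python
import numpy as np

def fit_affine(pixel_xy, world_xy):
    """Solve world = A @ [px, py, 1] for the 2x3 affine matrix A
    by least squares over all GCPs."""
    P = np.hstack([np.asarray(pixel_xy, float),
                   np.ones((len(pixel_xy), 1))])   # n x 3
    W = np.asarray(world_xy, float)                # n x 2
    X, *_ = np.linalg.lstsq(P, W, rcond=None)      # 3 x 2
    return X.T                                     # 2 x 3

def apply_affine(A, pixel_xy):
    """Map pixel coordinates to world coordinates with a fitted A."""
    P = np.hstack([np.asarray(pixel_xy, float),
                   np.ones((len(pixel_xy), 1))])
    return P @ A.T

# Four GCPs: pixel coordinates -> UTM zone 43N metres (illustrative).
pixels = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
world = [(3.30e5, 2.56e6), (3.31e5, 2.56e6),
         (3.30e5, 2.559e6), (3.31e5, 2.559e6)]
A = fit_affine(pixels, world)
pred = apply_affine(A, [(500, 500)])   # centre of the image
```

With more than three GCPs the affine fit is overdetermined, and the least-squares residuals give the RMS registration error that QGIS reports per control point.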
Procedia PDF Downloads 246
5284 Towards the Effectiveness/ Performance of Spatial Communication within the Composite Interior Spaces: Wayfinding System in the Saudi National Museum as a Case Study
Authors: Afnan T. Bagasi, Donia M. Bettaieb, Abeer Alsobahi
Abstract:
The wayfinding system relates directly and indirectly to the course of the museum journey for visitors. The design aspects of this system play an important role in making it an effective communication system within the museum space. However, translating the concepts that pertain to its design, such as intelligibility, which is based on integration and connectivity in museum space design, needs more customization in the form of specific design considerations with reference to the most important approaches. Those approaches link the organizational and practical aspects to the semiotic and semantic aspects related to space syntax by targeting the visual and perceived consistency of visitors. In this context, the study aims to identify how to apply the concepts of intelligibility and clarity, by employing integration and connectivity, to design a wayfinding system in museums as a kind of composite interior space. Using the available plans and images to extrapolate the design considerations used in the wayfinding system of the Saudi National Museum as a case study, a descriptive-analytical method was used to understand the basic organizational and morphological principles of the museum space through four main aspects of space design: morphological, semantic, semiotic, and pragmatic. The study's findings will assist designers, professionals, and researchers in the field of museum design in understanding the significance of the wayfinding system in museum spaces by highlighting its essential aspects using a clear analytical method.
Keywords: wayfinding system, museum journey, intelligibility, integration, connectivity
Procedia PDF Downloads 171
5283 Interests and Perspectives of a Psychosocial Rehabilitation Diagnosis: A Useful Tool in the Evaluation About the Potentials of Long-Term Institutionalized Chronic Patients
Authors: I. Dumand, C. Clesse, M. Decker, C. Savini, J. Lighezzolo-Alnot
Abstract:
In the landscape of French psychiatry, long-term institutionalization of patients with severe and disabling chronic disorders is common. Faced with the failures of classical reinsertion, these users are sometimes hurriedly considered 'insortables' (unable to be discharged). However, this representation is often swayed by the current behavior of the patient as seen through clinical observation. Unfortunately, it seems that this way of proceeding cannot integrate the potentialities of institutionalized patients and their possible evolution. Therefore, in order not to reach hasty conclusions about the life perspectives of these individuals, it seems essential to associate with clinical observation a psychosocial rehabilitation diagnosis. Multidisciplinary, it combines all the aspects that make up the life of the subject (life aspirations, psychosocial determinants, family support, cognitive potential, symptoms...). In this paper, we will first rank the different aspects that are necessary prerequisites to the realization of a psychosocial rehabilitation diagnosis. Then, we will speak specifically of the issue of psychological evaluation. By adopting an integrative approach combining neuropsychological tools (Grober and Buschke, Stroop, WCST, AIPSS, WAIS, Eyes test...) and projective tools interpreted from a psychodynamic angle (Rorschach, TAT...), we think that we can grasp the patient as a whole. During this process, we will justify the interest of combining a cognitive and a psychoaffective approach, identify the different items assessed, and discuss their future implications for the everyday life of the users. Finally, we show that this diagnosis can give a chance of reintegration to 30% of patients considered 'insortables'.
In conclusion, we will highlight the importance of this process, dear to community psychology, emphasizing at the same time the interest of this approach in terms of empowerment, recovery, and quality of life.
Keywords: assessment, potentiality, psychosocial rehabilitation diagnosis, tools
Procedia PDF Downloads 372
5282 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing
Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.
Abstract:
Selection of suitable armour materials for defense applications is crucial with respect to increasing the mobility of the systems as well as maintaining safety. Therefore, armour design studies require determining the material with the lowest possible areal density that successfully resists the predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour-grade steels. AA5083 aluminium alloy, which fits the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armour vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys that maintain the military standards. It has been witnessed that AA2xxx, AA6xxx, and AA7xxx aluminium alloys are the potential materials to supplement AA5083 aluminium alloy. Among these aluminium series, AA7xxx aluminium alloy (heat treatable) possesses high strength and can compete with armour-grade steels. Earlier investigations revealed that layering of AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard boron carbide layer on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile on initial impact while the tough backing (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile into the armour has been resolved by strain energy model analysis. The perforation shearing area, i.e., the interface of projectile and armour, is taken into account for the evaluation of penetration into the armour.
Fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at Defence Metallurgical Research Laboratory (DMRL), Hyderabad in standardized testing conditions. Analytical results were well validated with experimental obtained one.Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target
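The strain-energy treatment of penetration can be illustrated with a minimal energy-balance sketch. This is not the authors' model: the plug-shearing work term is a textbook simplification, and all inputs (shear strength, projectile and plate geometry) are hypothetical.

```python
import math

def residual_velocity(m_kg, v0, tau_shear, d_proj, t_plate):
    """Residual projectile velocity after plugging perforation, from a
    simple energy balance (illustrative only, not the paper's model).
    Energy absorbed = shear stress x sheared cylindrical area (pi*d*t)
    x displacement (~plate thickness t)."""
    e_shear = tau_shear * math.pi * d_proj * t_plate * t_plate
    e_kin = 0.5 * m_kg * v0 ** 2
    if e_kin <= e_shear:
        return 0.0  # projectile is stopped inside the target
    return math.sqrt(2.0 * (e_kin - e_shear) / m_kg)
```

With zero shear strength the projectile exits at its impact velocity; a sufficiently strong target stops it entirely.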
Procedia PDF Downloads 329
5281 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumptions of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and the morphology of both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating this system. Most Arabic word analysis systems, as mainly used in MENA language technology tools, are based on phonotactic morpho-syntactic analysis of a word driven by lexical rules, without taking contextual or semantic morphological implications into account. Therefore, an automatic analysis tool is needed that takes the word sense into account and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc.
As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage was very low for dialectal Arabic, it is important to investigate deeply how the dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that accounting for dialectal variability can improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
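The n-gram continuation probabilities mentioned above can be sketched with a toy character-bigram model. The training words and the add-one smoothing choice are illustrative assumptions, not the paper's actual model.

```python
from collections import Counter

def train_bigrams(words):
    """Character-unigram and bigram counts with word-boundary markers."""
    uni, bi = Counter(), Counter()
    for w in words:
        chars = ["<w>"] + list(w) + ["</w>"]
        uni.update(chars)
        for a, b in zip(chars, chars[1:]):
            bi[(a, b)] += 1
    return uni, bi

def continuation_prob(uni, bi, a, b, vocab_size):
    """Add-one (Laplace) smoothed P(b | a) over characters."""
    return (bi[(a, b)] + 1) / (uni[a] + vocab_size)
```

Such scores can rank candidate segmentation points: a low continuation probability between two characters suggests a plausible boundary.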
Procedia PDF Downloads 42
5280 A Study on the Personality Traits of Students Who Have Chosen Medicine as Their Career
Authors: Khairani Omar, Shalinawati Ramli, Nurul Azmawati Mohamed, Zarini Ismail, Nur Syahrina Rahim, Nurul Hayati Chamhuri
Abstract:
Choosing a career that matches a student's personality traits is one of the key factors for future work satisfaction, because career satisfaction is highest when the career is in line with one's personality strengths, values and attitudes. Personality traits play a major role in determining the success of a student in the medical course. In the pre-clinical years, medical theories are emphasized; thus, conscientious students perform better than those with a lower level of this trait. As the emphasis changes in the clinical years, during which patient interaction is important, personality traits involving interpersonal values become more essential for success. The aim of this study was to determine the personality traits of students who had chosen medicine as their career. It was a cross-sectional study conducted at the Islamic Science University of Malaysia. The respondents consisted of 81 students aged between 20-21 years. A personality assessment inventory validated for the local context was used to determine the students' personality traits. The instrument assessed 15 personality traits, namely: aggressive, analytical, autonomy, creativity, extrovert, intellectual, motivation, diversity, resiliency, self-criticism, control, helpful, support, structured and achievement. The scores ranged between 1-100% and were categorized into low (1-30%), moderate (40-60%) and high (70-100%). The respondents were Year 3 pre-clinical medical students, with more female students (69%) than male students (31%). The majority were from middle-income families, and approximately 70% of both parents of the respondents had tertiary education. The majority of the students had high scores in autonomy, creativity, diversity, helpful, structured and achievement; in other words, more than 50% of them scored high (70-100%) in these traits, which is beneficial for the medical course.
For the aggressive trait, 54% had moderate scores, which is compatible with medicine as it indicates an inclination towards assertiveness. In the analytical and intellectual components, only 40% and 25% had high scores, respectively. These results contradict the usual expectation that medical students are highly analytical and intellectual. It would be an added value if the students had high scores in being extrovert, as this reflects good interpersonal skills; however, the students had approximately similar scores in all categories of this trait. Being resilient in medical school is important as the course is difficult and demanding; the students had good scores in this component, with 46% scoring high and 39% moderate. In conclusion, by understanding their personality traits, strengths and weaknesses, the students will have an opportunity to improve themselves in the areas they lack. This will help them to become better doctors in the future.
Keywords: career, medical students, medicine, personality traits
Procedia PDF Downloads 296
5279 Competition Law as a “Must Have” Course in Legal Education
Authors: Noemia Bessa Vilela, Jose Caramelo Gomes
Abstract:
All law students are familiarized, in the first years of their bachelor of laws, with the concepts of “public goods” and “private goods”; often, the legal concept does not exactly match the economic one, and some confusion is created as a consequence. The list of goods that falls under each category is not exhaustive, nor are students given proper mechanisms to acknowledge that some legal fields can, on their own, be considered a “public good”; this is the case of competition. Legal authors consider that “competition law is used to promote public interest” and, as such, is a “public good”; in economic theory, competition is the first public good in a market economy, as the enabler of allocative efficiency. Competition law is the legal tool to support the proper functioning of the market economy and of democracy itself. While competition law only applies to economic activities, competition is also the object of private litigation as an integral part of public law. Still, regardless of the importance of competition law in economic activity and market regulation, most students complete their studies in law, join the Bar Associations and engage in their professional activities without having been given sufficient tools to deal with the increasing demands of a globalized world. The lack of knowledge of economics, market functioning and the mechanisms at their reach to ensure the proper realization of their duties as lawyers/attorneys-at-law would be tackled if competition law were included in the curricula of law schools. Proper teaching of competition law would combine the foundations of competition law, doctrine, case solving and case law study. Students should understand and apply the analytical model. Special emphasis should be given to EU competition law, namely TFEU Articles 101 to 106. The Damages Directive should also be part of the curriculum.
Students must first acquire and master the economic rationale, as competition and the world of competition law are the cornerstone of a sound and efficient market. The teaching of competition law in undergraduate programs in law would contribute to fulfilling the potential of students who will deal with matters related to consumer protection and economic and commercial law issues, both in private practice and as in-house lawyers for companies.
Keywords: higher education, competition law, legal education, law, market economy, industrial economics
Procedia PDF Downloads 142
5278 Oxidation Assessment of Mayonnaise with Headspace Single-Drop Microextraction (HS-SDME) Coupled with Gas Chromatography-Mass Spectrometry (GC-MS) during Shelf-Life
Authors: Kooshan Nayebzadeh, Maryam Enteshari, Abdorreza Mohammadi
Abstract:
The oxidative stability of mayonnaise under different storage temperatures (4 and 25 ˚C) during a 6-month shelf-life was investigated by different analytical methods. In this study, headspace single-drop microextraction (HS-SDME) combined with gas chromatography-mass spectrometry (GC-MS), a green, sensitive and rapid technique, was applied to evaluate the oxidative state of mayonnaise. Oxidation changes of the oil extracted from mayonnaise were monitored by analytical parameters including peroxide value (PV), p-Anisidine value (p-An V), thiobarbituric acid value (TBA), and oxidative stability index (OSI). Hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-SDME/GC-MS method in the mayonnaise matrix. The rate of oxidation in the mayonnaises increased during storage and was greater at 25 ˚C. The p-Anisidine and TBA values gradually increased during the 6 months, while the OSI decreased. At both temperatures, the content of hexanal was higher than that of heptanal during all storage periods. Significant increases in hexanal and heptanal concentrations were also observed in the second and sixth months of storage. Hexanal concentrations in mayonnaises stored at 25 ˚C showed the highest values during storage. It can be concluded that the temperature and duration of storage are decisive parameters affecting the quality and oxidative stability of mayonnaise. Additionally, hexanal content is a more reliable oxidative indicator than heptanal, and HS-SDME/GC-MS can be applied in a quick and simple manner.
Keywords: oxidative stability, mayonnaise, headspace single-drop microextraction (HS-SDME), shelf-life
Procedia PDF Downloads 419
5277 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies
Authors: Chao-Ton Su, Li-Fei Chen
Abstract:
Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge relates to root causes and defect patterns: in the R&D stage, root causes are often unknown and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the proposed approach's effectiveness. Finally, the use of traditional experimental design versus the proposed approach for process optimization is discussed.
Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design
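Among the tools listed (neural networks, genetic algorithms, etc.), a genetic algorithm for tuning continuous process parameters can be sketched minimally as below. This is a generic real-coded GA, not the study's implementation; the objective function, population size and mutation scale are illustrative assumptions.

```python
import random

def ga_optimize(fitness, bounds, pop=30, gens=40, seed=0):
    """Minimal real-coded GA for parameter settings: tournament selection,
    blend crossover, gaussian mutation, two-individual elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness)
        nxt = ranked[:2]                      # keep the two best unchanged
        while len(nxt) < pop:
            # two parents chosen by tournaments of three
            a = min(rng.sample(ranked, 3), key=fitness)
            b = min(rng.sample(ranked, 3), key=fitness)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # blend crossover
            i = rng.randrange(dim)                         # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            nxt.append(child)
        population = nxt
    return min(population, key=fitness)
```

In practice the fitness function would be a process response (or a surrogate model of it) to be minimized over the feasible parameter ranges.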
Procedia PDF Downloads 145
5276 Assessment of Frying Material by Deep-Fat Frying Method
Authors: Brinda Sharma, Saakshi S. Sarpotdar
Abstract:
Deep-fat frying is a popular standard method that has been studied mainly to clarify the complicated mechanisms of fat decomposition at high temperatures and to assess their effects on human health. The aim of this paper is to show how process engineering has recently improved our understanding of the fundamental principles and mechanisms involved at different scales and times throughout the process: pretreatment, frying, and cooling. It covers the several aspects of deep-fat frying. New results regarding the understanding of the frying method have been obtained thanks to major breakthroughs in on-line instrumentation (heat, steam flux, and local pressure sensors), in the methodology of microstructural and imaging analysis (NMR, MRI, SEM) and in software tools for the simulation of coupled transfer and transport phenomena. Such advances have opened the way for building significant knowledge of the behavior of various materials and for the development of new tools to manage frying operations via final product quality in real conditions. Lastly, this paper promotes an integrated approach to the frying method, combining numerous competencies such as those of chemists, engineers, toxicologists, nutritionists, and materials scientists, as well as those of the catering and industrial sectors.
Keywords: frying, cooling, imaging analysis (NMR, MRI, SEM), deep-fat frying
Procedia PDF Downloads 430
5275 Influences of Separation of the Boundary Layer in the Reservoir Pressure in the Shock Tube
Authors: Bruno Coelho Lima, Joao F.A. Martos, Paulo G. P. Toro, Israel S. Rego
Abstract:
The shock tube is a ground facility widely used in aerospace and aeronautics science and technology for studies of gas dynamics and chemical-physical processes in gases at high temperature, explosions, and dynamic calibration of pressure sensors. A shock tube in its simplest form comprises two tubes of equal cross-section separated by a diaphragm, whose function is to separate two reservoirs at different pressures. The high-pressure reservoir is called the Driver; the low-pressure reservoir is called the Driven. When the diaphragm is broken by the pressure difference, a normal, non-stationary shock wave (the Incident Shock Wave) forms at the diaphragm location and travels toward the closed end of the Driven section. When this shock wave reaches the closed end of the Driven section, it is completely reflected. The reflected shock wave then interacts with the boundary layer created by the flow induced by the passage of the incident shock wave, and this interaction forces the separation of the boundary layer. The aim of this paper is to analyze the influence of boundary layer separation on the reservoir pressure in the shock tube. A comparison among CFD (Computational Fluid Dynamics), experimental tests, and analytical analysis was performed. For the analytical analysis, routines in Python were created; for the numerical simulations (CFD), Ansys Fluent was used; and for the experimental tests, the T1 shock tube located at IEAv (Institute of Advanced Studies) was used.
Keywords: boundary layer separation, moving shock wave, shock tube, transient simulation
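The analytical side of such a comparison typically rests on the classic shock-tube relation linking the diaphragm pressure ratio p4/p1 to the incident-shock pressure ratio p2/p1. The routine below is a generic bisection solver for that textbook relation (identical driver/driven gases and equal sound speeds by default); it is not the authors' Python code.

```python
import math

def shock_tube_p2p1(p4p1, g1=1.4, g4=1.4, a1a4=1.0):
    """Solve the classic shock-tube equation for p2/p1 across the
    incident shock, given the diaphragm pressure ratio p4/p1.
    g1, g4: specific-heat ratios; a1a4: speed-of-sound ratio a1/a4."""
    def p4p1_of(p2p1):
        num = (g4 - 1.0) * a1a4 * (p2p1 - 1.0)
        den = math.sqrt(2.0 * g1 * (2.0 * g1 + (g1 + 1.0) * (p2p1 - 1.0)))
        return p2p1 * (1.0 - num / den) ** (-2.0 * g4 / (g4 - 1.0))
    lo, hi = 1.0 + 1e-9, p4p1  # p2/p1 lies between 1 and p4/p1
    for _ in range(200):       # bisection on the monotonic relation
        mid = 0.5 * (lo + hi)
        if p4p1_of(mid) < p4p1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an air-air tube with p4/p1 = 10, this gives p2/p1 close to the tabulated value of about 2.8.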
Procedia PDF Downloads 315
5274 Analytic Hierarchy Process and Multi-Criteria Decision-Making Approach for Selecting the Most Effective Soil Erosion Zone in Gomati River Basin
Authors: Rajesh Chakraborty, Dibyendu Das, Rabindra Nath Barman, Uttam Kumar Mandal
Abstract:
In the present study, the objective is to find the most effective zone causing soil erosion in the Gumati river basin, located in the state of Tripura, a north-eastern state of India, using the analytic hierarchy process (AHP) and multi-objective optimization on the basis of ratio analysis (MOORA). The watershed is segmented into 20 zones based on area, and the maximum elevation above sea level was identified using Google Earth. Soil erosion is determined using the universal soil loss equation. The independent variables of the soil loss equation, such as the rainfall-runoff erosivity index and the soil erodibility factor, bear different weightages for different soil zones; therefore, the analytic hierarchy process (AHP) is used to find the weightage factor for all variables. Thereafter, the MOORA approach is used to select the most effective zone causing soil erosion. The MCDM technique concludes that the maximum soil erosion is occurring in zone 14.
Keywords: soil erosion, analytic hierarchy process (AHP), multi-criteria decision making (MCDM), universal soil loss equation (USLE), multi-objective optimization on the basis of ratio analysis (MOORA)
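The AHP weighting step can be sketched as follows: weights are the principal eigenvector of a pairwise-comparison matrix, with a consistency ratio as a sanity check. The 3-criterion comparison matrix below is a hypothetical example, not the study's data; only part of Saaty's random-index table is included.

```python
def ahp_weights(M, iters=100):
    """Priority weights from a pairwise-comparison matrix M via power
    iteration, plus Saaty's consistency ratio (CR < 0.1 is acceptable)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # principal eigenvalue, consistency index and ratio
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # random index (partial table)
    return w, ci / ri

# Hypothetical comparisons of three USLE factors (e.g. erosivity vs erodibility vs slope)
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w, cr = ahp_weights(M)
```

The resulting weights would then feed the MOORA ranking of the 20 zones.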
Procedia PDF Downloads 538
5273 Effect of Environmental Parameters on the Water Solubility of the Polycyclic Aromatic Hydrocarbons and Derivatives using Taguchi Experimental Design Methodology
Authors: Pranudda Pimsee, Caroline Sablayrolles, Pascale De Caro, Julien Guyomarch, Nicolas Lesage, Mireille Montréjaud-Vignoles
Abstract:
The MIGR’HYCAR research project was initiated to provide decisional tools for risks connected to oil spill drifts in continental waters. These tools aim to serve in the decision-making process once oil spill pollution occurs and/or as reference tools to study scenarios of potential impacts of pollution on a given site. This paper focuses on the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. 41 polycyclic aromatic hydrocarbons (PAHs) and derivatives, among them 16 EPA priority pollutants, were studied by dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction differed from the parent oil profile due to the differing water solubilities of the oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity, and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils divide into three groups: the solubility of domestic fuel and Jet A1 presented a high sensitivity to the parameters studied, meaning they must be taken into account; for gasoline (SP95-E10) and diesel fuel, a medium sensitivity was observed; and the four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant for the water-soluble fraction.
Keywords: monitoring, PAHs, water-soluble fraction, SBSE, Taguchi experimental design
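A Taguchi screening of four parameters at three levels, as used above, is conventionally run on an L9 orthogonal array with factor effects read off per-level mean signal-to-noise ratios. The sketch below shows that machinery generically; the array is the standard L9(3^4), but the S/N criterion and any responses are illustrative assumptions.

```python
import math

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2)
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio for a 'smaller is better' response (dB)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def factor_effects(array, sn):
    """Mean S/N per level for each factor column of the array."""
    effects = []
    for col in range(len(array[0])):
        means = []
        for level in range(3):
            vals = [sn[r] for r in range(len(array)) if array[r][col] == level]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects
```

A factor whose per-level means spread widely (e.g. temperature in the study above) is a significant one; a flat profile indicates low sensitivity.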
Procedia PDF Downloads 325
5272 Investigating the Effect of Convection on the Rating of Buried Cables Using the Finite Element Method
Authors: Sandy J. M. Balla, Jerry J. Walker, Isaac K. Kyere
Abstract:
The heat transfer coefficient at the soil–air interface is important in calculating underground cable ampacity when convection occurs. Calculating the heat transfer coefficient accurately is complex because of the temperature variations at the earth's surface. This paper presents the effect of convective heat flow across the ground surface on the rating of three single-core, 132 kV, XLPE cables buried underground. The finite element method (FEM) is a numerical analysis technique used to determine the rating of buried cables under installation conditions that are difficult to handle with the analytical method. This study demonstrates the use of FEM to investigate the effect of convection on the rating of buried cables in flat formation using the QuickField finite element simulation software. Developing a model to simulate this type of situation necessitates important boundary conditions, such as burial depth, soil thermal resistivity, and soil temperature, which play an important role in the simulation's accuracy and reliability. The results show that when the ground surface is taken as a convection interface, the conductor temperature rises and may exceed the maximum permissible temperature when the rated current flows. This is because the ground surface acts as a convection interface between the soil and the air (fluid). This result was compared with, and correlates with, the rating obtained using the IEC 60287 analytical method, which is based on the condition that the ground surface is an isotherm.
Keywords: finite element method, convection, buried cables, steady-state rating
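The analytical rating against which the FEM result is compared reduces, in its most stripped-down form, to a balance between permissible temperature rise and conductor losses through a lumped thermal resistance. The sketch below is a drastic simplification of IEC 60287 (no dielectric or sheath losses, one lumped thermal resistance); all numeric values are hypothetical.

```python
import math

def simple_rating(theta_max, theta_amb, r_ac, t_thermal):
    """Very simplified steady-state ampacity (A): all losses assumed in
    the conductor, single lumped thermal resistance t_thermal (K.m/W),
    AC resistance r_ac (ohm/m). Illustrative only, not full IEC 60287."""
    d_theta = theta_max - theta_amb  # permissible conductor temperature rise (K)
    return math.sqrt(d_theta / (r_ac * t_thermal))
```

For a 90 ˚C XLPE conductor limit, 20 ˚C soil, r_ac = 3e-5 ohm/m and 1.2 K.m/W, this yields roughly 1.4 kA; raising the ambient (or effective surface) temperature, as convection effects can, lowers the rating.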
Procedia PDF Downloads 131
5271 Ranking of Managerial Parameters Impacting upon Performance of Football Referees in Iran
Authors: Mohammad Reza Boromand, Masoud Moradi, Amin Eskandari
Abstract:
The present study attempts to rank the managerial parameters impacting the performance of football referees in Iran. The population consisted of all referees in Leagues 1, 2 and 3 as well as the Super League of Iran (N=273), of which 160 referees and assistant referees were selected in 2013-2014. A researcher-designed questionnaire was used for data collection, divided into two sections: (1) demographic details (age range, marital status, employment, refereeing experience, education level, refereeing level and proficiency) and (2) items related to parameters impacting the performance of referees (structural parameters, operational parameters, environmental parameters, temporal parameters, economic parameters, facilities and tools, personal performance and performance evaluation). Internal consistency was calculated by Cronbach's alpha (r=0.85). For data analysis, Friedman's test was performed along with descriptive statistics, using SPSS software (α>0.05). The findings showed the following ranking for the above-mentioned managerial parameters: facilities and tools, personal performance, economic parameters, structural parameters, operational parameters, environmental parameters, temporal parameters, and performance evaluation.
Keywords: Iran, football referees, managerial parameters, performance
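A Friedman-test ranking of this kind amounts to ranking the parameters within each respondent and comparing rank sums. The sketch below computes the Friedman chi-square statistic from scratch on synthetic ratings (ties are not handled; the data are hypothetical, not the study's).

```python
def friedman_statistic(scores):
    """Friedman chi-square for k treatments rated by n raters.
    scores[i][j] = rating by rater i of treatment j (no ties assumed)."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])  # rank within rater
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

Perfect agreement among n raters on k items yields the maximum value n(k-1); disagreement drives the statistic toward zero.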
Procedia PDF Downloads 571
5270 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment within manufacturing technologies. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding, by which machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, thus enabling an optimized design of the machine tool and its components in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtain this matched FEM model is essential. The two constituent aspects of the method are modeling the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process.
Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulation counterpart to the described experiments is presented. Concluding, it is shown that the method, together with a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, and of their combination, helps designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
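A matched thermal model of this kind ultimately rests on heat conduction driven by a laser-induced flux boundary. The sketch below is a minimal 1-D explicit finite-difference stand-in (far simpler than the matched FEM model described): a constant absorbed flux on one face of a steel-like slab, insulated on the other face. All material and process values are hypothetical.

```python
def heat_1d(q_in, n=50, length=0.1, k=50.0, rho=7800.0, cp=500.0, t_end=1.0):
    """Explicit 1-D transient conduction: absorbed laser flux q_in (W/m^2)
    on the left face, insulated right face. Returns the temperature-rise
    profile (K) after t_end seconds. Illustrative only."""
    dx = length / n
    alpha = k / (rho * cp)              # thermal diffusivity (m^2/s)
    dt = 0.4 * dx * dx / alpha          # stable step: dt <= dx^2 / (2 alpha)
    temp = [0.0] * (n + 1)
    for _ in range(int(t_end / dt)):
        new = temp[:]
        # flux boundary on the left half-cell
        new[0] = (temp[0]
                  + 2.0 * alpha * dt / (dx * dx) * (temp[1] - temp[0])
                  + 2.0 * dt * q_in / (rho * cp * dx))
        for i in range(1, n):
            new[i] = temp[i] + alpha * dt / (dx * dx) * (temp[i - 1] - 2.0 * temp[i] + temp[i + 1])
        new[n] = temp[n] + 2.0 * alpha * dt / (dx * dx) * (temp[n - 1] - temp[n])  # insulated
        temp = new
    return temp
```

Such a profile, fed into a thermo-elastic expansion model, is the kind of input the matched FEM approach refines with measured beam paths and reflection coefficients.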
Procedia PDF Downloads 265
5269 Developing Emission Factors of Fugitive Particulate Matter Emissions for Construction Sites in the Middle East Area
Authors: Hala A. Hassan, Vasiliki K. Tsiouri, Konstantinos E. Konstantinos
Abstract:
Fugitive particulate matter (PM) is a major source of airborne pollution in the Middle East countries. The meteorological conditions and topography of the area make it highly susceptible to wind-blown particles, which raise many air quality concerns. Air quality tools such as field monitoring, emission factors, and dispersion modeling have been used in previous research studies to analyze the release and impacts of fugitive PM in the region. However, these tools were originally developed based on experiments made for European and North American regions. In this work, an experimental campaign was conducted in April-May 2014 at a construction site in Doha, Qatar. The ultimate goal is to evaluate the applicability of the existing emission factors to construction sites in dry and arid areas like the Middle East. This publication was made possible by an NPRP award [NPRP 7-649-2-241] from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: particulate matter, emissions, fugitive, construction, air pollution
Procedia PDF Downloads 351
5268 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection
Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino
Abstract:
The portfolio selection problem includes the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. It can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching consensus among group members. Indeed, owing to the diversity among experts, reaching consensus is not always simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that his preferences are conditioned by subjective structures. The present work investigates the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed; in particular, it analyzes the impact of the subjective structure of the decision maker during the evaluation and selection of the alternatives. The experimental framework is divided into three phases. In the first phase, experts evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytic Hierarchy Process models that define the weight each expert gives to all criteria with respect to the proposed alternatives. This step provides insight into how the decision maker's process develops, step by step, from goal analysis to alternative selection. The second phase describes the decision maker's state through Markov chains.
The individual weights obtained in the first phase can be reviewed and described as transition weights from one state to another; thus, by constructing the individual transition matrices, the possible next state of each expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the individual states obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytic Hierarchy Process, and how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.
Keywords: Analytic Hierarchy Process, consensus building, false consensus effect, Markov chains, portfolio selection problem
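The Markov-chain description of an expert's state can be sketched as follows: preference states with a row-stochastic transition matrix, whose long-run (stationary) distribution summarizes where the expert's preferences settle. The 3-state matrix below is a hypothetical example, not data from the study.

```python
def stationary(P, iters=200):
    """Long-run state distribution of a row-stochastic transition
    matrix P, by repeated left-multiplication (power method)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical expert with three preference states (portfolios A, B, C):
# row i gives the probabilities of moving from state i at the next step
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]
pi = stationary(P)
```

Comparing each expert's stationary distribution with the group outcome is one way to make a false-consensus effect visible: an expert may project his own dominant state onto the group.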
Procedia PDF Downloads 93