Search results for: time domain analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40018

38698 Monte Carlo and Biophysics Analysis in a Criminal Trial

Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano

Abstract:

This paper considers a real court case, heard at the Court of Nola in Italy, in which a correct physical description, based on both Monte Carlo and biophysical analyses, would have been sufficient to reach conclusions confirmed by the documentary evidence. It is an example of how forensic physics can corroborate documentary evidence and lead to conclusions that are hard to question. The case was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him damages. The damages would have been caused by a piece of external plaster that allegedly detached from the neighbor's property and hit Mr. DS while he was in his garden, more than a meter away from the facade of the building from which the plaster would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never produced the plaster that had hit him, nor was he able to say where it had come from. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images from Mr. OP's security cameras show no movement in Mr. DS's garden over a long interval (about 2 hours) around the time of the alleged accident, nor any person entering or leaving Mr. DS's house in the same interval. Biophysical analysis shows that the diagnosis on the medical certificate and the wound declared by the defendant, already in conflict with each other, are both incompatible with the fall of external plaster pieces too small to be found. The wind was at level 1 on the Beaufort scale, i.e., too weak even to raise dust (which requires level 4). The motion of the plaster pieces can therefore be described as projectile motion, and collisions with the building cornice can be treated using Newton's law of restitution. Numerous Monte Carlo simulations show that the pieces of plaster could not even have reached Mr. DS's garden, let alone traveled more than 1.30 meters. The results agree with the documentary evidence (the security camera images) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
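The physics described here is straightforward to prototype. Below is a minimal sketch of such a simulation; the facade height, cornice geometry, restitution coefficient, and detachment velocities are illustrative assumptions, not figures from the trial record:

```python
import numpy as np

rng = np.random.default_rng(0)
G = 9.81            # gravity (m/s^2)
H_FACADE = 6.0      # detachment height (m): assumed, not from the case
H_CORNICE = 3.0     # cornice height (m): assumed
W_CORNICE = 0.3     # cornice horizontal extent (m): assumed
E_REST = 0.5        # Newton's coefficient of restitution: assumed
N = 100_000

# Each plaster piece detaches with a small random outward horizontal velocity.
v0 = rng.uniform(0.0, 0.5, N)                      # m/s

# Free fall from the facade down to cornice level.
t1 = np.sqrt(2.0 * (H_FACADE - H_CORNICE) / G)
x1 = v0 * t1                                       # horizontal travel so far
vy = G * t1                                        # downward speed at cornice level

# Pieces within the cornice width bounce: vertical velocity is reversed and
# scaled by the restitution coefficient, horizontal velocity is retained.
bounced = x1 <= W_CORNICE
v_vert = np.where(bounced, E_REST * vy, -vy)       # sign convention: up positive

# Time to fall the remaining H_CORNICE: solve H + v*t - G*t^2/2 = 0.
t2 = (v_vert + np.sqrt(v_vert**2 + 2.0 * G * H_CORNICE)) / G
x_total = x1 + v0 * t2

print(f"mean range: {x_total.mean():.2f} m, max range: {x_total.max():.2f} m")
print(f"P(range >= 1.30 m) = {(x_total >= 1.30).mean():.4f}")
```

With these assumed values the landing distances stay well under 1.30 m, which is the qualitative pattern the abstract reports.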

Keywords: biophysics analysis, Monte Carlo simulations, Newton’s law of restitution, projectile motion

Procedia PDF Downloads 116
38697 Methodology of Geometry Simplification for Conjugate Heat Transfer of Electrical Rotating Machines Using Computational Fluid Dynamics

Authors: Sachin Aggarwal, Sarah Kassinger, Nicholas Hoffman

Abstract:

Geometry simplification is a key step in performing conjugate heat transfer analysis using CFD. This paper proposes a standard methodology for the geometry simplification of rotating machines, such as electrical generators and electrical motors (both air- and liquid-cooled). These machines are extensively deployed throughout the aerospace and automotive industries, where optimization of weight, volume, and performance is paramount, especially given the current global transition to renewable energy sources and vehicle hybridization and electrification. Conjugate heat transfer analysis is an essential step in optimizing their complex design. The proposed methodology helps reduce convergence issues due to poor mesh quality, thus decreasing computational cost and overall analysis time.

Keywords: CFD, electrical machines, geometry simplification, heat transfer

Procedia PDF Downloads 110
38696 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger

Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe

Abstract:

Energy demands are increasing, whereas energy sources, especially non-renewable ones, are limited. Due to the intermittent nature of renewable energy sources, finding new ways to store energy has become the need of the hour. Among the various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at a nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, built in ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. As its name suggests, a vertical triplex tube heat exchanger consists of three concentric tubes (pipe sections) that partition the device into three fluid domains. The PCM fills the middle domain, with heat transfer fluids flowing in the outermost and innermost domains. To enhance heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain into eight sections, each filled with PCM. The model is validated against earlier work, and a grid independence test is presented. Further studies on the freezing and melting processes were carried out. The results are presented as isotherm plots and liquid fraction contours.
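In Fluent-style melting/solidification models, the PCM state is tracked through a liquid fraction field derived from temperature. A minimal sketch of the usual linear enthalpy-porosity definition follows; the solidus/liquidus values are placeholders, not the properties of the PCM studied here:

```python
import numpy as np

T_SOLIDUS = 345.0   # K, placeholder PCM property
T_LIQUIDUS = 350.0  # K, placeholder PCM property

def liquid_fraction(T):
    """Linear enthalpy-porosity liquid fraction, clamped to [0, 1]."""
    beta = (T - T_SOLIDUS) / (T_LIQUIDUS - T_SOLIDUS)
    return np.clip(beta, 0.0, 1.0)

# Example: temperatures sampled across the PCM annulus during melting.
T_field = np.array([340.0, 346.0, 348.5, 352.0])
print(liquid_fraction(T_field))  # [0.  0.2 0.7 1. ]
```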

Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat

Procedia PDF Downloads 141
38695 Conceptual Knowledge Structure Updates after Instructor Provided Structural Feedback: An Exploratory Study Applied with Undergraduate Architectural Engineering Students

Authors: Roy B. Clariana, Ryan L. Solnosky

Abstract:

Structural feedback is any form of feedback that aims to improve the quality of students' domain-normative conceptual interrelationships. Research with structural feedback points to the potential mediating role of network graphs as feedback for tuning students' conceptual understanding; for example, improved content knowledge and motivation were observed for undergraduate students who accessed the instructor's networks of course content. This exploratory study uses a one-group pretest-posttest design to examine the effects of instructor-provided network feedback during lectures on students' knowledge structure, measured with a concept sorting task at pretest and posttest. Undergraduate students in an architectural engineering course (n = 32) completed a lesson module and then an end-of-unit quiz on building with wood and wood framing. Three weeks later, as a review, students completed a sorting task that used 26 terms from that lesson. A week later, the sorting task data were used to create a group-average network; this network, along with the instructor's expert network, was added to that week's lecture slides, and the two were compared and discussed during class time. A week after that, students completed the sorting task again. The pre- and post-sorting data were rendered into pathfinder networks, and these student networks were compared to five referent networks: the textbook chapter network, the lecture slides network, a network of the end-of-unit quiz, the actual expert network that served as the feedback intervention, and the group-average network. Inspection of means shows that knowledge structure measures improved from pre- to posttest on all five referents, becoming more like the lesson content and more like the expert. Repeated measures analysis with follow-up paired samples t-tests showed significant pre-to-post increases for both the end-of-unit quiz and the expert network referents. The findings show that instructor presentation of structural feedback as networks improved, or 'tuned', students' knowledge structure of the lesson content. This approach takes only a few extra minutes of class time and is fairly simple to implement in ordinary classrooms, so it has wide potential to support classroom instruction and student learning. Further research is needed to determine how critical it is to present the group-average network alongside the expert network in order to highlight group-level misconceptions, or whether presenting only the expert network is sufficient. If a group-level network is necessary, then a simple clicker-like classroom tool could be developed to collect sorting task data during lectures and immediately provide the group-average network for class discussion and reflection.
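Comparisons of a student network to a referent network are typically scored as the proportion of shared links (Goldsmith-style closeness). A minimal sketch of that idea, with invented link sets (deriving pathfinder links from raw sorting data is a separate, more involved step):

```python
def closeness(links_a, links_b):
    """Proportion of shared undirected links between two networks."""
    a = {frozenset(link) for link in links_a}
    b = {frozenset(link) for link in links_b}
    return len(a & b) / len(a | b)

# Invented wood-framing links for one student and the expert referent.
student = [("joist", "beam"), ("beam", "stud"), ("stud", "sheathing")]
expert  = [("joist", "beam"), ("beam", "stud"), ("beam", "header")]

print(f"closeness = {closeness(student, expert):.2f}")  # 2 shared of 4 total -> 0.50
```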

Keywords: classroom instruction, engineering education, knowledge structure, pathfinder networks, structural feedback

Procedia PDF Downloads 52
38694 Starting Characteristic Analysis of LSPM for Pumping System Considering Demagnetization

Authors: Subrato Saha, Yun-Hyun Cho

Abstract:

This paper presents the design process of a high-performance 3-phase, 3.7 kW, 2-pole line start permanent magnet synchronous motor (LSPM) for a pumping system. A method is proposed to study the starting torque characteristics considering line starting with a high-inertia load. A d-q model including the cage was built to study the synchronization capability. Time-stepping finite element analysis was utilized to accurately predict the dynamic and transient performance, efficiency, starting current, speed curve, etc. Considering the load torque of the pump during the starting stage, the rotor bars were designed to minimize demagnetization of the permanent magnets caused by the large starting current.

Keywords: LSPM, starting analysis, demagnetization, FEA, pumping system

Procedia PDF Downloads 461
38693 Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia

Authors: Schnell Zsuzsanna

Abstract:

Background: Difficulty in the interpretation of written texts and in language processing in the visual domain, in other words atypical reading ability, known as dyslexia, is an ever-growing phenomenon in today's societies and educational communities. This much-researched condition affects cognitive abilities and, despite co-occurring with normal intelligence, typically manifests as difficulties in differentiating sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research explains the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrates how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the humanities and social sciences, and should be researched from an empirical approach in which the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social-cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely Hungarian and English), offering solutions through a few applied techniques for success in foreign language learning that can be useful advice for developers of testing methodologies and measures across ESL teaching and testing platforms.

Keywords: dyslexia, social cognition, transparency, modalities

Procedia PDF Downloads 71
38692 Special Single Mode Fiber Tests of Polarization Mode Dispersion Changes in a Harsh Environment

Authors: Jan Bohata, Stanislav Zvanovec, Matej Komanec, Jakub Jaros, David Hruby

Abstract:

Despite the rapid development of new optical networks, optical communication infrastructures still comprise thousands of kilometers of aging optical cables. Many of them are located in harsh environments, which contributes to increased attenuation or induced birefringence of the fibers, leading in turn to an increase in polarization mode dispersion (PMD). In this paper, we report experimental results from environmental optical cable tests and characterization in a climate chamber, focusing on the evaluation of optical network reliability in a harsh environment. For this purpose, a special thermal chamber was adopted, targeting large temperature changes between -60 °C and 160 °C with defined humidity. A single-mode optical cable 230 meters long, having six tubes and a total of 72 single-mode optical fibers, was spliced together to form one fiber link, which was then tested in the climate chamber. The main emphasis was placed on PMD changes, which were evaluated by three different PMD measurement methods (the general interferometry technique, scrambled state-of-polarization analysis, and a polarization optical time domain reflectometer) in order to fully validate the obtained results. Moreover, attenuation and chromatic dispersion (CD), as well as PMD, were monitored using a 17 km long single-mode optical cable. The results imply a strong dependence of PMD on thermal changes: PMD exceeded 200% of its initial value during exposure to extreme temperatures, and the optical system experienced more than 20 dB of insertion loss. The derived statistics are provided in the paper together with an evaluation of the reliability of such an optical system, which could be a crucial tool for optical network designers. The environmental tests are further placed in the context of our previously published results from long-term monitoring of fundamental parameters of an optical cable placed in a harsh environment in a special outdoor testbed. Finally, we provide a correlation between the short-term and long-term monitoring campaigns and statistics, which are necessary for optical network safety and reliability.

Keywords: optical fiber, polarization mode dispersion, harsh environment, aging

Procedia PDF Downloads 363
38691 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as knowledge of the network structure, such as which kinases and phosphatases lead to the observed phosphorylation states, is incomplete. Such reactions are therefore often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ (Kappa) models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research, we present a framework that enables a user to explore phosphoproteomic time-series data in the context of models. The user can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine which empirical analysis to perform in order to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.
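As a toy illustration of the concept (plain Python with invented reactions; this is not the Minardo extension itself), hypothesised reactions can be flagged alongside known ones, and a simple requirement check reduces to asking whether the required node is reachable and whether the activation path depends on uncertain reactions:

```python
# Each reaction: (kinase, substrate, hypothesised?)
reactions = [
    ("EGFR", "GRB2", False),
    ("GRB2", "SOS",  False),
    ("SOS",  "RAS",  True),   # hypothesised: no direct evidence in the data
    ("RAS",  "RAF",  False),
    ("RAF",  "MEK",  False),
    ("MEK",  "ERK",  False),
]

def paths_to(target, source="EGFR"):
    """Depth-first search for activation paths, tracking uncertain edges."""
    stack = [(source, [], set())]
    found = []
    while stack:
        node, path, uncertain = stack.pop()
        if node == target:
            found.append((path, uncertain))
            continue
        for kin, sub, hyp in reactions:
            if kin == node and sub not in [p[1] for p in path]:
                stack.append((sub, path + [(kin, sub)],
                              uncertain | ({(kin, sub)} if hyp else set())))
    return found

# Requirement: EGF stimulation must eventually phosphorylate ERK.
for path, uncertain in paths_to("ERK"):
    print("path:", " -> ".join(["EGFR"] + [sub for _, sub in path]))
    print("uncertain edges on this path:", uncertain or "none")
```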

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 237
38690 Using Emerging Hot Spot Analysis to Analyze Overall Effectiveness of Policing Policy and Strategy in Chicago

Authors: Tyler Gill, Sophia Daniels

Abstract:

The paper examines how accounting for the spatial-temporal constraints of the data can help inform policymakers and law enforcement officials. The authors utilize Chicago crime data from 2006-2016 to demonstrate that the Emerging Hot Spot tool is an ideal hot spot clustering approach for analyzing crime data. Traditional approaches include density maps or creating a spatial weights matrix to incorporate the spatial-temporal constraints. This newer approach utilizes a space-time implementation of the Getis-Ord Gi* statistic to visualize the data more quickly and support better decisions. The research will help complement socio-cultural research to find key patterns that can frame future policies and evaluate the implementation of prior strategies. Through this analysis, homicide trends and patterns are found more effectively, and recommendations for use by non-traditional users of GIS are offered for real-life implementation.
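The Gi* statistic itself is compact enough to sketch directly. Below is a minimal numpy implementation of the (single-time-slice) Getis-Ord Gi* using a simple binary contiguity weights matrix; the Emerging Hot Spot tool additionally bins events into space-time cubes and runs a trend test over the resulting Gi* series, which is omitted here:

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-score per location. x: (n,) values; w: (n, n) weights
    with w[i, i] = 1 (self-inclusion is what makes it 'star')."""
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x**2).mean() - xbar**2)
    wx = w @ x                          # weighted sum of neighbours incl. self
    w_sum = w.sum(axis=1)
    w_sq = (w**2).sum(axis=1)
    num = wx - xbar * w_sum
    den = s * np.sqrt((n * w_sq - w_sum**2) / (n - 1))
    return num / den                    # large positive => statistical hot spot

# Toy example: crime counts on a 1-D strip of 6 cells, rook contiguity.
counts = np.array([2.0, 3.0, 15.0, 18.0, 4.0, 1.0])
w = np.eye(6)
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(np.round(getis_ord_gi_star(counts, w), 2))  # cells 2-3 stand out
```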

Keywords: crime mapping, emerging hot spot analysis, Getis-Ord Gi*, spatial-temporal analysis

Procedia PDF Downloads 230
38689 From the Sharing Economy to Social Manufacturing: Analyzing Collaborative Service Networks in the Manufacturing Domain

Authors: Babak Mohajeri

Abstract:

In recent years, the conventional ownership-based business model has shifted towards accessibility in a variety of markets. Two trends can be observed in the evolution of this rental-like business model. The first is the technological development that enables the emergence of new business models, which are increasingly agile and flexible. For example, Spotify, an online music streaming company, gives consumers access to millions of music tracks, conveniently through a smartphone, tablet, or computer. Similarly, Car2Go, a car sharing company, provides its members with flexible access to nearby shared cars. The second trend is the increase in communication and connection via social networks, which enables a shift to peer-to-peer accessibility-based business models. Conventionally, companies provide customers with access to the company's own products or services. In the peer-to-peer model, by contrast, companies facilitate access and connections among their customers, allowing them to use property, skills, competencies, or services owned by other customers. This is the so-called sharing economy business model. The aim of this study is to investigate a new and emerging type of sharing economy model in which the roles of customers and service providers may change dramatically. This new model is called Collaborative Service Networks. We propose a mechanism for the Collaborative Service Networks business model. Uber and Airbnb, two successful and growing companies, were selected as case studies, and their business models are analyzed. Finally, we study the emergence of collaborative service networks in the manufacturing domain. Our findings point to a new manufacturing paradigm called social manufacturing.

Keywords: sharing economy, collaborative service networks, social manufacturing, manufacturing development

Procedia PDF Downloads 305
38688 Optimization of Process Parameters in Wire Electrical Discharge Machining of Inconel X-750 for Dimensional Deviation Using Taguchi Technique

Authors: Mandeep Kumar, Hari Singh

Abstract:

The effective optimization of machining process parameters dramatically affects the cost and production time of machined components, as well as the quality of the final products. This paper presents the optimization of a Wire Electrical Discharge Machining (WEDM) operation using Inconel X-750 as the work material. The objective considered in this study is minimization of the dimensional deviation. Six input process parameters of WEDM, namely spark gap voltage, pulse-on time, pulse-off time, wire feed rate, peak current, and wire tension, were chosen as variables to study the process performance. Taguchi's design of experiments methodology was used for planning and designing the experiments. The analysis of variance was carried out for the raw data as well as for the signal-to-noise (S/N) ratio. Four input parameters and one two-factor interaction were found to be statistically significant in their effects on the response of interest. Confirmation experiments were also performed to validate the predicted results.
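For a smaller-the-better response such as dimensional deviation, the Taguchi S/N ratio is computed per experimental run and then averaged by factor level to rank effects. A minimal sketch with invented measurements (the factor levels and deviations below are illustrative, not the paper's data):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) for smaller-the-better responses."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Invented dimensional deviations (mm) from 3 repetitions per run, with the
# pulse-on time level used in each run of an orthogonal-array design.
runs = [
    {"Ton": 1, "dev": [0.021, 0.019, 0.020]},
    {"Ton": 1, "dev": [0.018, 0.022, 0.020]},
    {"Ton": 2, "dev": [0.034, 0.031, 0.036]},
    {"Ton": 2, "dev": [0.029, 0.033, 0.030]},
]

for run in runs:
    run["sn"] = sn_smaller_the_better(run["dev"])

for level in (1, 2):
    sns = [r["sn"] for r in runs if r["Ton"] == level]
    print(f"pulse-on level {level}: mean S/N = {np.mean(sns):.2f} dB")
# The level with the higher mean S/N yields the smaller deviation.
```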

Keywords: ANOVA, DOE, inconel, machining, optimization

Procedia PDF Downloads 190
38687 The Role of a Novel DEAD-Box Containing Protein in NLRP3 Inflammasome Activation

Authors: Yi-Hui Lai, Chih-Hsiang Yang, Li-Chung Hsu

Abstract:

The inflammasome is a protein complex that modulates caspase-1 activity, resulting in proteolytic cleavage of proinflammatory cytokines, such as IL-1β and IL-18, into their bioactive forms. It has been shown that inflammasomes play a crucial role in the clearance of pathogenic infection and in tissue repair. However, dysregulated inflammasome activation contributes to a wide range of human diseases, such as cancers and auto-inflammatory diseases. Yet the regulation of NLRP3 inflammasome activation remains largely unknown. We discovered a novel DEAD-box protein, whose biological function has not previously been reported, that not only negatively regulates NLRP3 inflammasome activation, by interfering with NLRP3 inflammasome assembly and cellular localization, but also mitigates pyroptosis upon pathogen invasion. It is the first DEAD-box protein shown to be involved in modulating inflammasome activation. In our study, we found that caspase-1 activation and mature IL-1β production were greatly enhanced upon LPS challenge in THP-1 macrophages and bone marrow-derived macrophages (BMDMs) in which the DEAD box-containing protein had been deleted. In addition, this DEAD box-containing protein migrates from the nucleus to the cytoplasm upon LPS stimulation, which is required for its inhibitory role in NLRP3 inflammasome activation. The DEAD box-containing protein specifically interacts with the LRR motif of NLRP3 via its DEAD domain. Furthermore, given the crucial role of the NLRP3 LRR domain in recruiting NLRP3 to mitochondria and binding its adaptor ASC, we found that the interaction of NLRP3 and ASC was downregulated in the presence of the DEAD box-containing protein. Beyond this mechanistic study, we also found that this DEAD box protein protects host cells from inflammasome-triggered cell death in response to a broad range of pathogens, such as Candida albicans and Streptococcus pneumoniae, involved in nosocomial infections and severe fever and shock. Collectively, our results suggest that this novel DEAD box molecule might be a key therapeutic target for various infectious diseases.

Keywords: inflammasome, inflammation, innate immunity, pyroptosis

Procedia PDF Downloads 269
38686 The Optical OFDM Equalization Based on the Fractional Fourier Transform

Authors: A. Cherifi, B. S. Bouazza, A. O. Dahman, B. Yagoubi

Abstract:

Transmission over optical channels introduces inter-symbol interference (ISI) as well as inter-channel (or inter-carrier) interference (ICI). To decrease the effects of ICI, this paper proposes an equalizer for the optical OFDM system based on the fractional Fourier transform (FrFT). In this FrFT-OFDM system, the traditional Fourier transform is replaced by the fractional Fourier transform to modulate and demodulate the data symbols. The proposed equalizer samples the received signal at different time instants within each symbol period. Theoretical analysis and numerical simulations are discussed.
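One simple way to prototype the modulation idea is to build a fractional power of the unitary DFT matrix from its eigendecomposition and use the order +a / -a pair as demodulator/modulator. The sketch below does exactly that; note it is a branch-dependent toy construction (production DFrFT implementations normally use Hermite-ordered eigenvectors or chirp decompositions), and the channel here is ideal:

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary DFT matrix
lam, V = np.linalg.eig(F)
Vinv = np.linalg.inv(V)
theta = np.angle(lam)

def frft(x, a):
    """Order-a fractional DFT via eigenvalue phase scaling."""
    return V @ (np.exp(1j * a * theta) * (Vinv @ x))

# Sanity check: order 1 reproduces the ordinary unitary DFT.
x = np.random.default_rng(1).standard_normal(N)
assert np.allclose(frft(x, 1.0), F @ x, atol=1e-6)

# FrFT-OFDM round trip over an ideal channel: modulate with order -a,
# demodulate with order +a; the QPSK symbols are recovered exactly.
a = 0.8
sym = np.exp(1j * np.pi / 2 * np.random.default_rng(2).integers(0, 4, N))
tx = frft(sym, -a)
rx_sym = frft(tx, +a)
assert np.allclose(rx_sym, sym, atol=1e-6)
print("round trip OK, max error:", np.abs(rx_sym - sym).max())
```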

Keywords: OFDM, fractional Fourier transform, internet and information technology

Procedia PDF Downloads 386
38685 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements that modify or reinforce a scalable property of the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements of the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own and without context, have semantic content associated with negative emotion, but in particular cases may function as intensifiers (e.g., borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process it. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishing over time). It was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. A closer look at the different domains of the analysed corpus also made it clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
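The core of the frequency and collocation step is simple to sketch. Below is a toy version of that pipeline; the corpus lines and lexicon entries are invented, and the real study used the 'magyarlanc' toolkit for lemmatisation before counting:

```python
import re
from collections import Counter

# Toy lexicon of negative emotive intensifiers (lemmas).
intensifiers = {"borzasztóan", "rettenetesen", "szörnyen"}

# (year, sentence) pairs standing in for the historical corpus.
corpus = [
    (1840, "a ház borzasztóan régi és sötét"),
    (1905, "az előadás borzasztóan jó volt"),
    (1998, "ez a film rettenetesen jó és szörnyen izgalmas"),
]

freq_by_period = Counter()
collocates = Counter()
for year, sentence in corpus:
    tokens = re.findall(r"\w+", sentence.lower())
    period = (year // 50) * 50              # 50-year bins
    for i, tok in enumerate(tokens):
        if tok in intensifiers:
            freq_by_period[period] += 1
            if i + 1 < len(tokens):         # right-neighbour collocate
                collocates[(tok, tokens[i + 1])] += 1

print(freq_by_period.most_common())
print(collocates.most_common())
```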

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 213
38684 A Posteriori Analysis of the Spectral Element Discretization of Heat Equation

Authors: Chor Nejmeddine, Ines Ben Omrane, Mohamed Abdelwahed

Abstract:

In this paper, we present an a posteriori analysis of the discretization of the heat equation by the spectral element method. We apply Euler's implicit scheme in time and the spectral method in space. We propose two families of error indicators, both built from the residual of the equation, and we prove that they satisfy optimal estimates. We present some numerical results which are coherent with the theoretical ones.
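For orientation, the time-semidiscrete problem and the generic shape of the two indicator families can be written as follows; this is a schematic rendering consistent with standard residual-based a posteriori theory, not the authors' exact definitions or constants:

```latex
% Implicit Euler in time for the heat equation \partial_t u - \Delta u = f:
\frac{u_h^{n} - u_h^{n-1}}{\tau_n} - \Delta u_h^{n} = f^{n},
\qquad n = 1, \dots, N .

% Time-error indicator (one per step), built from consecutive iterates:
\eta_n^{\mathrm{time}} = \tau_n^{1/2}\,
  \bigl\lVert u_h^{n} - u_h^{n-1} \bigr\rVert_{H^1(\Omega)} .

% Space-error indicator (one per element K), built from the residual:
\eta_{n,K}^{\mathrm{space}} = h_K\,
  \Bigl\lVert f^{n} - \frac{u_h^{n} - u_h^{n-1}}{\tau_n}
  + \Delta u_h^{n} \Bigr\rVert_{L^2(K)} .
```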

Keywords: heat equation, spectral elements discretization, error indicators, Euler

Procedia PDF Downloads 288
38683 Empirical Green’s Function Technique for Accelerogram Synthesis: The Problem of the Use for Marine Seismic Hazard Assessment

Authors: Artem A. Krylov

Abstract:

Instrumental seismological research in offshore areas is complicated and expensive, which leads to a lack of strong motion records in most offshore regions. At the same time, the number of offshore industrial infrastructure objects, such as oil rigs and subsea pipelines, is constantly increasing. The empirical Green's function technique has proved very effective for accelerogram synthesis when the seismic wave propagation medium is poorly described. However, selecting a suitable small-earthquake record to serve as the empirical Green's function is a problem in offshore regions, because seafloor instrumental seismological investigations are usually short and yield only weak micro-earthquake recordings. An approach based on moving-average smoothing in the frequency domain is presented for the preliminary processing of weak micro-earthquake records before they are used as empirical Green's functions. The method results in a significant waveform correction for the modeled event. A case study of the 2009 L'Aquila earthquake is used to demonstrate the suitability of the method. This work was supported by the Russian Foundation for Basic Research (project № 18-35-00474 mol_a).
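The preliminary processing step reduces, in essence, to smoothing the amplitude spectrum of the weak record while keeping its phase. A minimal sketch with a synthetic record standing in for real data; the sampling rate, decay, and smoothing width are illustrative:

```python
import numpy as np

def smooth_amplitude_spectrum(x, width=9):
    """Moving-average smoothing of the FFT amplitude, phase preserved."""
    X = np.fft.rfft(x)
    amp, phase = np.abs(X), np.angle(X)
    kernel = np.ones(width) / width
    amp_smoothed = np.convolve(amp, kernel, mode="same")
    return np.fft.irfft(amp_smoothed * np.exp(1j * phase), n=len(x))

# Synthetic noisy micro-earthquake record (100 Hz sampling, 20 s window).
rng = np.random.default_rng(0)
t = np.arange(0, 20.0, 0.01)
record = (np.exp(-0.5 * t) * np.sin(2 * np.pi * 3.0 * t)
          + 0.3 * rng.standard_normal(t.size))
greens_proxy = smooth_amplitude_spectrum(record, width=15)
print(record.std(), greens_proxy.std())
```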

Keywords: accelerogram synthesis, empirical Green's function, marine seismology, microearthquakes

Procedia PDF Downloads 308
38682 Improving Human Hand Localization in Indoor Environment by Using Frequency Domain Analysis

Authors: Wipassorn Vinicchayakul, Pichaya Supanakoon, Sathaporn Promwong

Abstract:

Human hand localization is investigated using radar cross section (RCS) measurements and a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements are carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz to cover Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad. The initial distance of 19.50 cm is identical to the heights of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effect of the Rx angle relative to the hand is considered. The RCS input parameters are compared with power loss parameters at each frequency. The results show that the performance of the RCS input parameters at the second distance (29.50 cm) at 3 GHz is better than the others.
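The matching stage is a straightforward nearest-fingerprint search. A minimal sketch, with an invented fingerprint database of RCS-versus-frequency vectors per keypad position (the values are illustrative, not measured data):

```python
import numpy as np

def locate(measured, fingerprints):
    """Return the key whose stored fingerprint has minimum RMS error."""
    best_key, best_rms = None, np.inf
    for key, ref in fingerprints.items():
        rms = np.sqrt(np.mean((measured - ref) ** 2))
        if rms < best_rms:
            best_key, best_rms = key, rms
    return best_key, best_rms

# Invented database: RCS (dBsm) at 5 frequency points for 3 keypad positions.
fingerprints = {
    "key_1": np.array([-32.1, -30.4, -29.8, -31.2, -33.0]),
    "key_2": np.array([-28.7, -27.9, -29.1, -30.5, -31.8]),
    "key_3": np.array([-35.2, -34.1, -33.6, -34.8, -36.1]),
}
measured = np.array([-28.9, -28.2, -29.0, -30.1, -31.5])  # noisy key_2 reading
print(locate(measured, fingerprints))  # -> ('key_2', ...)
```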

Keywords: radar cross section, fingerprint-based localization, minimum root mean square (RMS) error matching algorithm, touchless keypad model

Procedia PDF Downloads 332
38681 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

Solving the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic solution approach, i.e., an implicit single-step time integration method employed with an iteration procedure, and simulating the dynamic behavior of seismic isolators with existing nonlinear analytical models, such as differential equation models, can require significant computational effort. In order to reduce numerical computation, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model considered in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the time step of the imposed ground acceleration, thus avoiding stability problems in the proposed mixed method.
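The partitioned idea can be prototyped on a toy two-mass model: the base mass, with a generic smooth exponential-type isolator law standing in for the authors' model, is advanced with the explicit central difference method, and the linear superstructure mass with Newmark's constant average acceleration method, driven by the freshly computed base motion. All parameters below are illustrative assumptions, not the paper's case study:

```python
import numpy as np

mb, ms = 2.0e5, 1.0e6            # base and superstructure masses (kg)
ks, cs = 4.0e7, 4.0e5            # superstructure stiffness (N/m), damping (N s/m)
dt, nsteps = 0.001, 5000

def f_iso(u):
    """Generic smooth exponential-type isolator law (illustrative only;
    rate-independent and non-hysteretic for brevity)."""
    k_post, f0, alpha = 2.0e6, 3.0e5, 50.0
    return k_post * u + f0 * (1.0 - np.exp(-alpha * abs(u))) * np.sign(u)

t = np.arange(nsteps) * dt
ag = 2.0 * np.sin(2.0 * np.pi * 1.0 * t)        # toy ground acceleration (m/s^2)

ub, us = np.zeros(nsteps), np.zeros(nsteps)     # displacements relative to ground
vb = vs = acc_s = 0.0
beta, gamma = 0.25, 0.5                          # Newmark average acceleration
keff = ks + gamma / (beta * dt) * cs + ms / (beta * dt**2)

for n in range(1, nsteps - 1):
    # Explicit central difference step for the base DOF (nonlinear isolator).
    ab = (-mb * ag[n] - f_iso(ub[n]) + ks * (us[n] - ub[n]) + cs * (vs - vb)) / mb
    ub[n + 1] = 2.0 * ub[n] - ub[n - 1] + dt**2 * ab
    vb = (ub[n + 1] - ub[n - 1]) / (2.0 * dt)

    # Implicit Newmark step for the linear superstructure DOF, using the new
    # base motion as input (no iterations needed: this DOF is linear).
    p = -ms * ag[n + 1] + ks * ub[n + 1] + cs * vb
    peff = (p
            + ms * (us[n] / (beta * dt**2) + vs / (beta * dt)
                    + (0.5 / beta - 1.0) * acc_s)
            + cs * (gamma / (beta * dt) * us[n] + (gamma / beta - 1.0) * vs
                    + dt * (0.5 * gamma / beta - 1.0) * acc_s))
    us[n + 1] = peff / keff
    acc_new = ((us[n + 1] - us[n]) / (beta * dt**2) - vs / (beta * dt)
               - (0.5 / beta - 1.0) * acc_s)
    vs += dt * ((1.0 - gamma) * acc_s + gamma * acc_new)
    acc_s = acc_new

print(f"peak base displacement: {np.abs(ub).max():.4f} m")
print(f"peak superstructure drift: {np.abs(us - ub).max():.4f} m")
```

The soft isolator keeps the explicit critical time step large, which is exactly the property the abstract exploits.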

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 268
38680 The AI Method and System for Analyzing Wound Status in Wound Care Nursing

Authors: Ho-Hsin Lee, Yue-Min Jiang, Shu-Hui Tsai, Jian-Ren Chen, Mei-Yu XU, Wen-Tien Wu

Abstract:

This project presents an AI-based method and system for wound status analysis. The system uses a three-in-one sensor device combining color, temperature, and 3D sensing to provide wound information up to 2 mm below the surface, such as redness, heat, and blood circulation. The system has a 90% accuracy rate, requiring at most one manual correction in 70% of cases, with a one-second delay. The system also provides an offline application that allows manual correction of the wound bed range, using color-based guidance to estimate wound bed size with 96% accuracy and at most one manual correction in 96% of cases, again with a one-second delay. Additionally, AI-assisted wound bed range selection handles 100% of cases without manual intervention at an accuracy rate of 76%, while AI-based wound tissue type classification achieves an 85.3% accuracy rate across five categories. The AI system also includes similar-case search and expert recommendation capabilities. For AI-assisted wound range selection, the system uses Wi-Fi 6 technology, increasing data transmission speeds by a factor of 22. The project aims to save up to 64% of the time required for manual wound record keeping and to reduce the estimated time to assess wound status by 96%, with an 80% accuracy rate. Overall, the proposed AI method and system integrate multiple sensors to provide accurate wound information and offer offline and online AI-assisted wound bed size estimation and wound tissue type classification. The system reduces the delay time to one second, reduces the number of manual corrections required, saves time on wound record keeping, and increases data transmission speed, all of which have the potential to significantly improve the efficiency and accuracy of wound care and management.

Keywords: wound status analysis, AI-based system, multi-sensor integration, color-based guidance

Procedia PDF Downloads 93
38679 Statistical Approach to Identify Stress and Biases Impairing Decision-Making in High-Risk Industry

Authors: Ph. Fauquet-Alekhine

Abstract:

Decision-making occurs several times an hour when working in high-risk industry, and an erroneous choice might have undesirable outcomes for people and the environment surrounding the industrial plant. Industrial decisions are very often made in a context of acute stress. Time pressure is a crucial stressor, sometimes leading decision makers to rush the decision-making process or, when that is not possible, to shift to the simplest strategy. We thus found it worthwhile to update the characterization of the stress factors impairing decision-making at Chinon Nuclear Power Plant (France) in order to optimize decision-making contexts and/or the associated processes. The investigation was based on the analysis of reports addressing safety events over the last three years. Among 93 reports, those explicitly addressing decision-making issues were identified. Each event was characterized in terms of three criteria: stressors, biases impairing decision-making, and weaknesses of the decision-making process. The statistical analysis showed that the biases were distributed over 10 categories, among which the hypothesis confirmation bias was clearly salient. No significant correlation was found between the criteria. The analysis indicated that the main stressor was time pressure and highlighted an unexpected form of stressor: the trust asymmetry principle of the expert. The analysis led to the conclusion that this stressor impairs decision-making from a psychological angle rather than a physiological one: it induces a defensive bias of self-esteem and self-protection associated with a confirmation bias. This leads to the hypothesis that this stressor, and other stressors of the same kind, can intervene in some cases without being detected. Further investigations addressing these hypotheses are being considered. The analysis also led to the conclusion that dealing with these issues requires i) decision-making methods that are well known to the workers and automated, and ii) decision-making tools that are well known and strictly applied. Training was adjusted accordingly.

Keywords: bias, expert, high risk industry, stress

Procedia PDF Downloads 98
38678 An Application of Content Analysis, SWOT Analysis, and the TOPSIS Method: A Case Study of the 'Tourism Ambassador' Program in Indonesia

Authors: Gilang Maulana Majid

Abstract:

If a government program remains scientifically uncontested for a long time, its effects are likely to fall far short of expectations, as there is no concrete evaluation of the steps being taken. This article identifies how aptly such a theory describes the case of the 'tourism ambassador' program in Indonesia. Established as a tourism promotion vehicle by many regional governments in Indonesia, this program is heavily criticized for being ineffective despite the large budgets spent on it annually. Taking the program as a case study, this article applies content analysis, SWOT analysis, and TOPSIS as data analysis methods, with a total of 56 tourism ambassadors invited to become coders, respondents, and/or interviewees. The study presents a SWOT profile of the program, identifies four strategies that could optimize the program's effects, and uses TOPSIS to prioritize among these strategies based on the preferences of the participating tourism ambassadors. It is found that incorporating technology, such as the creation of an online platform, is the most favored approach to solving the problems of the tourism ambassador program. However, based on the costs and benefits of each strategy presented in this study, the alternatives involve trade-offs with one another.
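TOPSIS itself is compact: normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by relative closeness. A minimal sketch with invented scores and weights for four candidate strategies (not the study's actual data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (cols) by relative closeness."""
    m = matrix / np.linalg.norm(matrix, axis=0)     # vector normalisation
    v = m * weights                                 # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                  # closeness in [0, 1]

# Invented scores for 4 strategies over 3 criteria:
# expected benefit (max), implementation cost (min), time to deploy (min).
scores = np.array([
    [8.0, 6.0, 9.0],   # online platform
    [6.0, 4.0, 5.0],   # curriculum reform
    [7.0, 7.0, 6.0],   # joint promotion events
    [5.0, 3.0, 4.0],   # selection-process overhaul
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])

closeness = topsis(scores, weights, benefit)
print(np.argsort(closeness)[::-1], np.round(closeness, 3))
```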

Keywords: Indonesia, optimization strategies, 'Tourism Ambassador' program, SWOT-TOPSIS

Procedia PDF Downloads 147
38677 A Qualitative Study Examining the Process of EFL Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education, one that prepares students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead meaningful lives. In response, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum that is in line with the pedagogical shift from a teaching-centered to a learning-centered approach. Accordingly, there has been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the processes of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional TESOL context: the Department of English at a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data were analyzed qualitatively following Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 527
38676 A Qualitative Study Examining the Process of Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education, one that prepares students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead meaningful lives. In response, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum that is in line with the pedagogical shift from a teaching-centered to a learning-centered approach. Accordingly, there has been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the processes of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional TESOL context: the Department of English at a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data were analyzed qualitatively following Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 432
38675 Classification of Random Doppler-Radar Targets during the Surveillance Operations

Authors: G. C. Tikkiwal, Mukesh Upadhyay

Abstract:

During surveillance operations in war or peacetime, the radar operator sees a scatter of targets on the screen. A target may be a tracked vehicle, such as a T72 or BMP tank, or a wheeled vehicle, such as an ALS, TATRA, 2.5-tonne truck, or Shaktiman, or moving troops, convoys, etc. The radar operator selects one of the promising targets in single target tracking (STT) mode. Once the target is locked, the operator hears a characteristic audible signal in his headphones. Drawing on experience and training gained over time, the operator then identifies the target. But this process is cumbersome and depends solely on the skills of the operator, and so it may lead to misclassification of the object. In this paper, we present a technique using mathematical and statistical methods, namely the fast Fourier transform (FFT) and principal component analysis (PCA), to identify such targets. The classification process is based on transforming the audible signature of the target into musical octave notes. The whole methodology is then automated in suitable software. This automation increases the efficiency of identification by reducing the chance of misclassification. The study is based entirely on live data.
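A sketch of the signal chain described, with synthetic Doppler audio tones standing in for live radar audio; the tone frequencies and class assignments are purely illustrative, and the frequency-to-note mapping uses the standard 12-tone equal-temperament formula:

```python
import numpy as np
from sklearn.decomposition import PCA

FS = 8000  # audio sampling rate (Hz)

def dominant_freq(signal):
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    return freqs[spec.argmax()]

def octave_note(f, ref=440.0):
    """Map a frequency to semitones above/below A4 (12-TET)."""
    return 12.0 * np.log2(f / ref)

# Synthetic 'audible signatures': tracked vehicles given lower Doppler tones,
# wheeled vehicles higher tones (invented, for illustration only).
rng = np.random.default_rng(3)
t = np.arange(0, 0.5, 1 / FS)
signatures, labels = [], []
for f0, label in [(180, "tracked"), (200, "tracked"),
                  (420, "wheeled"), (450, "wheeled")]:
    signatures.append(np.sin(2 * np.pi * f0 * t)
                      + 0.2 * rng.standard_normal(t.size))
    labels.append(label)

features = np.array([[dominant_freq(s), octave_note(dominant_freq(s))]
                     for s in signatures])
components = PCA(n_components=1).fit_transform(features)
for label, note, c in zip(labels, features[:, 1], components[:, 0]):
    print(f"{label}: {note:+.1f} semitones vs A4, PC1 = {c:+.1f}")
```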

Keywords: radar target, FFT, principal component analysis, eigenvector, octave-notes, DSP

Procedia PDF Downloads 382
38674 Importance of Road Infrastructure on the People Live in Afghanistan

Authors: Mursal Ibrahim Zada

Abstract:

Since 2001, the new Government of Afghanistan has made the improvement of transportation in rural areas one of the key issues for the development of the country. Since then, about 17,000 km of rural roads have been planned for construction across the country. This thesis assesses the impact of rural road improvement on the development of rural communities and housing facilities. Specifically, this study aims to show that improved roads lead to improvements in the community, which in turn have a positive effect on the lives of rural people. To this end, a questionnaire survey was conducted in March 2015 among residents of four different districts of Kabul province, Afghanistan, where road projects had been constructed in recent years. The collected data were analyzed using regression analysis considering different factors such as land price, waiting time at the station, travel time to the city, number of employed family members, and so on. Three models were developed to demonstrate the relationship between these factors before and after the improvement of rural transportation. The results showed significant positive changes in land prices and housing facilities, travel time to the city, waiting time at the station, number of employed family members, fare per trip to the city, and number of trips to the city per month after the paving of the road. The results indicate that the improvement of transportation has a significant impact on the improvement of the community in various respects, especially on the price of land and housing and on travel time to the city.
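A sketch of the kind of regression used, with invented survey rows; the variables echo those named above, but none of the numbers or coefficients reproduce the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented household survey rows: land price (per unit), travel time to the
# city (minutes), employed family members, and road status (0 before, 1 after).
df = pd.DataFrame({
    "land_price":  [120, 135, 150, 240, 260, 275, 110, 230],
    "travel_time": [90, 85, 80, 45, 40, 35, 95, 50],
    "employed":    [1, 1, 2, 2, 3, 3, 1, 2],
    "paved":       [0, 0, 0, 1, 1, 1, 0, 1],
})

model = smf.ols("land_price ~ paved + travel_time + employed", data=df).fit()
print(model.summary().tables[1])   # coefficient table
```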

Keywords: accessibility, Afghanistan, housing facility, rural area, land price

Procedia PDF Downloads 250
38673 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis

Authors: Asawari Alok Pawaskar

Abstract:

This study was aimed at investigating discrepancies observed for helical tomotherapy plans during quality assurance (QA) performed with the IBA Matrix detector. A selection of tomotherapy plans that initially failed the Matrix-based QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. To ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increased the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change the treatment time, but delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and the treatment site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater at higher modulation factors. Point-dose measurements were seen to be less susceptible to changes in pitch and modulation factors. Choosing the initial modulation factor used by the optimizer such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
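The gamma criterion used for these fluence comparisons can be sketched in one dimension: a reference point passes if some nearby evaluated point is simultaneously close in dose (3% of the global maximum) and in distance (3 mm). A simplified global-normalisation implementation with toy profiles:

```python
import numpy as np

def gamma_pass_rate_1d(x, d_ref, d_eval, dose_pct=3.0, dta_mm=3.0):
    """Simplified 1-D global gamma analysis; x in mm, doses on the same grid."""
    dd = dose_pct / 100.0 * d_ref.max()          # global dose criterion
    gammas = np.empty(len(x))
    for i in range(len(x)):
        term_dose = (d_eval - d_ref[i]) / dd
        term_dist = (x - x[i]) / dta_mm
        gammas[i] = np.sqrt(term_dose**2 + term_dist**2).min()
    return np.mean(gammas <= 1.0), gammas

# Toy fluence profiles on a 1 mm grid: evaluated profile slightly shifted.
x = np.arange(0.0, 100.0, 1.0)
d_ref = np.exp(-0.5 * ((x - 50.0) / 12.0) ** 2)
d_eval = np.exp(-0.5 * ((x - 51.5) / 12.0) ** 2)   # 1.5 mm shift

rate, _ = gamma_pass_rate_1d(x, d_ref, d_eval)
print(f"gamma (3%/3 mm) pass rate: {rate:.1%}")
```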

Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy

Procedia PDF Downloads 163
38672 The Influence of Imposter Phenomenon on the Experiences of Intimacy in Non-Binary Young Adults

Authors: Muskan Jain, Baiju Gopal

Abstract:

Objectives: Intimacy in interpersonal relationships, which can be described as feelings of closeness, connection, and belonging within relationships, is integral to psychological health and everyday wellbeing and is influenced by an individual's gender identity as well as life experiences. The study explores the experiences of intimacy of non-binary young adults, a marginalized community at increased risk of developing the imposter phenomenon (IP), and examines the influence of IP on the development and sustenance of intimacy in relationships. Methods: The present study gathered detailed narratives from 10 non-binary young adults aged 18 to 25 in metropolitan cities of India. Thematic analysis was used for the data analysis. Results: Seven major themes emerged, revolving around internalized criticism and self-deprecating behavior, which create distance between partners. The four themes concerning the internalization of criticism are lack of social stability, invalidation by social units, adverse life experiences, and estrangement due to gender identity. The three themes that encapsulate major difficulties in relationships are limited self-disclosure, inhibition of physical needs, and fear of taking space. The findings have been critically compared and contrasted with the existing body of literature in the domain, which sets the agenda for further inquiry. Conclusion: It is important for future studies to capture the experiences of non-binary genders in India in order to provide better therapeutic support, assisting them in forming meaningful and authentic relationships and thus increasing overall wellbeing.

Keywords: imposter phenomenon, intimacy, internalized criticism, marginalized community

Procedia PDF Downloads 45
38671 Heuristic Classification of Hydrophone Recordings

Authors: Daniel M. Wolff, Patricia Gray, Rafael de la Parra Venegas

Abstract:

An unsupervised machine listening system is constructed and applied to a dataset of 17,195 30-second marine hydrophone recordings. The system is then heuristically supplemented with anecdotal listening, contextual recording information, and supervised learning techniques to reduce the number of false positives. Features for classification are assembled by extracting the following data from each of the audio files: the spectral centroid, root-mean-square values for each frequency band of a 10-octave filter bank, and mel-frequency cepstral coefficients in 5-second frames. In this way, both time- and frequency-domain information is contained in the features passed to a clustering algorithm. Classification is performed using the k-means algorithm followed by a k-nearest neighbors search. Different values of k are experimented with, in addition to different combinations of the available feature sets. The hypothesized class labels are 'primarily anthrophony' and 'primarily biophony', where the best class conforming to the former label has 104 members after heuristic pruning. This demonstrates how a large audio dataset has been made more tractable with machine learning techniques, forming the foundation of a framework designed to acoustically monitor and gauge biological and anthropogenic activity in a marine environment.
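A sketch of the feature extraction and clustering stages for such recordings, assuming the files are loadable as mono audio; the file names are placeholders, and octave-band RMS is computed directly from FFT bins rather than a true filter bank, for brevity:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def features_for(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None, mono=True)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # RMS per octave band (10 bands starting at ~20 Hz), via FFT bins.
    spec = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), 1 / sr)
    band_rms = []
    for k in range(10):
        lo, hi = 20 * 2**k, 20 * 2**(k + 1)
        band = spec[(freqs >= lo) & (freqs < hi)]
        band_rms.append(np.sqrt(np.mean(band**2)) if band.size else 0.0)

    return np.concatenate([[centroid], band_rms, mfcc])

# Cluster all recordings into the two hypothesised classes.
paths = ["rec_0001.wav", "rec_0002.wav"]        # placeholder file names
X = np.array([features_for(p) for p in paths])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # 0/1 cluster per recording, labelled by listening later
```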

Keywords: anthrophony, hydrophone, k-means, machine learning

Procedia PDF Downloads 155
38670 Tabu Search Algorithm for Ship Routing and Scheduling Problem with Time Window

Authors: Khaled Moh. Alhamad

Abstract:

This paper describes a tabu search heuristic for a ship routing and scheduling problem (SRSP). The method was developed to address the problem of loading cargos for many customers using heterogeneous vessels. Constraints relate to delivery time windows imposed by customers, the time horizon by which all deliveries must be made, and vessel capacities. The results of a computational investigation are presented. Solution quality and execution time are explored with respect to problem size and the parameters controlling the tabu search, such as tenure and neighbourhood size.
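The core tabu search loop is compact. Below is a generic sketch for a toy assignment version of the problem (cargos assigned to vessels under capacity limits, minimising a cost), with reverse moves made tabu for a fixed tenure; the cargo sizes, capacities, and cost function are invented, not the paper's SRSP formulation:

```python
import random

random.seed(7)
cargo_size = [4, 7, 3, 5, 6, 2]          # toy cargo volumes
capacity = [10, 12, 8]                   # heterogeneous vessel capacities

def cost(assign):
    """Load imbalance plus a heavy penalty for capacity violations."""
    loads = [0.0] * len(capacity)
    for c, v in enumerate(assign):
        loads[v] += cargo_size[c]
    over = sum(max(0.0, loads[v] - capacity[v]) for v in range(len(capacity)))
    return max(loads) - min(loads) + 100.0 * over

def tabu_search(iters=200, tenure=7):
    current = [random.randrange(len(capacity)) for _ in cargo_size]
    best, best_cost = current[:], cost(current)
    tabu = {}                            # move -> iteration at which it expires
    for it in range(iters):
        candidates = []
        for c in range(len(cargo_size)):         # neighbourhood: reassign
            for v in range(len(capacity)):       # one cargo to one vessel
                if v == current[c]:
                    continue
                neighbour = current[:]
                neighbour[c] = v
                nc = cost(neighbour)
                # Aspiration: allow a tabu move if it beats the global best.
                if tabu.get((c, v), -1) < it or nc < best_cost:
                    candidates.append((nc, (c, v), neighbour))
        nc, (c, v), neighbour = min(candidates)
        tabu[(c, current[c])] = it + tenure      # forbid undoing this move soon
        current = neighbour
        if nc < best_cost:
            best, best_cost = current[:], nc
    return best, best_cost

print(tabu_search())
```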

Keywords: heuristic, scheduling, tabu search, transportation

Procedia PDF Downloads 494
38669 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in South Central Part of Okinawa Pref

Authors: Kazuki Kohama, Hiroko Ono

Abstract:

Rainfall in Japan is gradually increasing every year, according to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report. This means that the difference in rainfall between the rainy season and the rest of the year is increasing. In addition, a clear increasing trend in short-duration heavy rain has appeared. In recent years, natural disasters have caused enormous human injury in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural and non-structural emergency disaster prevention measures under a social reconstruction vision centered on water disaster prevention awareness. Okinawa Prefecture, located in a subtropical region, suffers torrential rain and water disasters such as river floods several times a year, caused by specific rivers among its 97 rivers in total. Limited channel capacity and narrow width are characteristic of rivers in Okinawa and easily lead to flooding during heavy rain. This study focuses on the Kohatu River, one of these specific rivers. In fact, its water level rises well above the levee almost once a year, though without damage to the surrounding buildings. In some cases, however, the water level has reached the ground floor height of houses; this has happened nine times to date. The purpose of this research is to figure out the relationship between precipitation, surface outflow, and the total treatment water quantity of the Kohatu River. Hydrological analysis is complicated and needs specific details and data, so the method mainly uses Geographic Information System (GIS) software and an outflow analysis system. First, we extract the watershed and divide it into 23 catchment areas to understand how much surface outflow reaches the runoff point in each 10-minute interval. Second, we create a unit hydrograph indicating the surface outflow as a function of flow area and time. This index shows the maximum amount of surface outflow at 2400 to 3000 seconds. Lastly, we compare the values estimated from the unit hydrograph with measured values. The measured value is usually lower than the estimated value, presumably because of evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. Based on these, we could clarify the flood timing and the amount of surface outflow.
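The unit-hydrograph step amounts to a discrete convolution of rainfall excess with the unit hydrograph ordinates. A minimal sketch on a 10-minute time step; the ordinates and storm pulses are invented, chosen only so that the response peaks around 2400-3000 s as described above:

```python
import numpy as np

DT_MIN = 10  # time step (minutes), matching the 10-minute rainfall data

# Invented unit hydrograph ordinates (m^3/s per mm of rainfall excess),
# peaking around 40-50 min (i.e. 2400-3000 s) after the rainfall pulse.
uh = np.array([0.0, 0.05, 0.15, 0.30, 0.45, 0.40, 0.25, 0.12, 0.05, 0.01])

# Rainfall excess (mm) per 10-minute interval for one storm event.
rain_excess = np.array([0.0, 2.0, 5.0, 8.0, 3.0, 1.0, 0.0])

runoff = np.convolve(rain_excess, uh)      # direct runoff hydrograph (m^3/s)
t = np.arange(runoff.size) * DT_MIN
peak = runoff.argmax()
print(f"peak discharge {runoff[peak]:.2f} m^3/s at t = {t[peak]} min")
```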

Keywords: disaster prevention, water disaster, river flood, GIS software

Procedia PDF Downloads 127