Search results for: event methodology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6391


5701 Probabilistic and Stochastic Analysis of a Retaining Wall for C-Φ Soil Backfill

Authors: André Luís Brasil Cavalcante, Juan Felix Rodriguez Rebolledo, Lucas Parreira de Faria Borges

Abstract:

A methodology for the probabilistic analysis of active earth pressure on a retaining wall for c-Φ soil backfill is described in this paper. The Rosenblueth point estimate method is used to estimate the failure probability of a gravity retaining wall. The basic principle of this methodology is to represent each variable in the safety analysis by two point estimates derived from its mean value and standard deviation. The simplicity of this framework assures its wide application. The calculation requires 2ⁿ repetitions of the analysis, since the system is governed by n variables. In this study, a probabilistic model based on the Rosenblueth approach for the computation of the overturning probability of failure of a retaining wall is presented. The obtained results show the advantages of this kind of model in comparison with the deterministic solution. In a relatively easy way, the uncertainties in the wall and fill parameters are taken into account, and some practical results can be obtained for the retaining structure design.
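
To make the 2ⁿ-evaluation scheme concrete, the following is a minimal illustrative sketch (not the authors' implementation) of Rosenblueth's two-point estimate method for a generic performance function; the safety-factor expression and parameter values are invented placeholders.

```python
import itertools
import numpy as np

def rosenblueth_two_point(g, means, stds):
    """Rosenblueth point estimate method: evaluate g at all 2^n combinations
    of (mean - std, mean + std) and return the mean and standard deviation
    of the response (equal weights, uncorrelated inputs assumed)."""
    n = len(means)
    samples = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        samples.append(g(*x))
    samples = np.asarray(samples)
    return samples.mean(), samples.std()

# Placeholder overturning safety factor for a gravity wall (illustrative only):
# a schematic resisting/driving ratio in friction angle phi (deg),
# cohesion c (kPa) and backfill unit weight gamma (kN/m^3).
def safety_factor(phi, c, gamma):
    ka = (1 - np.sin(np.radians(phi))) / (1 + np.sin(np.radians(phi)))
    driving = ka * gamma              # schematic overturning term
    resisting = 25.0 + 0.5 * c        # schematic resisting term
    return resisting / driving

mean_fs, std_fs = rosenblueth_two_point(
    safety_factor, means=[30.0, 10.0, 18.0], stds=[3.0, 2.0, 1.0])
print(f"E[FS] = {mean_fs:.2f}, sigma[FS] = {std_fs:.2f}")
# Assuming a distribution for FS, P(failure) = P(FS < 1) can then be estimated.
```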

Keywords: retaining wall, active earth pressure, backfill, probabilistic analysis

Procedia PDF Downloads 411
5700 In-Service Training to Enhance Community Based Corrections

Authors: Varathagowry Vasudevan

Abstract:

This paper attempts to demonstrate the importance of capacity building of para-professionals in community-based corrections, with in-service training as a responsive methodology for enhancing family and child welfare and promoting best practices. The Diploma programme in community-based corrections initiated by the National Institute of Social Development has been engaged in the task of training quality personnel knowledgeable in best practices and fieldwork skills in community-based correction. To protect families and children and enhance best practices, the National Institute of Social Development, with support from the department of community-based corrections, initiated the Diploma programme to update the knowledge, skills, attitudes and mindset of the work supervisors employed at the department. This study, based on reflective practice, illustrates the effectiveness of the curriculum of the in-service training programme as a tool to enhance the capacities of the relevant officers in Sri Lanka. The data for the study were obtained from participants and the coordinator through classroom discussions and key informant interviews. The study showed that the use of an appropriate tailor-made curriculum and field practice manual by the officers during the training depended heavily on the provision of appropriate administrative facilities, passion, and a teaching methodology that promotes the capacity to adopt best practices. It further demonstrated that a professional social work response, strengthening families within the legal framework, was grounded in the proper skills imbibed through training in appropriate methodology practiced in the field under guided supervision.

Keywords: capacity building, community-based corrections, in-service training, paraprofessionals

Procedia PDF Downloads 153
5699 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review

Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon

Abstract:

The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions related to quality assessment and climate effects. An empirical methodology in modeling and mapping was considered for understanding lake hydrodynamics and for visualizing long-term observational daily, monthly, and yearly mean datasets using geographical information system (GIS) and Comsol techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce the results. Linear regression analyses were developed to build climate scenarios and a linear trend on lake rainfall data over a long period. The potential evapotranspiration rate was described by MODIS and the Thornthwaite method. The rainfall effect on lake water level was modeled with Partial Differential Equations (PDE), and water quality was characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures, that rainfall is high during cool years, and that high temperatures are associated with below-average and average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more correlated with temperature, and cloud cover is more correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake water surface has affected the lake level. Onshore and offshore conditions were characterized using nutrient data from the initial literature. The study recommends that further work should consider full lake bathymetry development with flow analysis and water balance, hydro-meteorological processes, solute transport, wind hydrodynamics, pollution, and eutrophication, which are crucial for lake water quality, climate impact assessment, and water sustainability.
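
For reference, here is a minimal sketch of the Thornthwaite potential evapotranspiration estimate mentioned above, using the standard uncorrected monthly formulation; the temperature values are invented placeholders and no day-length correction factor is applied.

```python
import numpy as np

def thornthwaite_pet(monthly_temp_c):
    """Uncorrected Thornthwaite PET (mm/month) from 12 mean monthly temperatures (°C)."""
    t = np.clip(np.asarray(monthly_temp_c, dtype=float), 0.0, None)
    heat_index = np.sum((t / 5.0) ** 1.514)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return 16.0 * (10.0 * t / heat_index) ** a

# Placeholder monthly mean temperatures for a basin station (°C), illustration only.
temps = [23.1, 23.4, 23.2, 22.8, 22.3, 21.8, 21.5, 22.0, 22.6, 22.9, 22.8, 22.9]
print(np.round(thornthwaite_pet(temps), 1))
```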

Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration

Procedia PDF Downloads 94
5698 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines and multinomial goodness. The entire datasets were imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid unnecessary memory lapses. Since the data are expected to grow continuously over a period of time, this work primarily proposes a new framework model which can be trained on and adapt itself to new data and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classifiers built from the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology as appropriate for this kind of research work.
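
As a hedged illustration of the kind of SVM workflow described (not the authors' pipeline), the sketch below trains a support vector classifier on tabular accident records; the file name, feature columns, and label are assumptions, and the fields are taken to be numerically coded.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical file with numerically coded fields; the real UK accident data
# uses different field names and coding schemes.
df = pd.read_csv("accidents.csv")
X = df[["speed_limit", "weather", "light_conditions", "road_type"]]
y = df["severity"]                          # e.g. slight / serious / fatal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# RBF-kernel SVM with feature scaling, a common default for numeric features.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```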

Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)

Procedia PDF Downloads 267
5697 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted with the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is that fewer design iterations are needed for design convergence while obtaining higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs associated with them. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained through a robust generation and exploration of design alternatives. The process must consider that design usually involves various individuals, who take decisions affecting one another; an effective coordination among these decision-makers is critical, and finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission's concept maturity level. This speed-up is obtained thanks to a guided negotiation space exploration, which involves autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms, as sketched below. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is used to model the negotiation process and reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to efficiently and rapidly search for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness against change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
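
The following is a minimal sketch of the underlying idea of searching a negotiation space for Pareto-efficient trades between stakeholder utilities; the utility functions, design variables, and the plain random sampling are placeholders standing in for the paper's multi-attribute utility models and evolutionary algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical stakeholder utilities over a 2-D design vector x in [0, 1]^2;
# both are to be maximized.
def utility_science(x):   return 1.0 - (x[0] - 0.8) ** 2 - 0.5 * (x[1] - 0.3) ** 2
def utility_platform(x):  return 1.0 - (x[0] - 0.2) ** 2 - 0.5 * (x[1] - 0.7) ** 2

designs = rng.random((500, 2))                       # sampled negotiation space
scores = np.array([[utility_science(x), utility_platform(x)] for x in designs])

def pareto_front(scores):
    """Indices of non-dominated points (all objectives maximized)."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores >= s, axis=1) & np.any(scores > s, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
# A group compromise could then be picked from the front, e.g. the design
# maximizing the sum (or the product) of stakeholder utilities.
best = front[int(np.argmax(scores[front].sum(axis=1)))]
print("compromise design:", designs[best], "utilities:", scores[best])
```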

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 132
5696 Formal Development of Electronic Identity Card System Using Event-B

Authors: Tomokazu Nagata, Jawid Ahmad Baktash

Abstract:

The goal of this paper is to explore the use of formal methods for an Electronic Identity Card System. Nowadays, one of the core research directions in a constantly growing distributed environment is the improvement of the communication process, and the responsibility for proper verification becomes crucial. Formal methods can play an essential role in the development and testing of systems. The paper presents two different methodologies for assessing correctness. Our first approach employs abstract interpretation techniques for creating a trace-based model of the Electronic Identity Card System. The model was used for building a semi-decidable procedure for verifying the system model. We also developed the code for the eID system, covering three parts: login to the system with sending of an acknowledgment from the user side, receiving of all information from the server side, and logout from the system. The new concepts of impasse and spawned sessions that we introduced led our research to original statements about the intruder's knowledge and the eID system coding with respect to secrecy. Furthermore, we demonstrated that there is a bound on the number of sessions needed for the analysis of the system. Electronic identity (eID) cards promise to supply a universal, nation-wide mechanism for user authentication. Most European countries have started to deploy eID for government and private sector applications. Are government-issued electronic ID cards the proper way to authenticate users of online services? We use the eID project as a showcase to discuss eID from an application perspective. The new eID card has interesting design features: it is contactless, it aims to protect people's privacy to the extent possible, and it supports cryptographically strong mutual authentication between users and services. Privacy features include support for pseudonymous authentication and per-service controlled access to individual data items. The article discusses key concepts, the eID infrastructure, observed and expected problems, and open questions. The core technology seems ready for prime time, and government projects are deploying it to the masses, but application issues may hamper eID adoption for online applications.
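
The Event-B development itself is not reproduced here; purely as a loose illustration, the sketch below models the three session parts mentioned (login with acknowledgment, data retrieval, logout) as a guarded state machine and checks a simple invariant, which is the flavor of property a formal model would discharge as proof obligations. States, guards, and the invariant are assumptions, not the paper's machine.

```python
class EIDSession:
    """Toy abstraction of an eID session: events fire only when their guard holds."""
    def __init__(self):
        self.state = "IDLE"
        self.data_received = False

    def login(self, credentials_ok: bool):
        assert self.state == "IDLE", "guard: login only from IDLE"
        if credentials_ok:
            self.state = "AUTHENTICATED"   # server sends acknowledgment
        return self.state

    def receive_info(self):
        assert self.state == "AUTHENTICATED", "guard: data only after authentication"
        self.data_received = True
        return self.state

    def logout(self):
        assert self.state == "AUTHENTICATED", "guard: logout only when logged in"
        self.state = "IDLE"
        self.data_received = False
        return self.state

    def invariant(self):
        # Invariant: personal data is never held outside an authenticated session.
        return (not self.data_received) or self.state == "AUTHENTICATED"

s = EIDSession()
s.login(credentials_ok=True)
s.receive_info()
assert s.invariant()
s.logout()
assert s.invariant()
```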

Keywords: eID, event-B, Pro-B, formal method, message passing

Procedia PDF Downloads 229
5695 A Basic Modeling Approach for the 3D Protein Structure of Insulin

Authors: Daniel Zarzo Montes, Manuel Zarzo Castelló

Abstract:

Proteins play a fundamental role in biology, but their structure is complex, and it is a challenge for teachers to conceptually explain the differences between their primary, secondary, tertiary, and quaternary structures. On the other hand, there are currently many computer programs to visualize the 3D structure of proteins, but they require advanced training and knowledge. Moreover, it becomes difficult to visualize the sequence of amino acids in these models and how the protein conformation is reached. Given this drawback, a simple and instructive procedure is proposed in order to teach protein structure to undergraduate and graduate students. For this purpose, insulin has been chosen because it is a protein that consists of 51 amino acids, a relatively small number. The methodology consists of the use of plastic atom models, which are frequently used in organic chemistry and biochemistry to explain the chirality of biomolecules. For didactic purposes, when the aim is to teach the biochemical foundations of proteins, a manipulative system seems convenient, starting from the chemical structure of amino acids. It has the advantage that the bonds between amino acids can be conveniently rotated, following the pattern marked by the 3D models. First, the 51 amino acids were modeled, and then they were linked according to the sequence of this protein. Next, the three disulfide bonds that characterize the stability of insulin were established, and then the alpha-helix structure was formed. In order to reach the tertiary 3D conformation of this protein, different interactive models available on the Internet were visualized. In conclusion, the proposed methodology seems very suitable for biology and biochemistry students because they can learn the fundamentals of protein modeling by means of a manipulative procedure as a basis for understanding the functionality of proteins. This methodology would be particularly useful for a biology or biochemistry laboratory practice, either at the pre-graduate or university level.

Keywords: protein structure, 3D model, insulin, biomolecule

Procedia PDF Downloads 49
5694 Narratives in Science as Covert Prestige Indicators

Authors: Zinaida Shelkovnikova

Abstract:

The language of science is changing and meets the demands of society. We argue that in the varied modern world there are important reasons for the integration of narratives into scientific discourse. Nowadays scientists are faced with extremely rapid scientific development and progress, and the modern scientific community lives under conditions of tough competition. The integration of narratives into scientific discourse is thus a good way to communicate scientific experience to different audiences and to express the covert prestige of the discourse. Narratives also form the identity of the persuasive narrator. Using the narrative approach to scientific discourse analysis, we reveal the sociocultural diversity of scientists. If you want to attract an audience's attention to your scientific research, narratives should be integrated into your scientific discourse; those who understand this consistent pattern are considered leading scientists. Taking into account that it is prestigious to be renowned and celebrated in science, writing narratives in science carries covert prestige. We define a science narrative as an intentional, consistent, coherent event discourse or discourse fragment which contains the author's creativity, in some cases intrigue, and gives mostly qualitative information (compared with quantitative data) in order to provide maximum understanding of the research. Science narratives also allow effective argumentation and consequently construct the identity of the persuasive narrator. However, the skill of creating appropriate scientific discourse reflects the level of prestige. In order to teach postgraduate students to be successful in English scientific writing and to be prestigious in the scientific society, we have defined the science narrative and outlined its main features and characteristics. Narratives contribute to the audience's involvement with the narrator and his/her narration. In general, the way in which a narrative is performed may result in (limited or greater) contact with the audience. To achieve this aim, authors use emotional fictional elements; descriptive elements such as adjectives, adverbs, and comparisons; and the author's evaluative elements. Thus, the features of science narrativity are the following: descriptive tools; the author's evaluation; qualitative information exceeding quantitative data; facts taking on event status; understandability; accessibility; creativity; logic; intrigue; esthetic nature; and fiction. To conclude, narratives function as covert prestige indicators of scientific discourse and shape the identity of the persuasive scientist.

Keywords: covert prestige, narrativity, scientific discourse, scientific narrative

Procedia PDF Downloads 397
5693 Generation of Automated Alarms for Plantwide Process Monitoring

Authors: Hyun-Woo Cho

Abstract:

Early detection of incipient abnormal operations is quite necessary for plant-wide process management in order to improve product quality and process safety, and generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and utilized in this area, such as expert systems, mathematical model-based approaches, multivariate statistical approaches, and so on. This work presents a nonlinear empirical monitoring methodology based on the real-time analysis of massive process data. Unfortunately, such big data include measurement noise and unwanted variations unrelated to true process behavior. Thus, the elimination of such unnecessary patterns from the data is executed in the data processing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that the detection speed and performance were improved significantly irrespective of the size and the location of abnormal events.
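
The exact nonlinear empirical method is not detailed in the abstract; as a simplified, hedged stand-in, the sketch below shows the general data-driven monitoring pattern: fit a reduced model on normal operating data, then flag new samples whose residual (SPE/Q) statistic exceeds a limit. The data, model choice (plain PCA), and threshold rule are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Placeholder "normal operation" training data: 20 correlated process variables.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 20))
X_train = latent @ mixing + 0.1 * rng.normal(size=(1000, 20))   # measurement noise

pca = PCA(n_components=3).fit(X_train)

def spe(X):
    """Squared prediction error (Q statistic) of samples against the PCA model."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

# Empirical 99th-percentile control limit estimated from the training data.
limit = np.percentile(spe(X_train), 99)

# New sample with an injected fault on one sensor.
x_new = latent[:1] @ mixing + 0.1 * rng.normal(size=(1, 20))
x_new[0, 5] += 3.0
alarm = spe(x_new)[0] > limit
print(f"SPE = {spe(x_new)[0]:.2f}, limit = {limit:.2f}, alarm = {alarm}")
```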

Keywords: detection, monitoring, process data, noise

Procedia PDF Downloads 245
5692 Development of National Scale Hydropower Resource Assessment Scheme Using SWAT and Geospatial Techniques

Authors: Rowane May A. Fesalbon, Greyland C. Agno, Jodel L. Cuasay, Dindo A. Malonzo, Ma. Rosario Concepcion O. Ang

Abstract:

The Department of Energy of the Republic of the Philippines estimates that the country's energy reserves for 2015 are dwindling, as observed in the rotating power outages in several localities. To aid in the energy crisis, a national hydropower resource assessment scheme is developed. Hydropower is a resource that is derived from flowing water and a difference in elevation. It is a renewable energy resource that is deemed abundant in the Philippines, an archipelagic country rich in bodies of water and water resources. The objective of this study is to develop a methodology for a national hydropower resource assessment using hydrologic modeling and geospatial techniques in order to generate resource maps for future reference and use by the government and other stakeholders. The methodology developed for this purpose is focused on two models: the implementation of the Soil and Water Assessment Tool (SWAT) for the river discharge, and the use of geospatial techniques to analyze the topography, obtain the head, and generate the theoretical hydropower potential sites. The methodology is highly coupled with Geographic Information Systems to maximize the use of geodatabases and the spatial significance of the determined sites. The hydrologic model used in this workflow is SWAT integrated into the GIS software ArcGIS. The head is determined by a developed algorithm that utilizes a Synthetic Aperture Radar (SAR)-derived digital elevation model (DEM) with a resolution of 10 meters. The initial results of the developed workflow indicate hydropower potential in the river reaches ranging from pico (less than 5 kW) to mini (1-3 MW) theoretical potential.
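
The theoretical potential at each candidate site follows the standard hydropower relation P = ρ g Q H (optionally scaled by an efficiency); a minimal sketch follows, where the discharge, head, and class boundaries not stated in the abstract are placeholders.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydropower_kw(discharge_m3s: float, head_m: float, efficiency: float = 1.0) -> float:
    """Theoretical (or, with efficiency < 1, technical) potential in kW."""
    return RHO * G * discharge_m3s * head_m * efficiency / 1000.0

def classify(p_kw: float) -> str:
    # Bands taken from the abstract: pico < 5 kW, mini 1-3 MW; the rest left generic.
    if p_kw < 5:
        return "pico"
    if p_kw < 1000:
        return "between pico and mini"
    if p_kw <= 3000:
        return "mini"
    return "above mini"

# Example: a simulated reach discharge of 2.5 m^3/s and a 40 m head from the DEM.
p = hydropower_kw(2.5, 40.0)
print(f"{p:.0f} kW -> {classify(p)}")
```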

Keywords: ArcSWAT, renewable energy, hydrologic model, hydropower, GIS

Procedia PDF Downloads 310
5691 Investment Projects Selection Problem under Hesitant Fuzzy Environment

Authors: Irina Khutsishvili

Abstract:

In the present research, a decision support methodology for the multi-attribute group decision-making (MAGDM) problem is developed, namely for the selection of investment projects. The objective of the investment project selection problem is to choose the best project among the set of projects seeking investment, or to rank all projects in descending order. The project selection is made considering a set of weighted attributes. To evaluate the attributes in our approach, expert assessments are used. In the proposed methodology, lingual expressions (linguistic terms) given by all experts are used as initial attribute evaluations, since they are the most natural and convenient representation of experts' evaluations. The lingual evaluations are then converted into trapezoidal fuzzy numbers, and the aggregate trapezoidal hesitant fuzzy decision matrix is built. The case is considered when information on the attribute weights is completely unknown. The attribute weights are identified based on the De Luca and Termini information entropy concept, determined in the context of hesitant fuzzy sets. The decisions are made using the extended Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) method under a hesitant fuzzy environment. Hence, the methodology is based on a trapezoidal valued hesitant fuzzy TOPSIS decision-making model with entropy weights. The ranking of alternatives is performed by the proximity of their distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS). For this purpose, the weighted hesitant Hamming distance is used. An example of investment decision-making is shown that clearly explains the procedure of the proposed methodology.
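
To illustrate the ranking step only, here is a minimal crisp TOPSIS sketch; the hesitant trapezoidal fuzzy machinery, entropy weighting, and Hamming distance of the actual methodology are not reproduced, and the decision matrix and weights are invented.

```python
import numpy as np

# Rows = projects, columns = attributes (all benefit-type here); values invented.
X = np.array([[7.0, 5.0, 9.0],
              [6.0, 8.0, 7.0],
              [8.0, 6.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])                 # attribute weights (sum to 1)

R = X / np.linalg.norm(X, axis=0)             # vector normalization
V = R * w                                     # weighted normalized matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions

d_pos = np.linalg.norm(V - v_pos, axis=1)     # distance to positive ideal
d_neg = np.linalg.norm(V - v_neg, axis=1)     # distance to negative ideal
closeness = d_neg / (d_pos + d_neg)           # relative closeness coefficient

ranking = np.argsort(-closeness)              # best project first
print("closeness:", np.round(closeness, 3), "ranking:", ranking)
```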

Keywords: multi-attribute group decision-making, investment project selection, hesitant fuzzy sets, TOPSIS, entropy weights

Procedia PDF Downloads 115
5690 Barriers and Enablers to Public Innovation in the Central Region of Colombia: A Characterization from Measurement through the Item Response Methodology and Comparative Analysis

Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia

Abstract:

The purpose of this work is to present the identification and characterization of the barriers and enablers to public innovation in the Central Region of Colombia, based on a mixed methodology used in research carried out in 2020 by the Laboratory of Innovation, Creativity and New Technologies of the National University of Colombia in alliance with the National Planning Department. Based on this research, the index of barriers to regional and departmental public innovation was built, which reflects the level of difficulty territorial entities face in overcoming the barriers present in three dimensions: organizational structure of the entity, generation of public value, and governance processes. The index was built using the item response methodology and multiple correspondence analysis, from the application of an institutional information form for public entities and a perception form for public servants. The investigation had the participation of 36 entities and 1038 public servants from the departments of Huila, Meta, Boyacá, Cundinamarca, Tolima, and the Capital District. In this exercise, it was identified that the departmental indices range between 13 and 44 and that the regional index was 30 out of 100. From the analysis of the information, it was possible to establish that the main barriers are the lack of specialized agencies for public innovation exercises, the lack of qualified personnel and work methodologies for public innovation, inadequate information management, the lack of feedback on learning between governmental and non-governmental entities, the inability of the initiatives to generate binding participation mechanisms, and the lack of qualification of citizens to participate in these processes.

Keywords: item response, public innovation, quantitative analysis, comparative analysis

Procedia PDF Downloads 122
5689 Advanced Mechatronic Design of Robot Manipulator Using Hardware-In-The-Loop Simulation

Authors: Reza Karami, Ali Akbar Ebrahimi

Abstract:

This paper discusses concurrent engineering of robot manipulators, based on the Holistic Concurrent Design (HCD) methodology and by using a hardware-in-the-loop simulation platform. The methodology allows for considering numerous design variables with different natures concurrently. It redefines the ultimate goal of design based on the notion of satisfaction, resulting in the simplification of the multi-objective constrained optimization process. It also formalizes the effect of designer’s subjective attitude in the process. To enhance modeling efficiency for both computation and accuracy, a hardware-in-the-loop simulation platform is used, which involves physical joint modules and the control unit in addition to the software modules. This platform is implemented in the HCD design architecture to reliably evaluate the design attributes and performance super criterion during the design process. The resulting overall architecture is applied to redesigning kinematic, dynamic and control parameters of an industrial robot manipulator.

Keywords: concurrent engineering, hardware-in-the-loop simulation, robot manipulator, multidisciplinary systems, mechatronics

Procedia PDF Downloads 450
5688 X-Ray Energy Release in the Solar Eruptive Flare from 6th of September 2012

Authors: Mirabbos Mirkamalov, Zavkiddin Mirtoshev

Abstract:

The M 1.6 class flare occurred on 6th of September 2012. Our observations correspond to the active region NOAA 11560 with the heliographic coordinates N04W71. The event took place between 04:00 UT and 04:45 UT and was close to the solar limb in the western region. The flare temperature correlates with the flux peak: it increases for a short period (between 04:08 UT and 04:12 UT), rises impulsively, attains a maximum value of about 17 MK at 04:12 UT, and gradually decreases after the peak value. Around the peak we observe significant emission from X-ray sources. Flux profiles of the X-ray emission exhibit a progressively faster rise and decline as higher energy channels are considered.

Keywords: magnetic reconnection, solar atmosphere, solar flare, X-ray emission

Procedia PDF Downloads 319
5687 Seismic Performance of Isolated Bridge Configurations with Soil Structure Interaction

Authors: Davide Forcellini

Abstract:

The most recent developments in earthquake engineering are based on the concept of design by prescribed performance rather than the more traditional prescriptive approaches. The paper aims to assess the effects of isolation devices and soil-structure interaction on a benchmark bridge adopting a Performance-Based Earthquake Engineering methodology. Several isolated configurations of abutment and pier connections are compared, considering the most representative isolation devices. Isolation system suitability depends on many factors, mainly connected with ground effects. In this regard, the second purpose of this paper is to assess the effects of soil-structure interaction (SSI) on the studied bridge configurations. The contributions of the isolation technique and of soil-structure interaction are assessed by evaluating the response at several Peak Ground Acceleration (PGA) levels in terms of repair cost and repair time quantities.

Keywords: base isolation, bridge, earthquake engineering, non-linearity, PBEE methodology, seismic assessment, soil structure interaction

Procedia PDF Downloads 425
5686 An Artificial Neural Network Model Based Study of Seismic Wave

Authors: Hemant Kumar, Nilendu Das

Abstract:

A study based on an ANN structure provides information to predict the magnitude of a future event from the realization of past events. ANN, IMD (Indian Meteorological Department) data and remote sensing were used to obtain a number of parameters for calculating the magnitude that may occur in the future. A threshold was selected specifically above the high-frequency content recorded in the area during the selected seismic activity. In the field of human and local biodiversity, it remains to obtain the right parameter compared to the frequency of impact. During the study, the assumption is that predicting seismic activity is a difficult process, not because of the parameters involved here, which can be analyzed and found in research activity.
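
As a very loose illustration of the kind of ANN model described (the abstract does not specify the architecture, features, or data), the sketch below fits a small multilayer perceptron regressor on invented seismic-parameter data; every feature, value, and the target relationship is a placeholder.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented features, e.g. b-value, event depth (km), epicentral distance (km).
X = rng.uniform([0.5, 5.0, 10.0], [1.5, 60.0, 300.0], size=(500, 3))
# Invented noisy target magnitude, only to make the example runnable.
y = 4.0 + 1.5 * (1.5 - X[:, 0]) + 0.005 * X[:, 1] + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {ann.score(X_te, y_te):.2f}")
```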

Keywords: ANN, Bayesian class, earthquakes, IMD

Procedia PDF Downloads 122
5685 Degumming of Eri Silk Fabric with Ionic Liquid

Authors: Shweta K. Vyas, Rakesh Musale, Sanjeev R. Shukla

Abstract:

Eri silk is a non-mulberry silk which is obtained without killing the silkworms, and hence it is also known as Ahimsa silk. In the present study, the results of degumming eri silk with alkaline peroxide have been compared with those obtained by using the ionic liquid (IL) 1-butyl-3-methylimidazolium chloride [BMIM]Cl. Experiments were designed to find out the optimum processing parameters for degumming of eri silk by response surface methodology. The statistical software Design-Expert 6.0 was used for regression analysis and graphical analysis of the responses obtained by running the set of designed experiments. Analysis of variance (ANOVA) was used to estimate the statistical parameters. A polynomial equation of quadratic order was employed to fit the experimental data. The quality of fit and the model terms were evaluated by the F-test. Three-dimensional surface plots were prepared to study the effect of the variables on the different responses. The optimum conditions for the IL treatment were selected from the predicted combinations, and the experiments were repeated under these conditions to determine the reproducibility.
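
As a generic illustration of this style of response surface analysis (a sketch, not the study's actual model, factors, or data), a second-order model in three coded factors can be fitted and probed as follows; all factor settings and response values are invented.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Invented coded factor settings (-1, 0, +1) and an invented response,
# purely to show the mechanics of fitting a quadratic response surface.
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1],  [1, -1, 1],  [-1, 1, 1],  [1, 1, 1],
              [0, 0, 0],    [0, 0, 0],   [0, 0, 0]], dtype=float)
y = np.array([8.1, 9.4, 9.0, 10.2, 9.8, 11.5, 10.9, 12.4, 11.0, 11.2, 10.9])

# Full quadratic model: linear, interaction and squared terms (plus intercept).
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, y)
print("R^2 =", round(rsm.score(X, y), 3))

# Grid search over the coded design space for the predicted optimum.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best = grid[np.argmax(rsm.predict(grid))]
print("predicted optimum (coded units):", best)
```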

Keywords: silk degumming, ionic liquid, response surface methodology, ANOVA

Procedia PDF Downloads 585
5684 Model-Based Process Development for the Comparison of a Radial Riveting and Roller Burnishing Process in Mechanical Joining Technology

Authors: Tobias Beyer, Christoph Friedrich

Abstract:

Modern simulation methodology using finite element models is nowadays a recognized tool for product design and optimization. Likewise, manufacturing process design is increasingly becoming the focus of simulation methodology in order to enable sustainable results based on fewer real-life tests. In this article, two process simulations, radial riveting and roller burnishing, used for the mechanical joining of components are explained. In the first step, the required boundary conditions are developed and implemented in the respective simulation models. This is followed by process space validation. With the help of the validated models, the interdependencies of the input parameters are investigated and evaluated by means of sensitivity analyses. Limit case investigations are carried out and evaluated with the aid of the process simulations. Likewise, a comparison of the two joining methods with each other becomes possible.

Keywords: FEM, model-based process development, process simulation, radial riveting, roller burnishing, sensitivity analysis

Procedia PDF Downloads 103
5683 The Cardiac Diagnostic Prediction Applied to a Designed Holter

Authors: Leonardo Juan Ramírez López, Javier Oswaldo Rodriguez Velasquez

Abstract:

We have designed a Holter that measures the heart's activity for over 24 hours, implemented a prediction methodology, and generated alarms as well as indicators for patients and treating physicians. Various diagnostic advances have been developed in clinical cardiology thanks to Holter implementation; however, their interpretation has largely been conditioned on clinical analysis and measurements adjusted to diverse population characteristics, thus turning it into a subjective examination. This, in turn, requires vast population studies for validation, which have not achieved the ultimate goal: mortality prediction. Given this context, our Insight Research Group developed a mathematical methodology that assesses cardiac dynamics through entropy and probability, creating a numerical and geometrical attractor which allows quantifying the normality of chronic and acute disease as well as the evolution between such states, and our Tigum Research Group developed a Holter device with 12 channels and advanced computer software. This has been shown in different contexts with 100% sensitivity and specificity results.

Keywords: attractor, cardiac, entropy, Holter, mathematical, prediction

Procedia PDF Downloads 165
5682 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors

Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin

Abstract:

Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next generation High Energy Physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high event rate experiments. We have developed a novel method for the investigation of the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure and that the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test conducted under high gain conditions to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs. We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments.

Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight

Procedia PDF Downloads 170
5681 Multidimensional Inequality and Deprivation Among Tribal Communities of Andhra Pradesh, India

Authors: Sanjay Sinha, Mohd Umair Khan

Abstract:

The level of income inequality in India has been worrisome, as the World Inequality Report termed it a “poor and unequal country, with an affluent elite”. As important as income is to understanding inequality and deprivation, it is just one dimension. The historical roots and current realities of inequality and deprivation in India lie in many of the non-income dimensions, such as housing, nutrition, education, agency, and sense of inclusion, which are often ignored, especially in solution-oriented research. The level of inequality and deprivation among tribal communities is one such case. There is a corpus of literature establishing that the tribal communities in India are disadvantaged on various grounds. Given their rural geography, issues of access to and quality of basic facilities such as education and healthcare often remain unaddressed. COVID-19 has further exacerbated this challenge, and climate change will make it even more worrying. With this background, a succinct measurement tool at the village level is necessary to design short- to medium-term actions with reference to risk mitigation for tribal communities. This research paper examines the level of inequality and deprivation among the tribal communities in the rural areas of the Andhra Pradesh state of India using a Multidimensional Inequality and Deprivation Index based on the Alkire-Foster methodology. The methodology is theoretically grounded in the capability approach propounded by Amartya Sen, emphasizing the achievement of the “beings and doings” (functionings) an individual has reason to value. In the index, the authors have five domains, including Livelihood, Food Security, Education, Health and Housing, and these domains are divided into sixteen indicators. This assessment is followed by domain-wise short-term and long-term solutions.
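
For readers unfamiliar with the Alkire-Foster counting approach, a minimal sketch of the adjusted headcount ratio M0 follows; the indicators, weights, and cutoff are invented placeholders, not the study's actual specification of domains and indicators.

```python
import numpy as np

# Deprivation matrix: rows = households, columns = indicators (1 = deprived).
g0 = np.array([[1, 0, 1, 1, 0],
               [0, 0, 1, 0, 0],
               [1, 1, 1, 1, 1],
               [0, 1, 0, 0, 1]])
weights = np.array([0.2, 0.2, 0.2, 0.2, 0.2])   # indicator weights, sum to 1
k = 0.4                                          # poverty cutoff on weighted score

scores = g0 @ weights                            # weighted deprivation score c_i
poor = scores >= k                               # identification step
censored = np.where(poor, scores, 0.0)           # censor scores of the non-poor

H = poor.mean()                                  # incidence (headcount ratio)
A = censored[poor].mean() if poor.any() else 0.0 # intensity among the poor
M0 = H * A                                       # adjusted headcount ratio
print(f"H = {H:.2f}, A = {A:.2f}, M0 = {M0:.2f}")
```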

Keywords: Andhra Pradesh, Alkire-Foster methodology, deprivation, inequality, multidimensionality, poverty, tribal

Procedia PDF Downloads 152
5680 Effective Energy Saving of a Large Building through Multiple Approaches

Authors: Choo Hong Ang

Abstract:

The most popular approach to saving energy in large commercial buildings in Malaysia is to replace the existing chiller plant of high kW/ton with one of lower kW/ton. This approach, however, entails a large capital outlay with a long payback period of up to 7 years. This paper shows that by using multiple approaches, other than replacing the existing chiller plant, an energy saving of up to 20 % is possible. The main methodology adopted was to identify and then plug all heat ingress paths into the building, including putting up glass structures to prevent mixing of internal air-conditioned air with the ambient environment and replacing air curtains with glass doors. This methodology could save up to 10 % of the energy bill. Another methodology was to change the fixed-speed motors of air handling units (AHU) to variable speed drives (VSD) and to change escalators to the motion-sensor type. Other methodologies included reducing heat load by blocking air supply to non-occupied parcels, rescheduling chiller plant operation, changing fluorescent lights to LED lights, and converting from tariff B to C1. A case example of Komtar, the tallest building in Penang, is given here. The total energy bill for Komtar was USD 2,303,341 in 2016 but was reduced to USD 1,842,927.39 in 2018, a significant saving of USD 460,413.86 or 20 %. In terms of kWh, there was a reduction from 18,302,204.00 kWh in 2016 to 14,877,105.00 kWh in 2018, a reduction of 3,425,099.00 kWh or 18.71 %. The methodologies used were relatively low cost, and the payback period was merely 24 months. With this achievement, the Komtar building was awarded champion of the Malaysian National Energy Award 2019 and second runner-up of the ASEAN Energy Award. This experience shows that a strong commitment to energy saving is the key to effective energy saving.

Keywords: chiller plant, energy saving measures, heat ingress, large building

Procedia PDF Downloads 99
5679 The Phenomena of False Cognates and Deceptive Cognates: Issues to Foreign Language Learning and Teaching Methodology Based on Set Theory

Authors: Marilei Amadeu Sabino

Abstract:

The aim of this study is to establish differences between the terms ‘false cognates’, ‘false friends’ and ‘deceptive cognates’, usually considered to be synonyms. It will be shown that they are not synonyms, since they do not designate the same linguistic process or phenomenon. Despite their differences in meaning, many pairs of formally similar words in two (or more) different languages are true cognates, although they are usually known as ‘false’ cognates – such as, for instance, the English and Italian lexical items ‘assist x assistere’; ‘attend x attendere’; ‘argument x argomento’; ‘apology x apologia’; ‘camera x camera’; ‘cucumber x cocomero’; ‘fabric x fabbrica’; ‘factory x fattoria’; ‘firm x firma’; ‘journal x giornale’; ‘library x libreria’; ‘magazine x magazzino’; ‘parent x parente’; ‘preservative x preservativo’; ‘pretend x pretendere’; ‘vacancy x vacanza’, to name but a few examples. Thus, one of the theoretical objectives of this paper is firstly to elaborate definitions establishing a distinction between the words that are definitely ‘false cognates’ (derived from different etyma) and those that are just ‘deceptive cognates’ (derived from the same etymon). Secondly, based on Set Theory and on the concepts of equal sets, subsets, intersection of sets and disjoint sets, this study is intended to elaborate some theoretical and practical questions that will be useful in identifying more precisely the similarities and differences between cognate words of different languages, and according to the graphic interpretation of sets it will be possible to classify them and provide insight into the processes of semantic change. Therefore, these issues might be helpful not only to the learning of second and foreign languages, but they could also give insights into foreign and second language teaching methodology. Acknowledgements: FAPESP – São Paulo State Research Support Foundation – for the financial support offered (proc. n° 2017/02064-7).

Keywords: deceptive cognates, false cognates, foreign language learning, teaching methodology

Procedia PDF Downloads 336
5678 Mourning Motivations for Celebrities in Instagram: A Case Study of Mohammadreza Shajarian's Death

Authors: Zahra Afshordi

Abstract:

Instagram, as an everyday-life social network, hosts everything from the ultrasound image of an unborn fetus to pictures of newly placed gravestones and funerals. It is a platform that allows its users to create a second identity, independently from and at the same time in relation to their real-space identity. The motives behind this identification are what this article is about. This article studies the motivations of Instagram users mourning for celebrities, with a focus on the death of MohammadReza Shajarian. Shajarian's death was widely reflected among Instagram's Persian-speaking users. The purpose of this qualitative survey is to comprehend and study users' motivations in posting mourning and memorializing content. The methodology of the essay is a hybrid one, consisting of content analysis and open-ended interviews. The results highlight that users' motives go beyond simple sympathy and include political protest, gaining cultural capital, reaching social status, and escaping from solitude.

Keywords: case study, celebrity, identity, Instagram, mourning, qualitative survey

Procedia PDF Downloads 151
5677 Response of a Bridge Crane during an Earthquake

Authors: F. Fekak, A. Gravouil, M. Brun, B. Depale

Abstract:

During an earthquake, a bridge crane may be subjected to multiple impacts between crane wheels and rail. In order to model such phenomena, a time-history dynamic analysis with a multi-scale approach is performed. The high frequency aspect of the impacts between wheels and rails is taken into account by a Lagrange explicit event-capturing algorithm based on a velocity-impulse formulation to resolve contacts and impacts. An implicit temporal scheme is used for the rest of the structure. The numerical coupling between the implicit and the explicit schemes is achieved with a heterogeneous asynchronous time-integrator.

Keywords: bridge crane, earthquake, dynamic analysis, explicit, implicit, impact

Procedia PDF Downloads 299
5676 Selection of Pichia kudriavzevii Strain for the Production of Single-Cell Protein from Cassava Processing Waste

Authors: Phakamas Rachamontree, Theerawut Phusantisampan, Natthakorn Woravutthikul, Peerapong Pornwongthong, Malinee Sriariyanun

Abstract:

A total of 115 yeast strains isolated from local cassava processing wastes were measured for crude protein content. Among these strains, the strain MSY-2 possessed the highest protein concentration (>3.5 mg protein/mL). Using molecular identification tools, it was identified as a strain of Pichia kudriavzevii based on the similarity of the D1/D2 domain of the 26S rDNA region. In this study, to optimize protein production by the MSY-2 strain, Response Surface Methodology (RSM) was applied. The tested parameters were the carbon content, nitrogen content, and incubation time. The value of the regression coefficient (R² = 0.7194) explained by the model is high enough to support the significance of the model. Under the optimal condition, protein was produced at up to 3.77 g per L of culture, and the MSY-2 strain contained 66.8 g of protein per 100 g of cell dry weight. These results revealed the plausibility of applying this novel yeast strain in single-cell protein production.

Keywords: single cell protein, response surface methodology, yeast, cassava processing waste

Procedia PDF Downloads 397
5675 Sponsorship Strategy, Its Visibility, and Return: A Case Study on Brazilian Olympic Games

Authors: Elizabeth F. Rodrigues, Julia da R. Mattos, Naira Q. Leitão, Roberta T. da Cunha

Abstract:

The business strategies of many companies have two factors in common: the search for a competitive edge and its long-term maintenance. What differentiates companies' performance is their ability to set the right strategy, which depends on their capacity to analyze and apply all sorts of management support tools. In this context, the sponsorship of events stands out as an important way to increase brand awareness, especially when it is a worldwide event, such as the Rio 2016 Olympic and Paralympic Games. This paper presents the case of a car maker company, which chose to invest in sponsorship as a way to reach its goals and grow in the Brazilian market.

Keywords: strategy, sponsorship, events, management

Procedia PDF Downloads 491
5674 Vibration Propagation in Structures Through Structural Intensity Analysis

Authors: Takhchi Jamal, Ouisse Morvan, Sadoulet-Reboul Emeline, Bouhaddi Noureddine, Gagliardini Laurent, Bornet Frederic, Lakrad Faouzi

Abstract:

Structural intensity is a technique that can be used to indicate both the magnitude and direction of power flow through a structure, from the excitation source to the dissipation sink. However, current analysis is limited to the low-frequency range. At medium and high frequencies, a rotational component appears in the field, masking the energy flow and making its understanding difficult or impossible. The objective of this work is to implement a methodology to filter out the rotational components of the structural intensity field in order to fully understand the energy flow in complex structures. The approach is based on the Helmholtz decomposition, which allows the structural intensity field to be decomposed into rotational, irrotational, and harmonic components. Only the irrotational component is needed to describe the net power flow from a source to a dissipative zone in the structure. The methodology has been applied to academic structures, and it allows a good analysis of the energy transfer paths.
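
For reference, the Helmholtz decomposition referred to above splits the structural intensity field into an irrotational part derived from a scalar potential, a rotational part derived from a vector potential, and a harmonic remainder (the notation below is generic, not the paper's own):

```latex
\mathbf{I}(\mathbf{x})
  = \underbrace{\nabla \phi}_{\text{irrotational (net power flow)}}
  + \underbrace{\nabla \times \mathbf{A}}_{\text{rotational (circulating power)}}
  + \underbrace{\mathbf{h}}_{\text{harmonic}},
\qquad
\nabla \times (\nabla \phi) = \mathbf{0},
\quad
\nabla \cdot (\nabla \times \mathbf{A}) = 0 .
```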

Keywords: structural intensity, power flow, Helmholtz decomposition, irrotational intensity

Procedia PDF Downloads 175
5673 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires the development of a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of load scheduling and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate the forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
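
As a hedged sketch of the adjustment step (the actual database schema and estimation rule are not given in the abstract), hourly curtailed load can be added back to metered demand before model fitting roughly as follows; the file and column names are assumptions.

```python
import pandas as pd

# Assumed inputs:
#   demand.csv       -> timestamp, metered_mw      (hourly metered demand)
#   loadshedding.csv -> timestamp, curtailed_mw    (estimated load shed per hour)
demand = pd.read_csv("demand.csv", parse_dates=["timestamp"])
shed = pd.read_csv("loadshedding.csv", parse_dates=["timestamp"])

adj = demand.merge(shed, on="timestamp", how="left").fillna({"curtailed_mw": 0.0})
# "True" demand = what was metered plus what loadshedding prevented from being served.
adj["adjusted_mw"] = adj["metered_mw"] + adj["curtailed_mw"]

# Two training sets: one on raw metered demand, one on the adjusted series,
# so the forecast comparison described in the paper can be reproduced in spirit.
adj[["timestamp", "metered_mw"]].to_csv("train_raw.csv", index=False)
adj[["timestamp", "adjusted_mw"]].to_csv("train_adjusted.csv", index=False)
```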

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 56
5672 An Investigation on Material Removal Rate of EDM Process: A Response Surface Methodology Approach

Authors: Azhar Equbal, Anoop Kumar Sood, M. Asif Equbal, M. Israr Equbal

Abstract:

In the present work, a response surface methodology (RSM) based central composite design (CCD) is used for analyzing the electrical discharge machining (EDM) process. For experimentation, mild steel is selected as the workpiece and copper is used as the electrode. Three machining parameters, namely current (I), spark on-time (Ton) and spark off-time (Toff), are selected as the input variables. The output or response chosen is the material removal rate (MRR), which is to be maximized. To reduce the number of runs, a face-centered central composite design (FCCCD) was used. ANOVA was used to determine the significance of the parameters and their interactions. The suitability of the model is tested using an Anderson-Darling (AD) plot. The results show that all the parameters considered, i.e., current, pulse-on and pulse-off time, have a dominant effect on the MRR. Finally, the optimized parameter setting for maximizing MRR is found through main effect plot analysis.
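
A face-centered CCD places the axial points on the cube faces (α = 1); the following is a minimal sketch of generating the coded design for three factors such as I, Ton and Toff. The number of center runs is illustrative, not the experiment's actual plan.

```python
import itertools
import numpy as np

def face_centered_ccd(n_factors: int, n_center: int = 6) -> np.ndarray:
    """Coded face-centered CCD: full factorial, axial (alpha = 1), and center runs."""
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = []
    for j in range(n_factors):
        for a in (-1.0, 1.0):            # alpha = 1 puts axial points on the faces
            pt = np.zeros(n_factors)
            pt[j] = a
            axial.append(pt)
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, np.array(axial), center])

design = face_centered_ccd(3)            # three coded factors, e.g. I, Ton, Toff
print(design.shape)                      # 8 factorial + 6 axial + 6 center = 20 runs
print(design)
```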

Keywords: EDM, electrode, MRR, RSM, ANOVA

Procedia PDF Downloads 300