Search results for: real time digital simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22675

13465 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques

Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian

Abstract:

Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses, and material damage. Traditional studies of road traffic accidents in urban zones demand a very high investment of time and money, and their results are quickly outdated. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article, we identify the zones, roads, and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, applying data mining techniques with the help of Weka. The selected algorithms were Expectation-Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
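
A minimal sketch of this two-step clustering idea, written as a Python/scikit-learn analogue of the Weka EM + k-means workflow described above; the accident coordinates and cluster counts are hypothetical, not the study's data.

```python
# Minimal sketch (assumption: scikit-learn stand-in for the Weka EM + k-means
# workflow described above; the coordinates below are hypothetical).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Hypothetical accident reports: (latitude, longitude) pairs around CDMX.
rng = np.random.default_rng(0)
reports = np.vstack([
    rng.normal(loc=[19.43, -99.13], scale=0.01, size=(200, 2)),  # city centre
    rng.normal(loc=[19.36, -99.18], scale=0.01, size=(150, 2)),  # southern corridor
])

# Step 1: choose the number of clusters with a Gaussian mixture fitted by EM,
# using BIC as a simple model-selection criterion.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(reports).bic(reports)
       for k in range(1, 8)}
k_best = min(bic, key=bic.get)

# Step 2: group the reports with k-means using the selected number of clusters.
labels = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(reports)
print(k_best, np.bincount(labels))
```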

Keywords: data mining, k-means, road traffic accidents, Waze, Weka

Procedia PDF Downloads 405
13464 On Elastic Anisotropy of Fused Filament Fabricated Acrylonitrile Butadiene Styrene Structures

Authors: Joseph Marae Djouda, Ashraf Kasmi, François Hild

Abstract:

Fused filament fabrication is one of the most widespread additive manufacturing techniques because of its low-cost implementation. Its initial development was based on part fabrication with thermoplastic materials. The influence of manufacturing parameters such as the filament orientation through the nozzle, the deposited layer thickness, or the deposition speed on the mechanical properties of the parts has been widely investigated experimentally. Remarkable variations of the anisotropy as a function of the filament path during the fabrication process have been recorded. However, there is a lack of constitutive models describing the mechanical properties. In this study, integrated digital image correlation (I-DIC) is used for the identification of mechanical constitutive parameters of two configurations of ABS samples: +/-45° and a so-called “oriented deposition,” in which the filament was deposited so as to follow the principal strain of the sample. An identification scheme based on reducing the gap between simulation and experiment, directly from images recorded on a single sample (a single-edge notched tension specimen), is developed. Macroscopic and mesoscopic analyses are conducted from images recorded on both sample surfaces during the tensile test. Elastic and elastoplastic models in isotropic and orthotropic frameworks have been established. It appears that, independently of the sample configuration (filament orientation during fabrication), the elastoplastic isotropic model gives a correct description of the behavior of the samples. It is worth noting that in this model the number of constitutive parameters is limited compared to that of the elastoplastic orthotropic model. This leads to the conclusion that the anisotropy of the architectured 3D-printed ABS parts can be neglected when establishing a macroscopic description of their behavior.
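
The core of the identification scheme is gap reduction: adjust the constitutive parameters until the simulated response matches the measured one. A minimal sketch of that idea is given below, with a 1D bilinear elastoplastic model and synthetic noisy data standing in for the full I-DIC/finite-element identification; the model form, parameter names, and values are assumptions for illustration only.

```python
# Minimal sketch of the "gap reduction" identification idea.
# (Assumption: a 1D bilinear elastoplastic law and synthetic data stand in
# for the I-DIC / finite-element identification described above.)
import numpy as np
from scipy.optimize import least_squares

def bilinear_stress(strain, E, sigma_y, H):
    """1D elastoplastic response: elastic up to sigma_y, then hardening slope H."""
    eps_y = sigma_y / E
    return np.where(strain <= eps_y, E * strain, sigma_y + H * (strain - eps_y))

# "Measured" data (synthetic, noisy) playing the role of the DIC measurements.
strain = np.linspace(0.0, 0.03, 60)
true_params = (2000.0, 30.0, 150.0)          # E, sigma_y, H (MPa), hypothetical
measured = bilinear_stress(strain, *true_params) + np.random.normal(0, 0.3, strain.size)

# Gap between simulation and experiment, minimized over the parameters.
residual = lambda p: bilinear_stress(strain, *p) - measured
fit = least_squares(residual, x0=[1000.0, 20.0, 50.0], bounds=(0, np.inf))
print("identified E, sigma_y, H:", fit.x)
```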

Keywords: elastic anisotropy, fused filament fabrication, Acrylonitrile butadiene styrene, I-DIC identification

Procedia PDF Downloads 124
13463 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma methodology, i.e., the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
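
For reference, the capability indices named above follow the standard definitions Cp = (USL - LSL) / (6σ) and Cpk = min(USL - μ, μ - LSL) / (3σ). The short sketch below applies them to hypothetical shrinkage measurements and specification limits (not the case study's values).

```python
# Process capability indices from shrinkage measurements.
# (Assumption: the shrinkage values and specification limits below are
# hypothetical; the Cp/Cpk formulas are the standard ones.)
import numpy as np

shrinkage = np.array([1.48, 1.51, 1.50, 1.52, 1.49, 1.50, 1.51, 1.49])  # %
LSL, USL = 1.40, 1.60   # customer specification limits (%)

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)
Cp = (USL - LSL) / (6 * sigma)
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")
```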

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 174
13462 Ultrasonic Techniques to Characterize and Monitor Water-in-Oil Emulsion

Authors: E. A. Alshaafi, A. Prakash

Abstract:

Oil-water emulsions are commonly encountered in various industrial operations and at different stages of crude oil production and processing. Emulsions are often difficult to track and treat and can cause a number of costly problems which need to be avoided. The characteristics of the emulsion phase can vary with crude composition and the types of impurities present in the oil. The objectives of this study are the development of ultrasonic techniques to track and characterize the emulsion phase generated during the production and cleaning of crude oil. The position of the emulsion layer is monitored with the help of ultrasonic probes suitably placed in the vessel. The sensitivity of the technique and its potential have been demonstrated based on extensive testing with different oil samples. The technique is also being developed to monitor emulsion phase characteristics such as stability, composition, and droplet size distribution. The ultrasonic parameters recorded are changes in acoustic velocity, signal attenuation, and its frequency spectrum. Emulsions have been prepared with a light mineral oil sample, and the effects of various factors including mixing speed, temperature, surfactant, and solid particle concentrations have been investigated. The applied frequency of the ultrasonic waves has been varied from 1 to 5 MHz to carry out a sensitivity analysis. The emulsion droplet structure is observed with optical microscopy, and stability is examined by tracking the changes in ultrasonic parameters with time. A model based on ultrasonic attenuation spectroscopy is being developed and tested to track changes in droplet size distribution with time.
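
One way the frequency spectrum of the attenuation is typically obtained is by comparing the amplitude spectra of a reference pulse and a pulse transmitted through the emulsion. The sketch below illustrates that computation on synthetic waveforms; the sampling rate, pulse shapes, and path length are assumptions, not the study's settings.

```python
# Frequency-dependent attenuation from a reference and an emulsion signal.
# (Assumption: synthetic waveforms stand in for the recorded ultrasonic pulses;
# attenuation is computed from the ratio of their amplitude spectra.)
import numpy as np

fs = 50e6                         # sampling rate, Hz (hypothetical)
t = np.arange(0, 20e-6, 1 / fs)   # 20 microseconds of signal
pulse = lambda a, f0: a * np.exp(-((t - 5e-6) / 1e-6) ** 2) * np.sin(2 * np.pi * f0 * t)

reference = pulse(1.0, 2e6)               # through the continuous phase only
through_emulsion = pulse(0.4, 2e6)        # attenuated by droplets (synthetic)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
A_ref = np.abs(np.fft.rfft(reference))
A_emu = np.abs(np.fft.rfft(through_emulsion))

path_length_cm = 2.0                       # hypothetical acoustic path
band = (freqs > 1e6) & (freqs < 5e6)       # the 1-5 MHz band used in the study
attenuation_db_per_cm = 20 * np.log10(A_ref[band] / A_emu[band]) / path_length_cm
print(attenuation_db_per_cm.mean())
```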

Keywords: ultrasonic techniques, emulsion, characterization, droplet size

Procedia PDF Downloads 170
13461 Teacher-Child Interactions within Learning Contexts in Prekindergarten

Authors: Angélique Laurent, Marie-Josée Letarte, Jean-Pascal Lemelin, Marie-France Morin

Abstract:

This study aims at exploring teacher-child interactions within learning contexts in public prekindergartens of the province of Québec (Canada). It is based on previous research showing that teacher-child interactions in preschools have direct and determining effects on the quality of early childhood education and could directly or indirectly influence child development. However, throughout a typical preschool day, children experience different learning contexts to promote their learning opportunities. Depending on these specific contexts, teacher-child interactions could vary, for example, between free play and shared book reading. Indeed, some studies have found that teacher-directed or child-directed contexts might lead to significant variations in teacher-child interactions. This study drew upon both the bioecological and the Teaching Through Interactions frameworks. It was conducted through a descriptive and correlational design. Fifteen teachers were recruited to participate in the study. At Time 1 in October, they completed a diary to report the learning contexts they proposed in their classroom during a typical week. At Time 2, seven months later (May), they were videotaped three times in the morning (with two weeks between each recording) during a typical morning class. The quality of teacher-child interactions was then coded with the Classroom Assessment Scoring System (CLASS) across the contexts identified. This tool measures three main domains of interactions (emotional support, classroom organization, and instructional support) and 10 dimensions scored on a scale from 1 (low quality) to 7 (high quality). Based on the teachers’ reports, five learning contexts were identified: 1) shared book reading, 2) free play, 3) morning meeting, 4) teacher-directed activity (such as craft), and 5) snack. Based on preliminary statistical analyses, little variation was observed within the learning contexts for each domain of the CLASS. However, the instructional support domain showed lower scores during specific learning contexts, namely free play and teacher-directed activity. Practical implications for how preschool teachers could foster specific domains of interactions depending on learning contexts to enhance children’s social and academic development will be discussed.

Keywords: teacher practices, teacher-child interactions, preschool education, learning contexts, child development

Procedia PDF Downloads 97
13460 An Historical Revision of Change and Configuration Management Process

Authors: Expedito Pinto De Paula Junior

Abstract:

Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems, and air traffic control are becoming increasingly complex and/or highly integrated, as defined in SAE ARP-4754A (the Society of Automotive Engineers standard on certification considerations for highly-integrated or complex aircraft systems). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for its better implementation in the hardware and software lifecycle. The sense of work organization, in all fields of development, is directly related to the order and interrelation of the parts, changes over time, and the recording of these changes. Generally, it is observed that engineers, administrators, and managers invest more time in technical activities than in the organization of work; moreover, these professionals tend to focus on solving complex problems with a purely technical bias. The CCM process is fundamental for the development, production, and operation of new products, especially in safety-critical systems. The objective of this paper is to open a discussion of the historical revision of CCM, based on standards from around the world, in order to understand and reflect on its importance across the years, the contribution of this process to technology evolution, the maturity of organizations in the system lifecycle, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.

Keywords: changes, configuration management, historical, revision

Procedia PDF Downloads 195
13459 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM

Authors: Clement Leroy, Guillaume Boitel

Abstract:

This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotordynamic pump, the fluid enters the impeller along the rotating axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objective of the present study is to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release (18) and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.
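
The overall performance quantities usually extracted from such simulations for comparison with experiment are the pump head and hydraulic efficiency. A short sketch of those standard relations is given below; the pressure rise, flow rate, torque, and speed are hypothetical values, not results from the study.

```python
# Overall pump performance from simulated pressure rise, flow rate, and torque.
# (Assumption: the numerical values below are hypothetical, for illustration.)
RHO = 998.0          # water density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def pump_performance(dp_pa, q_m3s, torque_nm, omega_rad_s):
    """Head H = dp / (rho g); hydraulic efficiency = dp * Q / (T * omega)."""
    head = dp_pa / (RHO * G)
    shaft_power = torque_nm * omega_rad_s
    hydraulic_power = dp_pa * q_m3s
    return head, hydraulic_power / shaft_power

# Example at the best efficiency point (BEP), hypothetical operating data:
head, eta = pump_performance(dp_pa=3.2e5, q_m3s=0.05, torque_nm=120.0,
                             omega_rad_s=2 * 3.14159 * 1450 / 60)
print(f"H = {head:.1f} m, eta = {eta:.2f}")
```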

Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery

Procedia PDF Downloads 201
13458 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties

Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer

Abstract:

Biodegradable and biocompatible block copolymers have risen as gold-standard materials in both medical and environmental applications. Moreover, if their architecture is controlled, higher-end applications can be foreseen. In the meantime, organocatalytic ROP has been promoted as a more rapid and cleaner route, compared to traditional organometallic catalysis, toward the efficient synthesis of block copolymer architectures. Therefore, herein we report a novel organocatalytic pathway with a guanidine catalyst (TBD) for the ring-opening polymerization of trimethylene carbonate initiated by poly(caprolactone) as a pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, without any residual products and with the desired block proportions, was achieved within 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Besides, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step of previously methacrylated block copolymers. Subsequently, motivated by the insufficient studies on the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry (DSC) and atomic force microscopy (AFM). Firstly, in the DSC measurements, the block copolymers with χABN values greater than 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, providing an initial indication of a phase-separated system. Meanwhile, the existence of the crystalline phase was supported by the presence of a melting temperature. As expected, the crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Crosslinking in the melted state, and hence the creation of a dense polymer network, did not disturb the crystallinity. However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melted state: AFM analysis of quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These films remained stable, with their morphology unchanged, even after four months at room temperature. However, as demonstrated by DSC analysis, once the temperature rises above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching shatters the semi-crystalline network, while access to thermodynamically phase-separated structures remains possible at temperatures below the poly(caprolactone) melting point. Precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the temperature at which the material recovers its original shape depends on the block copolymer arrangement, i.e., whether PTMC or PCL forms the end-block. It has therefore been possible to reach a block copolymer with a transition temperature around 40°C, thus opening potential real-life medical applications. In conclusion, the initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern-life applications.

Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory

Procedia PDF Downloads 125
13457 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Iran using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen’s maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. All the variables in this study show very strong, significant effects on GDP in the country in the long term. The long-run equilibrium in the VECM suggests that all energy consumption variables in this study have significant impacts on GDP in the long term. The consumption of petroleum products and the direct combustion of crude oil and natural gas decrease GDP, while coal and electricity use enhanced GDP between 1980 and 2010 in Iran. In the short term, only electricity use enhances GDP, in line with its long-run effects. All variables in this study, except the CO2 emissions, show significant effects on GDP in the country in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal has adverse impacts on GDP in the long term. In the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by the source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
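
The econometric pipeline named above (ADF stationarity test, Johansen cointegration, VECM) can be sketched with statsmodels as below. The data frame here is a placeholder random walk; in the actual study it would hold the 1980-2010 annual series for GDP, CO2 emissions, and the disaggregated energy-consumption variables, and the lag order and cointegration rank would be chosen from the test results rather than fixed as in this illustration.

```python
# Sketch of the ADF / Johansen / VECM pipeline with statsmodels on placeholder data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(31, 3)).cumsum(axis=0),
                    columns=["GDP", "CO2", "electricity"])   # placeholder series

# 1) ADF unit-root test on each level series.
for col in data:
    stat, pvalue = adfuller(data[col])[:2]
    print(col, round(pvalue, 3))

# 2) Johansen test for the cointegration rank.
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1, "5% critical values:", joh.cvt[:, 1])

# 3) VECM with the chosen rank: long-run (beta) and adjustment (alpha) parameters.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.beta)   # cointegrating vector(s)
print(res.alpha)  # error-correction coefficients
```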

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 590
13456 Understanding the Operational Challenges of Social Enterprises: A Review of Real-Life Issues in the Context of Developing Countries

Authors: Humayun Murshed

Abstract:

The concept of the ‘social enterprise’ has been gaining importance among researchers and policy makers around the globe. Such enterprises have been viewed as alternative means for addressing the concerns relating to the financing of corporate enterprises and social empowerment. This, in some cases, has led to relatively unrealistic and elevated expectations among policy makers and the members of society at large. There is a general perception among different social actors that these enterprises provide a universal, almost magical solution for employment generation, thereby eradicating poverty and ensuring equitable distribution of income and wealth. However, in many cases, these enterprises face a challenging journey in terms of the prevailing market structure, the socio-political environment, and the unrealistic perceptions and expectations of social participants. This paper focuses on reviewing case studies based on empirical research and information from secondary sources, and is geared toward examining the challenges that social enterprises face. The research draws its experience primarily from the developing countries’ perspective by adopting a case study methodology. A tentative action plan is suggested for further review by policy makers and researchers in this growing discipline. This research attempts to highlight the myths and realities surrounding the operation of social enterprises.

Keywords: social enterprises, social empowerment, economic development, financing need

Procedia PDF Downloads 178
13455 Two-Way Reminder Systems to Support Activities of Daily Living for Adults with Cognitive Impairments: A Scoping Review

Authors: Julia Brudzinski, Ashley Croswell, Jade Mardin, Hannah Shilling, Jennifer Berg-Carnegie

Abstract:

Adults with brain injuries and mental illnesses commonly experience cognitive impairments that interfere with their participation in activities of daily living (ADLs). Prior research states that electronic reminder systems can support adults with cognitive impairments; however, previous studies focus primarily on one-way reminder systems. Research on adults with chronic diseases reported that two-way reminder systems yield better health outcomes and disease self-management compared to one-way reminder systems. Literature was identified through systematically searching 7 databases and hand-searching relevant reference lists. Retrieved studies were independently screened and reviewed by at least two members of the research team. Data was extracted on study design, participant characteristics, intervention details, study objectives, outcome measures, and important results. 574 articles were screened and reviewed. Nine articles met all inclusion criteria and were included. The literature focused on three main areas: system feasibility (n=8), stakeholder satisfaction (n=6), and efficacy of the two-way reminder systems (n=6). Participants in eight of the studies had brain injuries, with participants in only one study having a mental illness (i.e., schizophrenia). Two-way reminder systems were used to support participation in a wide range of ADLs. The current literature on two-way reminder systems to support ADLs for adults with cognitive impairments focuses on feasibility, stakeholder satisfaction, and system efficacy. Future research should focus on addressing the barriers to accessing and implementing two-way reminder systems and identifying specific client characteristics that would benefit most from using these systems.

Keywords: brain injury, digital health, occupational therapy, activities of daily living, two-way reminder systems

Procedia PDF Downloads 71
13454 Enhancing the Implementation Strategy of Simultaneous Operations (SIMOPS) for the Major Turnaround at Pertamina Plaju Refinery

Authors: Fahrur Rozi, Daniswara Krisna Prabatha, Latief Zulfikar Chusaini

Abstract:

Amidst the backdrop of Pertamina Plaju Refinery, which stands as the oldest and historically less technologically advanced among Pertamina's refineries, lies a unique challenge. Originally integrating facilities established by Shell in 1904 and Stanvac (originally Standard Oil) in 1926, the primary challenge at Plaju Refinery does not solely revolve around complexity; instead, it lies in ensuring reliability, considering its operational history of over a century. After more than a century of existence, Plaju Refinery has never undergone a comprehensive major turnaround encompassing all its units. The usual practice involves partial turnarounds that are sequentially conducted across its primary, secondary, and tertiary units (utilities and offsite). However, a significant shift is on the horizon. In the fourth quarter of 2023, the refinery embarks on its first-ever major turnaround since its establishment. This decision was driven by the alignment of maintenance timelines across various units. Plaju Refinery's major turnaround was scheduled for October-November 2023, spanning 45 calendar days, with the objective of enhancing the operational reliability of all refinery units. The extensive job list for this turnaround encompasses 1583 tasks across 18 units/areas, involving approximately 9000 contracted workers. In this context, the strategy of Simultaneous Operations (SIMOPS) execution emerges as a pivotal tool to optimize time efficiency and ensure safety. A Hazard Effect Management Process (HEMP) has been employed to assess the risk ratings of each task within the turnaround. Out of the tasks assessed, 22 are deemed high-risk and necessitate mitigation. The SIMOPS approach serves as a preventive measure against potential incidents. It is noteworthy that every turnaround period at Pertamina Plaju Refinery involves SIMOPS-related tasks. In this context, enhancing the implementation strategy of Simultaneous Operations (SIMOPS) becomes imperative to minimize the occurrence of incidents. At least four improvements have been introduced in the enhancement process for the major turnaround at Refinery Plaju. The first improvement involves conducting systematic risk assessment and potential hazard mitigation studies for SIMOPS tasks before task execution, as opposed to the previous on-site approach. The second improvement includes the completion of SIMOPS Job Mitigation and Work Matrices Sheets, which was often neglected in the past. The third improvement emphasizes raising comprehensive awareness among workers/contractors regarding potential hazards and mitigation strategies for SIMOPS tasks before and during the major turnaround. The final improvement is the introduction of a daily program for inspecting and observing work in progress for SIMOPS tasks. Prior to these improvements, there was no established program for monitoring ongoing activities related to SIMOPS tasks during the turnaround. This study elucidates the steps taken to enhance SIMOPS within Pertamina, drawing from the experiences of Plaju Refinery as a guide. An actual case study from our experience in the operational unit is provided. In conclusion, these efforts are essential for the success of the first-ever major turnaround at Plaju Refinery, with the SIMOPS strategy serving as a central component. Based on these experiences, enhancements have been made to Pertamina's official internal guidelines for executing SIMOPS risk mitigation, benefiting all Pertamina units.

Keywords: process safety management, turn around, oil refinery, risk assessment

Procedia PDF Downloads 64
13453 The Effects of Different Types of Cement on the Permeability of Deep Mixing Columns

Authors: Mojebullah Wahidy, Murat Olgun

Abstract:

In this study, four different types of cement are used to investigate the permeability of DMCs (deep mixing columns) in clay. The clay used in this research is in the kaolin group, and the types of cement are CEM I 42.5 R normal Portland cement, CEM II/A-M (P-L) pozzolan-doped cement, CEM III/A 42.5 N blast furnace slag cement, and DMFC-800 fine-grained Portland cement. Firstly, rheological tests are carried out on every cement, and a 0.9 water/cement ratio is selected as the appropriate ratio. This ratio is used to prepare small-scale DMCs for all types of cement with cement contents of 6%, 9%, 12%, and 15% of the dry weight of the clay. For all types of cement, three samples were prepared at every percentage and cured for 7, 14, and 28 days before permeability testing. Based on the permeability tests of the small-scale DMCs, a 12% cement content was selected for the large-scale DMCs. A total of five large-scale DMCs were prepared using 12% cement and cured for 28 days before permeability testing. The results of the permeability tests show that increasing the cement percentage and the curing time of all DMCs decreases the permeability coefficient (k). Despite variable results across cement ratios and curing times, in general, samples treated with DMFC-800 fine-grained cement have the lowest permeability coefficient. Samples treated with the CEM II and CEM I cement types were the second and third lowest in permeability. The highest permeability coefficient belongs to the samples treated with the CEM III cement type.
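
The abstract does not state which laboratory test was used to obtain k; assuming a falling-head test, which is typical for low-permeability cement-treated clays, the coefficient follows the standard relation k = (a L) / (A t) ln(h1/h2). The sketch below applies it with hypothetical apparatus dimensions and head readings.

```python
# Permeability coefficient from a falling-head test.
# (Assumption: the test type and all dimensions below are hypothetical; the
# paper does not specify them.)
import math

def falling_head_k(a_cm2, A_cm2, L_cm, h1_cm, h2_cm, t_s):
    """k = (a*L)/(A*t) * ln(h1/h2), in cm/s."""
    return (a_cm2 * L_cm) / (A_cm2 * t_s) * math.log(h1_cm / h2_cm)

k = falling_head_k(a_cm2=0.8, A_cm2=78.5, L_cm=10.0,
                   h1_cm=100.0, h2_cm=60.0, t_s=86400)   # one-day reading
print(f"k = {k:.2e} cm/s")
```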

Keywords: deep mixing column, rheological test, DMFC-800, permeability test

Procedia PDF Downloads 67
13452 Design of Aesthetic Acoustic Metamaterials Window Panel Based on Sierpiński Fractal Triangle for Sound-silencing with Free Airflow

Authors: Sanjeet Kumar Singh, Shanatanu Bhattacharaya

Abstract:

The design of a high-efficiency, low-frequency (<1000 Hz) soundproof window or wall absorber that is transparent to airflow is presented. Due to the massive rise in human population and modernization, environmental noise has risen significantly worldwide. Prolonged noise exposure can cause severe physiological and psychological symptoms like nausea, headaches, fatigue, and insomnia. There has been continuous growth in building construction and infrastructure such as offices, bus stops, and airports due to the growing urban population. Generally, a ventilated window is used to bring fresh air into the room, but unwanted noise comes along with it. Researchers have used traditional approaches such as noise barrier mats in front of the window, or have designed the entire window using sound-absorbing materials. However, this solution is not aesthetically pleasing, and at the same time it is heavy and not adequate for low-frequency noise shielding. To address this challenge, we design a transparent hexagonal panel based on the Sierpiński fractal triangle, which is aesthetically pleasing, demonstrates a normal-incidence sound absorption coefficient of more than 0.96 around 700 Hz and a transmission loss of around 23 dB, while maintaining air circulation through the triangular cutouts. Next, we present a concept for fabricating large acoustic panels for large-scale applications, which helps suppress urban noise pollution.
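
For orientation, the two reported quantities follow the standard acoustic relations: the normal-incidence absorption coefficient is alpha = 1 - |R|^2 for a pressure reflection coefficient R, and the transmission loss is TL = 10 log10(1/tau) for a power transmission coefficient tau. The complex reflection value and transmitted power fraction below are hypothetical, chosen only to reproduce numbers of the reported magnitude.

```python
# Relations behind the reported figures (hypothetical measured values).
import numpy as np

def absorption_coefficient(R):
    """alpha = 1 - |R|^2 for a normal-incidence pressure reflection coefficient R."""
    return 1.0 - np.abs(R) ** 2

def transmission_loss_db(tau):
    """TL = 10 log10(1/tau) for a power transmission coefficient tau."""
    return 10.0 * np.log10(1.0 / tau)

R_700Hz = 0.2 * np.exp(1j * np.pi / 3)      # hypothetical value around 700 Hz
tau_700Hz = 0.005                           # hypothetical transmitted power fraction
print(absorption_coefficient(R_700Hz))      # ~0.96
print(transmission_loss_db(tau_700Hz))      # ~23 dB
```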

Keywords: acoustic metamaterials, noise, functional materials, ventilated

Procedia PDF Downloads 72
13451 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the nature of the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-value logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and (3) the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in the structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches, or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors in the framework environment of the knowledge structure, and therefore in knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives. Ultimately, we discuss some implications of our findings, which suggest that, even though scientific progress may not have reached its peak, and human scientific evolution may not yet be at a point where it is possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 110
13450 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

The instantaneous and spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure the received signal strength (RSS). The measurements using RSSI can be improved significantly by improving the existing methodologies related to RSSI. Therefore, the current paper focuses on proposing an improved method using trilateration for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used, along with the development of supporting software. Experiments were then conducted to obtain surface plots that showed the signal interference and other environmental effects. Finally, the results obtained show the surface plots for all Bluetooth modules used, along with the strong and weak points depicted according to the color codes in red, yellow, and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also to highlight how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
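
A minimal sketch of RSSI-based trilateration is shown below: distances are estimated from RSSI with the log-distance path-loss model, and the position is solved by linearized least squares. The reference power P0, path-loss exponent n, anchor positions, and RSSI readings are hypothetical calibration values, not the paper's.

```python
# RSSI -> distance via the log-distance path-loss model, then position by
# linearized least-squares trilateration. (Assumption: P0, n, anchor positions
# and RSSI readings below are hypothetical.)
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-59.0, n=2.2):
    """d = 10^((P0 - RSSI) / (10 n)), with P0 the RSSI measured at 1 m."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

def trilaterate(anchors, distances):
    """Solve for (x, y) from >= 3 anchor positions and estimated distances."""
    (x1, y1), d1 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]   # Bluetooth module positions (m)
rssi = [-65.0, -72.0, -70.0]                      # measured RSSI values (dBm)
print(trilaterate(anchors, [rssi_to_distance(r) for r in rssi]))
```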

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired

Procedia PDF Downloads 129
13449 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications

Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu

Abstract:

Electron beam rapid manufacturing (EBRM) or selective laser melting (SLM) is an additive manufacturing process that uses 3D CAD data as a digital information source and energy in the form of a high-power laser beam or electron beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, the phase transformation, the corrosion behavior, and the biocompatibility of Ti-6Al-4V produced by EBRM, SLM, and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM, and forging according to AMS4999, GB/T228, and ISO 10993. The mechanical properties were analyzed with a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosion behavior was analyzed by electrochemical methods. The biocompatibility was analyzed by co-culturing with mesenchymal stem cells, using scanning electron microscopy (SEM) and an alkaline phosphatase assay (ALP) to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase transformation, corrosion behavior, and biocompatibility of the Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged alloy and meet the mechanical property requirements of the AMS4999 standard, with an α-phase microstructure for the EBRM product in contrast to the α'-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: The properties of the Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques can meet the medical standards considered in this study, but further studies should be carried out before these processes are applied in clinical practice.

Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)

Procedia PDF Downloads 452
13448 Jejunostomy and Protective Ileostomy in a Patient with Massive Necrotizing Enterocolitis: A Case Report

Authors: Rafael Ricieri, Rogerio Barros

Abstract:

Objective: This study reports a case of massive necrotizing enterocolitis in a six-month-old patient, requiring an ileostomy and protective jejunostomy as a damage control measure in the first exploratory laparotomy for massive enterocolitis without a previous diagnosis. Methods: This study is a case report of the successful creation and closure of a protective jejunostomy. The low number of publications on this staged and risky surgical measure encouraged the team to study the indication for, and especially the correct timing of, the closure of the patient's protective jejunostomy. The main study instrument was the six-month-old patient's medical record. Results: Based on the observation of the case described, the timing of the closure of the described procedure (protective jejunostomy) varies according to the degree of compromise of the patient's health status and is individual to each patient. Early closure, or failure to close, can create problems for the patient, since several complications can result from this closure, such as new intestinal perforations and hydroelectrolyte disturbances. Despite the risk of new perforations, we suggest closing the protective jejunostomy around the 14th day after the procedure, keeping the patient on broad-spectrum antibiotic therapy and absolute fasting, thus reducing the chances of new intestinal perforations. In association with the closure of the jejunostomy, a gastric tube for decompression is necessary, together with care in an intensive care unit and electrolyte replacement to maintain the stability of the patient.

Keywords: jejunostomy, ileostomy, enterocolitis, pediatric surgery, gastric surgery

Procedia PDF Downloads 79
13447 Strategies for Good Governance during Crisis in Higher Education

Authors: Naziema B. Jappie

Abstract:

Over the last 23 years, leaders in government, political parties, and universities have spent much time identifying and discussing various gaps in the system that impact systematically on students, especially those from historically Black communities. Equity and access to higher education were two critical aspects that featured in achieving the transformation goals, together with a funding model for those previously disadvantaged. Free education was not a feasible option for the government. Institutional leaders in higher education face many demands on their time and resources. Often, crisis management planning, or consideration of being proactive and preventative, is not a standing agenda item. With many issues being priorities in academia, people become complacent and think that a crisis may not affect them, or that they will cross the bridge when they get to it. Historically, South Africa has proven to be a country of militancy, strikes, and protests in most industries, some leading to disastrous outcomes. Higher education was no different between October 2015 and late 2016, when the #RhodesMustFall protest, which morphed into the #FeesMustFall protest, challenged the establishment, changed the social fabric of universities, and brought the sector to a standstill. Some institutional leaders and administrators were better at handling unexpected, high-consequence situations than others. For the most part, crisis leadership is viewed as a situation more than a style of leadership and is usually characterized by crisis management. The objective of this paper is to show how institutions managed catastrophes of disastrous proportions, down to the unexpected incidents of 2015/2016. The content draws on the vast crisis management experience of the presenter and includes the occurrences of the recent protests, giving an event timeline. Responses from interviews with institutional leaders and administrators, as well as students, ensure first-hand information on their experiences and the outcomes. Students have tasted the power of organized action, and they demand immediate change; if not, the revolt will continue. This paper examines the approaches that guided institutional leaders and their crisis teams and the sector's crisis response. It further expands on whether the solutions effectively changed governance in higher education or merely minimized the need for more protests. The conclusion gives an insight into the future of higher education in South Africa from a leadership perspective.

Keywords: crisis, governance, intervention, leadership, strategies, protests

Procedia PDF Downloads 144
13446 Information Security Risk Management in IT-Based Process Virtualization: A Methodological Design Based on Action Research

Authors: Jefferson Camacho Mejía, Jenny Paola Forero Pachón, Luis Carlos Gómez Flórez

Abstract:

Action research is a qualitative research methodology, which leads the researcher to delve into the problems of a community in order to understand its needs in depth and, finally, to propose actions that lead to a change of social paradigm. Although this methodology had its beginnings in the human sciences, it has attracted increasing interest and acceptance in the field of information systems research since the 1990s. The countless possibilities offered nowadays by the use of Information Technologies (IT) in the development of different socio-economic activities have meant a change of social paradigm and the emergence of the so-called information and knowledge society. Accordingly, governments, large corporations, small entrepreneurs, and, in general, organizations of all kinds are using IT to virtualize their processes, taking them from the physical environment to the digital environment. However, there is a potential risk for organizations related to exposing valuable information without an appropriate framework for protecting it. This paper shows progress in the development of a methodological design to manage the information security risks associated with IT-based process virtualization, by applying the principles of the action research methodology, and it is the result of a systematic review of the scientific literature. This design consists of seven fundamental stages, distributed across the three stages described in the action research methodology: 1) observe, 2) analyze, and 3) take action. Finally, this paper aims to offer an alternative tool to traditional information security management methodologies, with a view to being applied specifically in the planning stage of IT-based process virtualization in order to foresee risks and to establish security controls before formulating IT solutions in any type of organization.

Keywords: action research, information security, information technology, methodological design, process virtualization, risk management

Procedia PDF Downloads 161
13445 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining the outputs of a pair of meta-classifiers (level 1) from the tree-based and Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. The accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as for the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
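
A simplified scikit-learn analogue of this multi-level arrangement is sketched below: level-0 pairs of same-family base classifiers feed level-1 meta-classifiers, which a level-2 decision-maker then combines. The specific estimators, final estimators, and dataset are illustrative choices, not the configuration evaluated in the paper.

```python
# Simplified multi-level Stacking sketch (illustrative configuration only).
from sklearn.datasets import load_iris
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              ExtraTreesClassifier)
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Level 1: one meta-classifier per family, each stacking a pair of level-0 models.
tree_meta = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
bayes_meta = StackingClassifier(
    estimators=[("gnb", GaussianNB()), ("mnb", MultinomialNB())],
    final_estimator=LogisticRegression(max_iter=1000))

# Level 2: final decision-maker combining the two level-1 meta-classifiers.
level2 = StackingClassifier(
    estimators=[("tree_meta", tree_meta), ("bayes_meta", bayes_meta)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)   # a small multi-class dataset
print(cross_val_score(level2, X, y, cv=5).mean())
```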

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 263
13444 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses

Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer

Abstract:

The knowledge of residual stresses for welded bridge components is essential to determine the effect of the residual stresses on the fatigue life behavior. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses for the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck for different time steps during the welding process. Twin wire submerged arc welding is used to construct the orthotropic plate. A double ellipsoidal volume heat source model is used to describe the heat flow through a material for a moving heat source. The heat input is used to determine the heat flux which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated for different time steps to simulate the passage of the welding torch with the considered welding speed. This results in a time-dependent heat flux that is applied as a thermal load. Thermal material behavior is specified by assigning the properties of the material as a function of the high temperatures during welding. Isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced in the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the calculation of the temperatures introduced during the welding process in the thermal analysis, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate by using tack welds. These tack welds are implemented in the model. The deck plate is allowed to expand freely in an upwards direction while it rests on a firm and flat surface. This behavior is modelled by using grounded springs. Furthermore, symmetry points and lines are used to prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used. The temperatures calculated during the thermal analysis are introduced in the mechanical analysis as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection which is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses. The distribution of the residual stresses of the orthotropic bridge deck is compared with results from literature. Literature proposes uniform tensile yield stresses in the weld, while the finite element modelling showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis results in a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life behavior of welded bridge components can be studied.
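
The double ellipsoidal volume heat source named above is commonly written in the Goldak form, with separate front and rear ellipsoids sharing the heat input. The sketch below evaluates that standard flux expression; the ellipsoid semi-axes, heat-input split, and welding parameters are placeholder values, not those used for the bridge deck model.

```python
# Standard double-ellipsoidal (Goldak) volumetric heat source.
# (Assumption: a, b, cf, cr, ff/fr and the welding parameters are placeholders.)
import numpy as np

def goldak_flux(x, y, z, Q, a, b, cf, cr, ff=0.6, fr=1.4):
    """Heat flux density (W/m^3) in the moving-torch frame; x along the weld."""
    c, f = (cf, ff) if x >= 0 else (cr, fr)      # front / rear ellipsoid
    coeff = 6 * np.sqrt(3) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coeff * np.exp(-3 * (x**2 / c**2 + y**2 / a**2 + z**2 / b**2))

# Effective heat input Q from arc efficiency, voltage and current (hypothetical).
eta, U, I = 0.9, 30.0, 600.0
Q = eta * U * I                                  # W
print(goldak_flux(0.002, 0.0, 0.001, Q, a=0.006, b=0.004, cf=0.006, cr=0.012))
```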

Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation

Procedia PDF Downloads 169
13443 Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring

Authors: Aftab Khan, Ashfaq Khan

Abstract:

The research paper focuses on an interesting challenge faced in Blind Image Deblurring (BID). It relates to the estimation of arbitrarily shaped or non-parametric Point Spread Functions (PSFs) of motion blur caused by camera handshake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This research work introduces a novel blind deblurring scheme devised for deblurring images corrupted by arbitrarily shaped PSFs. It is based on a Genetic Algorithm (GA) and utilises the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single-image motion deblurring schemes as benchmarks. Validation has been carried out on various blurred images. Results for both benchmark and real images are presented. No-reference image quality measures were used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in close vicinity of the original blurring functions.

Keywords: blind deconvolution, blind image deblurring, genetic algorithm, image restoration, image quality measures

Procedia PDF Downloads 440
13442 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational Drug Design is often based on Molecular Dynamics simulations of molecular systems. Molecular Dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to speed up simulations, the molecular dynamics trajectories have to be "steered" out of local minimizers of the potential energy surface – the so-called metastabilities – of the molecular system. Increasing the kinetic energy (temperature) is one possibility to accelerate simulated processes. However, with temperature the entropy of the molecular system increases, too. This kind of "steering" is not directed enough to steer the molecule out of the minimum toward the saddle point. In this article, we present a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour based on a steered simulation, we propose to use extrapolation methods. In the end we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.
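
A one-dimensional illustration of how reshaping a potential energy surface by convolution lowers the barrier between metastable states is sketched below. A toy double-well potential and a Gaussian kernel stand in for the paper's construction; the grid and kernel width are arbitrary.

```python
# 1D illustration: convolving a double-well potential with a Gaussian kernel
# lowers the barrier between the two metastable states.
# (Assumption: toy potential and kernel, not the paper's construction.)
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-2.0, 2.0, 2001)
V = (x**2 - 1.0) ** 2                      # double well with minima at x = +/-1

sigma_grid_points = 150                    # kernel width in grid points (~0.3 in x)
V_smooth = gaussian_filter1d(V, sigma=sigma_grid_points, mode="nearest")

barrier = V[np.argmin(np.abs(x))] - V.min()
barrier_smooth = V_smooth[np.argmin(np.abs(x))] - V_smooth.min()
print(f"original barrier: {barrier:.3f}, smoothed barrier: {barrier_smooth:.3f}")
```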

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 323
13441 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks

Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó

Abstract:

One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for precaution. Here we present a new tool that relies on the structural properties of citation networks in order to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested with different benchmark and real-world networks. Then, by the use of this algorithm, the mechanism of the scientometric indicator normalization process is shown for a few indicators, such as the citation number, the P-index, and a local version of the PageRank indicator. The fat-tail trend of the article indicator distribution enables us to successfully perform the indicator normalization process.
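
The normalization mechanism can be sketched in a few lines: detect communities on the citation network and divide each paper's indicator by the average of its community. The community detection routine below (greedy modularity on a small random graph) is only a stand-in for the paper's local cluster detection algorithm and data.

```python
# Sketch of field normalization on a citation network.
# (Assumption: a toy random graph and a generic community detection routine
# replace the real data and the paper's local cluster detection algorithm.)
import networkx as nx

G = nx.gnp_random_graph(300, 0.02, seed=1, directed=True)   # toy citation network
citations = {n: G.in_degree(n) for n in G}                   # citation counts

# Community detection on the undirected projection of the network.
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())

normalized = {}
for community in communities:
    mean_c = sum(citations[n] for n in community) / len(community)
    for n in community:
        normalized[n] = citations[n] / mean_c if mean_c > 0 else 0.0

print(max(normalized.values()))
```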

Keywords: citation networks, cross-field normalization, local cluster detection, scientometric indicators

Procedia PDF Downloads 197
13440 Energy Management Method in DC Microgrid Based on the Equivalent Hydrogen Consumption Minimum Strategy

Authors: Ying Han, Weirong Chen, Qi Li

Abstract:

An energy management method based on the equivalent hydrogen consumption minimum strategy is proposed in this paper for a direct-current (DC) microgrid consisting of photovoltaic cells, fuel cells, energy storage devices, converters, and DC loads. The rational allocation of the fuel cells and battery devices is achieved by adopting the equivalent minimum hydrogen consumption strategy, with full use of the power generated by the photovoltaic cells. Considering the balance of the battery’s state of charge (SOC), the optimal power of the battery under different SOC conditions is obtained and the reference output power of the fuel cell is calculated. Then, a droop control method based on a time-varying droop coefficient is proposed to realize automatic charge and discharge control of the battery, balance the system power, and maintain the bus voltage. The proposed control strategy is verified on an RT-LAB hardware-in-the-loop simulation platform. The simulation results show that the designed control algorithm can realize the rational allocation of DC microgrid energy and improve the stability of the system.
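
A schematic of droop control with a SOC-dependent coefficient is given below: the droop gain is made larger as the SOC approaches its limits so that the battery's contribution is automatically curtailed. The functional form, voltage level, and gain limits are illustrative assumptions, not the coefficient law proposed in the paper.

```python
# Schematic droop control with a time-varying (SOC-dependent) coefficient for
# the battery converter. (Assumption: functional form and numbers are
# illustrative only, not the paper's.)

V_NOM = 400.0                 # nominal DC bus voltage (V), hypothetical
K_MIN, K_MAX = 0.5, 5.0       # droop gain limits (V/A), hypothetical

def droop_coefficient(soc, soc_min=0.2, soc_max=0.9):
    """Larger droop gain (weaker battery response) as SOC approaches its limits."""
    margin = min(soc - soc_min, soc_max - soc) / ((soc_max - soc_min) / 2)
    margin = max(0.0, min(1.0, margin))      # distance from nearest limit, in [0, 1]
    return K_MAX - (K_MAX - K_MIN) * margin

def battery_voltage_reference(i_out, soc):
    """V_ref = V_nom - k(SOC) * I_out (discharge current positive)."""
    return V_NOM - droop_coefficient(soc) * i_out

for soc in (0.25, 0.55, 0.85):
    print(soc, round(battery_voltage_reference(i_out=10.0, soc=soc), 1))
```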

Keywords: DC microgrid, equivalent minimum hydrogen consumption strategy, energy management, time-varying droop coefficient, droop control

Procedia PDF Downloads 299
13439 Computational Aided Approach for Strut and Tie Model for Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available. Plate models of massive areas, however, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative for the automatic post-processing of plates, optimal modelling and a significant improvement of the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains highly dependent on the judgement of the calculating engineer. The tool developed here will support engineers in their choice of structure. The implemented method consists of defining a ground structure based on the principal stresses resulting from an elastic analysis of the structure and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of connecting struts and ties, consistent with the cases encountered in the literature. Further development of the tool will then make it possible to adapt the obtained lattice to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, to satisfy constructability constraints, a final reinforcement layout with an orthogonal arrangement and regular spacing will be implemented in the tool.
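
As a toy illustration of the fully stressed design step, and not of the developed tool itself, the following sketch resizes the members of a tiny invented ground structure, three parallel bars sharing a load, so that each member tends towards its allowable stress. Members that cannot be stressed efficiently shrink to a minimum area and are effectively removed, which is how a lattice of struts and ties emerges from a ground structure; all numbers are made up.

```python
import numpy as np

# Toy ground structure: three parallel bars of different lengths L carrying a
# common load P through a rigid plate, so each bar attracts force in
# proportion to its axial stiffness A/L. This stands in for the elastic FE
# analysis that a real tool would rerun at every iteration.
P = 300.0                             # kN, applied load
L = np.array([2.0, 3.0, 4.0])         # m, bar lengths
A = np.array([20.0, 20.0, 20.0])      # cm^2, initial member areas
SIGMA_ALLOW = 10.0                    # kN/cm^2, allowable stress
A_MIN = 0.05                          # cm^2, members at this floor count as removed

def bar_forces(areas):
    stiffness = areas / L
    return P * stiffness / stiffness.sum()

# Fully stressed design iteration: resize every member so that, for its current
# force, it would sit exactly at the allowable stress (with a minimum-area floor).
for _ in range(20):
    N = bar_forces(A)
    A = np.maximum(np.abs(N) / SIGMA_ALLOW, A_MIN)

print("final areas (cm^2):       ", np.round(A, 3))
print("final stresses (kN/cm^2): ", np.round(bar_forces(A) / A, 3))
```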

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 139
13438 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis

Authors: Serdal Pamuk, Irem Cay

Abstract:

Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts that transform both tumor-derived and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzymes, which in turn degrade the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension along the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions that move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question, we estimate the thresholds that activate the transport equations in the capillary by means of a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte and macrophage cells begin to move into the ECM and angiogenesis is initiated. We believe that our results shed light on the mechanisms of cell migration, which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time behaviour of these three cell types and find that they tend to the transition probability functions as time evolves. We provide numerical solutions which are in good agreement with our theoretical results.
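
Purely as an illustration of the kinetic mechanism described above, and not of the authors' seven-equation model, the following sketch integrates a zero-dimensional caricature in which TAF is converted into proteolytic enzyme by Michaelis-Menten kinetics and the enzyme degrades the basal lamina until a hypothetical activation threshold is reached. All constants and the threshold are made-up values.

```python
# Illustrative 0D caricature of the capillary-side kinetics: receptors convert
# TAF into proteolytic enzyme (Michaelis-Menten), the enzyme degrades the
# basal lamina, and cell movement is "activated" once the lamina is thin enough.
V_MAX, K_M = 1.0, 0.5      # Michaelis-Menten parameters for enzyme production
K_DEG = 2.0                # degradation rate of the basal lamina by the enzyme
F_THRESHOLD = 0.2          # lamina level below which cells can enter the ECM

dt, t_end = 0.01, 10.0
taf, enzyme, lamina = 1.0, 0.0, 1.0
t_activate = None

for step in range(int(t_end / dt)):        # explicit Euler integration
    production = V_MAX * taf / (K_M + taf)  # Michaelis-Menten conversion rate
    taf -= dt * production
    enzyme += dt * production
    lamina -= dt * K_DEG * enzyme * lamina
    if t_activate is None and lamina < F_THRESHOLD:
        t_activate = (step + 1) * dt

msg = f"t = {t_activate:.2f}" if t_activate else "not reached within t_end"
print("cells can enter the ECM at", msg)
```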

Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function

Procedia PDF Downloads 153
13437 Land Suitability Assessment for Vineyards in Afghanistan Based on Physical and Socio-Economic Criteria

Authors: Sara Tokhi Arab, Tariq Salari, Ryozo Noguchi, Tofael Ahamed

Abstract:

Land suitability analysis is essential for table grape cultivation in order to increase its production and productivity under the dry conditions of Afghanistan. In this context, the main aim of this paper was to determine suitable locations for vineyards in Kabul Province of Afghanistan based on satellite remote sensing and a geographical information system (GIS). Landsat 8 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) images and the Shuttle Radar Topography Mission digital elevation model (SRTM DEM) were processed to obtain the normalized difference vegetation index (NDVI), the normalized difference moisture index (NDMI), the land surface temperature (LST) and topographic criteria (elevation, aspect and slope). Moreover, JAXA rainfall data (mm per hour) and soil property information were also used for the physical suitability assessment of vineyards. In addition, socio-economic criteria were collected through field surveys in Kabul Province in order to develop the socio-economic suitability map. Finally, the suitability classes were determined by weighted overlay, after reclassifying each criterion according to AHP (analytic hierarchy process) weights. The results indicated that only 11.1% of the area was highly suitable, 24.8% moderately suitable, 35.7% marginally suitable and 28.4% not physically suitable for grape production. In socio-economic terms, 15.7% was highly suitable, 17.6% moderately suitable, 28.4% marginally suitable and 38.3% not suitable for table grape production in Kabul Province. This research could help decision-makers, growers and other stakeholders conduct precise land assessments by identifying the main limiting factors for table grape production and thereby increase land productivity.
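
As a simplified illustration of the AHP weighting and weighted-overlay steps, with an invented pairwise comparison matrix and tiny toy rasters standing in for the Landsat/SRTM-derived layers, the workflow could look like the following sketch; the criteria, judgements and class values are assumptions, not the study's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty scale) for three criteria:
# NDVI, land surface temperature, slope. The judgements are illustrative only.
pairwise = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP weights from the principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix in Saaty's tables).
lambda_max = np.max(np.real(eigvals))
cr = ((lambda_max - 3) / (3 - 1)) / 0.58
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))

# Weighted overlay of reclassified criterion rasters (suitability scores 1-4,
# 1 = not suitable ... 4 = highly suitable), here tiny toy arrays.
ndvi_class  = np.array([[4, 3], [2, 1]])
lst_class   = np.array([[3, 3], [2, 2]])
slope_class = np.array([[4, 2], [3, 1]])
suitability = (weights[0] * ndvi_class +
               weights[1] * lst_class +
               weights[2] * slope_class)
print("suitability scores:\n", np.round(suitability, 2))
```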

Keywords: vineyards, land physical suitability, socio-economic suitability, AHP

Procedia PDF Downloads 166
13436 Numerical Analysis of Shear Crack Propagation in a Concrete Beam without Transverse Reinforcement

Authors: G. A. Rombach, A. Faron

Abstract:

Crack formation and growth in reinforced concrete members are, in many cases, the cause of the collapse of technical structures. Such serious failures impair structural behavior and can also endanger property and persons. An intensive investigation of crack propagation is therefore indispensable. Numerical methods are being developed to analyze crack growth in an element and to detect fracture failure at an early stage. For reinforced concrete components, however, further research and development are required in the analysis of shear cracks. This paper presents numerical simulations and continuum mechanical modeling of bending shear crack propagation in a three-dimensional reinforced concrete beam without transverse reinforcement. The analysis provides a further understanding of crack growth and of the redistribution of inner forces in concrete members. As a numerical method to map discrete cracks, the extended finite element method (XFEM) is applied. The crack propagation is compared with a smeared crack approach using concrete damage plasticity. For validation, the crack patterns of real experiments are compared with the results of the different finite element models. The evaluation is based on single-span beams under bending. With this analysis, it is possible to predict the fracture behavior of concrete members.
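
The enrichment idea behind XFEM can be illustrated in one dimension. The sketch below evaluates the Heaviside-enriched displacement approximation u(x) = sum_i N_i(x) u_i + sum_j N_j(x) H(x) a_j for a single two-node element cut by a crack and shows the resulting displacement jump. The nodal and enrichment values are made-up numbers, not results of the beam simulations described here; in a real analysis they come from solving the enriched system of equations.

```python
import numpy as np

# Single two-node 1D element of length L cut by a crack at x_c.
L, x_c = 1.0, 0.5
u = np.array([0.0, 1.0e-3])     # standard nodal displacements (m), illustrative
a = np.array([2.0e-4, 2.0e-4])  # Heaviside enrichment dofs (m), illustrative

def shape_functions(x):
    """Linear shape functions of the two-node element."""
    return np.array([1.0 - x / L, x / L])

def heaviside(x):
    """Generalized Heaviside (sign) function across the crack."""
    return 1.0 if x > x_c else -1.0

def displacement(x):
    """XFEM approximation: standard part plus Heaviside-enriched part."""
    N = shape_functions(x)
    return N @ u + heaviside(x) * (N @ a)

# The enrichment produces a displacement jump (crack opening) at x_c.
eps = 1e-9
jump = displacement(x_c + eps) - displacement(x_c - eps)
print(f"crack opening at x = {x_c}: {jump:.2e} m")
```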

Keywords: concrete damage plasticity, crack propagation, extended finite element method, fracture mechanics

Procedia PDF Downloads 116