Search results for: convolution code
1053 Artificial Intelligence in the Design of a Retaining Structure
Authors: Kelvin Lo
Abstract:
Numerical modelling is now common in geotechnical engineering but remains sophisticated: many advanced input settings and considerable computational effort are required to optimize a design and reduce construction cost. Optimization usually requires large numerical models, and if it is conducted manually, human error can have potentially dangerous consequences, while the time spent on input and on extracting data from the output is significant. This paper presents an automation process applied to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code controls the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code updates the geological strata and excavation depth under groundwater flow conditions in each 20 m section. It automatically conducts trial and error to determine the required pile length and the use of props needed to achieve the required factor of safety and target displacement. Once the bending moment in the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is activated. The results showed that the approach saves time, increases efficiency, lowers design cost, and replaces manual labor, minimizing error.
Keywords: automation, numerical modelling, Python, retaining structures
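The trial-and-error loop described above can be illustrated with a short, hypothetical Python sketch. The function run_model() and all numerical limits are placeholders standing in for the Plaxis 2D scripting calls and project criteria; they are assumptions for illustration, not the authors' actual code.

```python
def run_model(chainage, pile_length, pile_size, use_props):
    """Placeholder for building and solving the Plaxis 2D model (dummy numbers only)."""
    return {
        "fos": 1.0 + 0.03 * pile_length + (0.3 if use_props else 0.0),
        "displacement": max(0.05 - 0.002 * pile_length - (0.02 if use_props else 0.0), 0.001),
        "moment": 500.0 - 10.0 * pile_size,
        "moment_capacity": 480.0 + 50.0 * pile_size,
    }

def design_section(chainage, max_embedment=30.0, target_fos=1.4, target_disp=0.025):
    """Trial-and-error design of one 20 m chainage section."""
    pile_length, pile_size, use_props = 10.0, 0, False
    for _ in range(100):                                   # safety cap on iterations
        res = run_model(chainage, pile_length, pile_size, use_props)
        if res["moment"] > res["moment_capacity"]:
            pile_size += 1                                 # capacity exceeded: larger pile section
        elif res["fos"] < target_fos or res["displacement"] > target_disp:
            if pile_length < max_embedment:
                pile_length += 1.0                         # extend embedment and re-run
            else:
                use_props = True                           # embedment limit reached: add props
        else:
            return pile_length, pile_size, use_props       # design satisfies all criteria
    raise RuntimeError("no acceptable design found")

for chainage in range(0, 201, 20):                         # every 20 m along the 200 m tunnel
    print(chainage, design_section(chainage))
```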
Procedia PDF Downloads 51
1052 Finite Difference Modelling of Temperature Distribution around Fire Generated Heat Source in an Enclosure
Authors: A. A. Dare, E. U. Iniegbedion
Abstract:
Industrial furnaces generally involve enclosures of fire, typically initiated by the combustion of gases. The fire produces a temperature distribution inside the enclosure, and a proper understanding of the temperature and velocity distributions within the enclosure is often required for optimal design and use of the furnace. This study was therefore directed at numerical modeling of the temperature distribution inside an enclosure typical of a furnace. A mathematical model was developed from the conservation of mass, momentum, and energy. The stream function-vorticity formulation of the governing equations was solved by an alternating direction implicit (ADI) finite difference technique. The finite difference formulation obtained was then developed into a computer code, which was used to determine the temperature, velocities, stream function, and vorticity. The effect of wall heat conduction was also considered by assuming one-dimensional heat flow through the wall. The computer code (a MATLAB program) was used to compute the aforementioned variables. The results showed that the transient temperature distribution assumed a uniform profile that became more chaotic with increasing time. The vertical velocity showed increasingly turbulent behavior with time, while the horizontal velocity assumed a decreasing, laminar behavior with time. All of these behaviours have also been reported in the literature. The developed model has provided an understanding of the heat transfer process in an industrial furnace.
Keywords: heat source, modelling, enclosure, furnace
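For reference, the stream function-vorticity formulation referred to above is commonly written as follows for a two-dimensional incompressible buoyant flow under the Boussinesq approximation (the exact source terms used by the authors are not stated in the abstract):

\[
\begin{aligned}
&\frac{\partial \omega}{\partial t} + u\,\frac{\partial \omega}{\partial x} + v\,\frac{\partial \omega}{\partial y}
 = \nu \nabla^{2}\omega + g\beta\,\frac{\partial T}{\partial x}, \\
&\nabla^{2}\psi = -\,\omega, \qquad u = \frac{\partial \psi}{\partial y}, \qquad v = -\frac{\partial \psi}{\partial x}, \\
&\frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x} + v\,\frac{\partial T}{\partial y} = \alpha \nabla^{2} T ,
\end{aligned}
\]

where \(\psi\) is the stream function, \(\omega\) the vorticity, \(T\) the temperature, and \(\nu\), \(\alpha\), \(\beta\) the kinematic viscosity, thermal diffusivity and thermal expansion coefficient. The ADI scheme advances each equation implicitly in one coordinate direction at a time over each half time step.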
Procedia PDF Downloads 255
1051 The Effect of a Saturated Kink on the Dynamics of Tungsten Impurities in the Plasma Core
Authors: H. E. Ferrari, R. Farengo, C. F. Clauser
Abstract:
Tungsten (W) will be used in ITER as one of the plasma facing components (PFCs). The W could migrate to the plasma center. This could have a potentially deleterious effect on plasma confinement. Electron cyclotron resonance heating (ECRH) can be used to prevent W accumulation. We simulated a series of H mode discharges in ASDEX U with PFC containing W, where central ECRH was used to prevent W accumulation in the plasma center. The experiments showed that the W density profiles were flat after a sawtooth crash, and become hollow in between sawtooth crashes when ECRH has been applied. It was also observed that a saturated kink mode was active in these conditions. We studied the effect of saturated kink like instabilities on the redistribution of W impurities. The kink was modeled as the sum of a simple analytical equilibrium (large aspect ratio, circular cross section) plus the perturbation produced by the kink. A numerical code that follows the exact trajectories of the impurity ions in the total fields and includes collisions was employed. The code is written in Cuda C and runs in Graphical Processing Units (GPUs), allowing simulations with a large number of particles with modest resources. Our simulations show that when the W ions have a thermal velocity distribution, the kink has no effect on the W density. When we consider the plasma rotation, the kink can affect the W density. When the average passing frequency of the W particles is similar to the frequency of the kink mode, the expulsion of W ions from the plasma core is maximum, and the W density shows a hollow structure. This could have implications for the mitigation of W accumulation.Keywords: impurity transport, kink instability, tungsten accumulation, tungsten dynamics
Procedia PDF Downloads 171
1050 The Factors Constitute the Interaction between Teachers and Students: An Empirical Study at the Notion of Framing
Authors: Tien-Hui Chiang
Abstract:
The code theory proposed by Basil Bernstein indicates that framing can be viewed as the core element constituting the phenomenon of cultural reproduction, because it regulates the transmission of pedagogical information. Strong framing increases the social relation boundary between a teacher and pupils, which obstructs information transmission, so that in order to improve underachieving students' academic performance, teachers need to reduce the strength of framing. Weak framing enables them to transform academic knowledge into commonsense knowledge expressed in everyday language. This study posits that most teachers deliver strong framing because their beliefs are mainly confined within instrumental rationality, which deprives them of a critical mind. This situation could lead them to view the normal-distribution bell curve of students' academic performance as a natural outcome. In order to examine the interplay between framing, instrumental rationality, and pedagogical action, questionnaires were completed by over 5,000 primary school teachers in Henan province, China, selected by stratified sampling. The statistical results show that most teachers employed psychological concepts to measure students' academic performance and that, in turn, educational inequity was legitimized as a natural outcome of the efficiency-led approach. Such efficiency-led mindsets made them act as agents practicing the mechanism of social control and, in turn, sustaining the phenomenon of cultural reproduction.
Keywords: code, cultural reproduction, framing, instrumental rationality, social relation and interaction
Procedia PDF Downloads 151
1049 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence
Authors: Gergely G. Karacsony
Abstract:
Fundamental rights are the result of thousand years’ progress of legislation, adjudication and legal practice. They serve as the framework of peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is the development of the very recent past, being one of the most important prospects to the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell from actions performed by flesh-and-blood people. In a world, where human-robot interactions are more and more common, a new framework of peaceful cohabitation is to be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people’s rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence is to be found or created. We should consider the issue, whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we will examine the applicability of fundamental rights to human-robot interactions as well as to the actions of artificial intelligence performed without human interaction whatsoever. Robot ethics has been a topic of discussion and debate of philosophy, ethics, computing, legal sciences and science fiction writing long before the first functional artificial intelligence has been introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g. data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferred to the detailed regulation of specific issues. We argue that human rights as recognized worldwide are able to be adapted to serve as a guideline and a common basis of coexistence of robots and humans. This solution has many virtues: people don’t need to adjust to a completely unknown set of standards, the system has proved itself to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper we will examine the system of fundamental rights (as defined in the most widely accepted source, the 1966 UN Convention on Human Rights), and try to adapt each individual right to the actions of artificial intelligence actors; in each case we will examine the possible effects on the legal system and the society of such an approach, finally we also examine its effect on the IT industry.Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction
Procedia PDF Downloads 244
1048 Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network
Authors: Cheng Fang, Lingwei Quan, Cunyue Lu
Abstract:
Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. This is a computer vision task of great significance for human motion recognition, behavior understanding, and scene analysis. There has been remarkable progress on human pose estimation in recent years; however, more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling and convolution network structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization and produces human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks.
Keywords: computer vision, pose estimation, pose tracking, Siamese network
Procedia PDF Downloads 153
1047 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as the background and the foreground of an object. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research and few algorithms exist for color images. Most image segmentation algorithms or techniques vary based on the input data and the application, and nearly all of them are unsuitable for noisy environments. Much of the existing work uses the Markov Random Field (MRF), which is computationally demanding but said to be robust to noise. In recent years, image segmentation has been applied to tackle problems such as easier processing of an image, interpretation of an image's contents, and easier analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in past years. The techniques include neural networks (CNN), edge-based techniques, region growing, clustering, and thresholding techniques, among others. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications and potential future developments around image segmentation. This review concludes that no single technique is perfectly suitable for segmenting all types of images, but the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
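As a simple illustration of the thresholding and clustering families of techniques surveyed above, the sketch below applies Otsu thresholding and k-means color clustering with OpenCV. The file names and the cluster count are arbitrary examples, not taken from the review.

```python
import cv2
import numpy as np

img = cv2.imread("scan.png")                        # example input image (placeholder name)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold-based segmentation: Otsu selects the global threshold automatically.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Clustering-based segmentation: k-means on pixel colors (k = 3 chosen arbitrarily).
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)

cv2.imwrite("mask.png", mask)                       # binary foreground/background mask
cv2.imwrite("segmented.png", segmented)             # image quantized into 3 color regions
```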
Procedia PDF Downloads 96
1046 Stability of Concrete Moment Resisting Frames in View of Current Codes Requirements
Authors: Mahmoud A. Mahmoud, Ashraf Osman
Abstract:
In this study, the different approaches currently followed by design codes to assess the stability of buildings utilizing the concrete moment-resisting frame structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment-resisting frames with different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios and constructed using ordinary reinforced concrete and high-strength concrete, checked for stability and overall buckling using code approaches and computer buckling analysis. The objectives were to examine the influence of such parameters, which are directly linked to the frames' lateral stiffness, on the buildings' stability and to evaluate the code approach in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and magnification of second-order effects are those with high aspect ratios (height/width ratio), a low lateral-to-vertical load ratio, and construction materials of high strength. In addition, the study showed that the instability limits imposed by codes are mainly mathematical limits intended to ensure reliable analysis rather than physical ones, and that they are in general conservative. It has also been shown that the upper limit set by one of the codes, that the second-order moment for structural elements should be limited to 1.4 times the first-order moment, is not justified; instead, the overall story check is more reliable.
Keywords: buckling, lateral stability, p-delta, second order
Procedia PDF Downloads 256
1045 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance-high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant of an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.Keywords: optimization, metaprogramming, logic programming, abstraction
Procedia PDF Downloads 87
1044 A Framework for Secure Information Flow Analysis in Web Applications
Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa
Abstract:
Huge amounts of data and personal information are being sent to and retrieved from web applications on daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have broad negative impact on the involved company’s financial status, while enforcing them is very hard even for the developers with good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in a sense that the developer only needs to annotate database attributes by a security class. The web application code is then converted into an intermediary representation, called Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation in the data’s confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover and to highlight the simplicity of the suggested approaches vs. existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided a very positive feedback on the simplicity of the annotation task.Keywords: web applications security, secure information flow, program dependence graph, database annotation
Procedia PDF Downloads 471
1043 Evaluation of Prestressed Reinforced Concrete Slab Punching Shear Using Finite Element Method
Authors: Zhi Zhang, Liling Cao, Seyedbabak Momenzadeh, Lisa Davey
Abstract:
Reinforced concrete (RC) flat slab-column systems are commonly used in residential or office buildings, as the flat slab provides efficient clearance resulting in more stories at a given height than regular reinforced concrete beam-slab system. Punching shear of slab-column joints is a critical component of two-way reinforced concrete flat slab design. The unbalanced moment at the joint is transferred via slab moment and shear forces. ACI 318 provides an equation to evaluate the punching shear under the design load. It is important to note that the design code considers gravity and environmental load when considering the design load combinations, while it does not consider the effect from differential foundation settlement, which may be a governing load condition for the slab design. This paper describes how prestressed reinforced concrete slab punching shear is evaluated based on ACI 318 provisions and finite element analysis. A prestressed reinforced concrete slab under differential settlements is studied using the finite element modeling methodology. The punching shear check equation is explained. The methodology to extract data for punching shear check from the finite element model is described and correlated with the corresponding code provisions. The study indicates that the finite element analysis results should be carefully reviewed and processed in order to perform accurate punching shear evaluation. Conclusions are made based on the case studies to help engineers understand the punching shear behavior in prestressed and non-prestressed reinforced concrete slabs.Keywords: differential settlement, finite element model, prestressed reinforced concrete slab, punching shear
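For context, the ACI 318 two-way (punching) shear check referenced above combines direct shear with the fraction of the unbalanced moment transferred by shear eccentricity. A standard form of the provision, paraphrased here rather than quoted from the paper, is:

\[
v_u \;=\; \frac{V_u}{b_o d} \;+\; \frac{\gamma_v\, M_{sc}\, c_{AB}}{J_c} \;\le\; \phi\, v_c,
\qquad
\gamma_v \;=\; 1 - \frac{1}{1 + \tfrac{2}{3}\sqrt{b_1/b_2}},
\]

where \(V_u\) is the factored shear, \(M_{sc}\) the unbalanced moment at the joint, \(b_o\) the critical-section perimeter, \(d\) the effective depth, \(c_{AB}\) the distance from the centroid of the critical section to its edge, and \(J_c\) a property of the critical section analogous to a polar moment of inertia. Differential foundation settlement enters this check through the additional shear and unbalanced moment it induces at the slab-column joint, which is the load condition extracted from the finite element model in this study.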
Procedia PDF Downloads 130
1042 The Construction of the Bridge between Mrs Dalloway and To the Lighthouse: The Combination of Codes and Metaphors in the Structuring of the Plot in the Work of Virginia Woolf
Authors: María Rosa Mucci
Abstract:
Tzvetan Todorov (1971) designs a model of narrative transformation where the plot is constituted by difference and resemblance. This binary opposition is a synthesis of a central figure within narrative discourse: metaphor. Narrative operates as a metaphor since it combines different actions through similarities within a common plot. However, it sounds paradoxical that metonymy and not metaphor should be the key figure within the narrative. It is a metonymy that keeps the movement of actions within the story through syntagmatic relations. By the same token, this articulation of verbs makes it possible for the reader to engage in a dynamic interaction with the text, responding to the plot and mediating meanings with the contradictory external world. As Roland Barthes (1957) points out, there are two codes that are irreversible within the process: the codes of actions and the codes of enigmas. Virginia Woolf constructs her plots through a process of symbolism; a scene is always enduring, not only because it stands for something else but also because it connotes it. The reader is forced to elaborate the meaning at a mythological level beyond the lines. In this research, we follow a qualitative content analysis to code language through the proairetic (actions) and hermeneutic (enigmas) codes in terms of Barthes. There are two novels in particular that engage the reader in this process of construction: Mrs Dalloway (1925) and To the Lighthouse (1927). The bridge from the first to the second brings memories of childhood, allowing for the discovery of these enigmas hidden between the lines. What survives? Who survives? It is the reader's task to unravel these codes and rethink this dialogue between plot and reader to contribute to the predominance of texts and the textuality of narratives.Keywords: metonymy, code, metaphor, myth, textuality
Procedia PDF Downloads 58
1041 An Improved Convolution Deep Learning Model for Predicting Trip Mode Scheduling
Authors: Amin Nezarat, Naeime Seifadini
Abstract:
Trip mode selection is a behavioral characteristic of passengers with immense importance for travel demand analysis, transportation planning, and traffic management. Identification of trip mode distribution will allow transportation authorities to adopt appropriate strategies to reduce travel time, traffic and air pollution. The majority of existing trip mode inference models operate based on human selected features and traditional machine learning algorithms. However, human selected features are sensitive to changes in traffic and environmental conditions and susceptible to personal biases, which can make them inefficient. One way to overcome these problems is to use neural networks capable of extracting high-level features from raw input. In this study, the convolutional neural network (CNN) architecture is used to predict the trip mode distribution based on raw GPS trajectory data. The key innovation of this paper is the design of the layout of the input layer of CNN as well as normalization operation, in a way that is not only compatible with the CNN architecture but can also represent the fundamental features of motion including speed, acceleration, jerk, and Bearing rate. The highest prediction accuracy achieved with the proposed configuration for the convolutional neural network with batch normalization is 85.26%.Keywords: predicting, deep learning, neural network, urban trip
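The input features named in the abstract (speed, acceleration, jerk and bearing rate) can be derived from raw GPS points roughly as in the sketch below. This is an illustrative reconstruction of a typical preprocessing step, not the authors' code; timestamps are assumed to be in seconds.

```python
import numpy as np

def motion_features(lat, lon, t):
    """Per-point speed, acceleration, jerk and bearing rate from a GPS trajectory."""
    lat, lon, t = map(np.asarray, (lat, lon, t))
    R = 6_371_000.0                                   # mean Earth radius in metres
    phi, lam = np.radians(lat), np.radians(lon)
    dphi, dlam = np.diff(phi), np.diff(lam)
    # Haversine distance between consecutive fixes
    a = np.sin(dphi / 2) ** 2 + np.cos(phi[:-1]) * np.cos(phi[1:]) * np.sin(dlam / 2) ** 2
    dist = 2 * R * np.arcsin(np.sqrt(a))
    dt = np.diff(t).astype(float)
    speed = dist / dt
    accel = np.diff(speed) / dt[1:]
    jerk = np.diff(accel) / dt[2:]
    # Bearing between consecutive fixes, then its rate of change
    y = np.sin(dlam) * np.cos(phi[1:])
    x = np.cos(phi[:-1]) * np.sin(phi[1:]) - np.sin(phi[:-1]) * np.cos(phi[1:]) * np.cos(dlam)
    bearing = np.degrees(np.arctan2(y, x))
    bearing_rate = np.abs(np.diff(bearing)) / dt[1:]
    return speed, accel, jerk, bearing_rate

# Small synthetic trajectory, purely for demonstration
lat = [48.1370, 48.1372, 48.1375, 48.1379, 48.1384]
lon = [11.5750, 11.5753, 11.5757, 11.5762, 11.5768]
t = [0, 10, 20, 30, 40]
speed, accel, jerk, br = motion_features(lat, lon, t)
print(speed.round(2), br.round(2))
```

These per-point channels can then be arranged into the image-like input layout that the CNN consumes, as described in the abstract.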
Procedia PDF Downloads 138
1040 Numerical Simulation of Free Surface Water Wave for the Flow Around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method
Authors: Omar Imine, Mohammed Aounallah, Mustapha Belkadi
Abstract:
Steady three-dimensional and two-dimensional free-surface waves generated by moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for the NACA 0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code in predicting the total resistance and its components from the tangential and normal forces on the wetted surface of the hull. The computed resistance and wave profiles are used to estimate the total resistance coefficient for the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study. The computational grid is generated using GAMBIT 2.3.26. The k-ω SST shear stress transport model is used for turbulence modeling, and the volume of fluid (VOF) technique is employed to simulate the free-surface motion. A second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme for the VOF discretization. The results obtained compare well with the experimental data.
Keywords: free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid
Procedia PDF Downloads 376
1039 Self-Attention Mechanism for Target Hiding Based on Satellite Images
Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai
Abstract:
Remote sensing data can provide support for decision-making in disaster assessment or disaster relief. Traditional methods for processing sensitive targets in remote sensing mapping are based mainly on manual retrieval and image editing tools, which are inefficient. Deep learning methods for sensitive target hiding are faster and more flexible, but they have disadvantages in training time and computational cost. This paper proposes a target hiding model, Self Attention (SA) Deepfill, which uses self-attention modules to replace part of the gated convolution layers in image inpainting. With this change, the computational cost of the model becomes smaller and its performance improves. Free-form masks are also added to the model's training to enhance its generality. An experiment on an open remote sensing dataset demonstrates the efficiency of our method. Moreover, experimental comparison shows that the proposed method can be trained for longer without over-fitting. Finally, compared with existing methods, the proposed model has lower computational weight and better performance.
Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding
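The operation at the core of such self-attention modules can be sketched as scaled dot-product attention over flattened feature-map positions. The NumPy code below is a generic illustration of that mechanism, not the authors' exact module.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over feature-map positions.

    x  : (N, C)  flattened feature map, N = H*W positions, C channels
    wq, wk, wv : (C, C) projection matrices (random here, learned in practice)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (N, N) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # each position aggregates the whole map

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 32))                    # e.g. an 8x8 feature map with 32 channels
w = [rng.standard_normal((32, 32)) * 0.1 for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                                     # (64, 32)
```

Because every output position attends to every input position, such a module captures long-range context in a single layer, which is what allows it to replace several gated convolution layers in the inpainting network.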
Procedia PDF Downloads 136
1038 ‘BEST BARK’ Dog Care and Owner Consultation System
Authors: Shalitha Jayasekara, Saluk Bawantha, Dinithi Anupama, Isuru Gunarathne, Pradeepa Bandara, Hansi De Silva
Abstract:
Dogs have been known as "man's best friend" for generations, providing friendship and loyalty to their human counterparts. However, due to people's busy lives, owners are often unaware of the ailments that can affect their pets. In recent years, mobile technologies have had a significant impact on our lives, and with these technological improvements, a rule-based expert system allows the end-user to access new types of healthcare systems. The advent of Android OS-based smartphones with more user-friendly interfaces and lower prices opens new possibilities for continuous monitoring of pets' conditions, such as healthy dogs, dangerous ingestions, and swallowed objects. The proposed 'Best Bark' dog care and owner consultation system is a mobile application for dog owners. Four main components for dog owners were implemented after a questionnaire was distributed to the target audience and the findings were evaluated. The proposed application provides health and clinical support to dog owners, including suggesting exercise and diet plans and answering queries about their dogs. Additionally, after the owner uploads a photo of the dog, the application provides immediate feedback and a description of the dog's skin disease.
Keywords: convolutional neural networks, artificial neural networks, knowledge base, sentiment analysis
Procedia PDF Downloads 153
1037 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between spatial and temporal features. Here, the time dimension is introduced into the range-Doppler map, and a frequency modulation continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. First, range-Doppler sequence maps are generated from the radar echo signals of the continuous motion of the human body. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is fed into a three-dimensional Convolutional Neural Network (3D CNN), whose convolution and pooling layers extract the spatial and temporal features of the time-varying range-Doppler data simultaneously. Finally, the extracted spatial and temporal features are passed to a fully connected layer for classification. The experimental results show that the proposed fall detection algorithm achieves a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-Doppler features
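A minimal sketch of the kind of 3D CNN described above (a cube of stacked range-Doppler frames in, fall/no-fall out) is given below using Keras. The layer sizes and input dimensions are assumptions for illustration, not the authors' exact network.

```python
import tensorflow as tf

# Input: a cube of T range-Doppler frames, each R x D, one channel (assumed dimensions).
T, R, D = 16, 64, 64
model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, R, D, 1)),
    tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu"),  # joint spatio-temporal features
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),                # fall vs. non-fall
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```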
Procedia PDF Downloads 122
1036 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings
Authors: Youlu Huang, Huanjun Jiang
Abstract:
A large number of old masonry buildings built in the last century still remain in the city. They present problems of poor safety, obsolescence, and poor habitability. In recent years, many old buildings have been reconstructed by renovating the façade, strengthening, and adding floors. However, most projects only provide a solution for a single problem, and it is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program, adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it, was put forward. Based on field measurement and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using SAP2000. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced with reference to many engineering examples. Reinforcement methods for the bottom frame structure, strengthened by a steel-reinforced mesh mortar surface layer (SRMM) and by base isolators, respectively, are proposed. Time history analyses of the two structures under the frequent, fortification-level, and rare earthquakes were conducted with SAP2000. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by either method, compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods can significantly improve the seismic performance of the bottom frame structure.
Keywords: old buildings, adding story, seismic strengthening, seismic performance
Procedia PDF Downloads 121
1035 Audit Examining Maternity Assessment Suite Triage Compliance with Birmingham Symptom Specific Obstetric Triage System in a London Teaching Hospital
Authors: Sarah Atalla, Shubham Gupta, Kim Alipio, Tanya Maric
Abstract:
Background: Chelsea and Westminster Hospital have introduced the Birmingham Symptom Specific Obstetric Triage System (BSOTS) for patients who present acutely to the Maternity Assessment Suite (MAS) to prioritise care by urgency. The primary objective was to evaluate whether BSOTS was used appropriately to assess patients (defined as a 90% threshold). The secondary objective was to assess whether patients were seen within their designated triaged timeframe (defined as a 90% threshold). Methodology: MAS records were retrospectively reviewed for a randomly selected one-week period of data from 2020 (21/09/2020 - 27/09/2020). 189 patients presented to MAS during this time. Data were collected on the presenting complaint, time of attendance (divided into four time categories), and triage colour code for the urgency of a review by a doctor (red: immediately, orange: within 15 minutes, yellow: within 1 hour, green: within 4 hours). The number of triage waiting times that were breached and the outcome of the attendance was noted. Results: 49% of patients presenting to MAS during this time period were triaged, which therefore did not meet the 90% target. 67% of patients who were triaged were seen within their allocated timeframe as designated by their triage colour code, which therefore did not meet the 90% target. The most frequent reason for patient attendance was reduced fetal movements (30.5% of attendances). The busiest time of day (when most patients presented) was between 06:01-12:00, and this was also when the highest number of patients were not triaged (26 patients or 54% of patients presenting in this time category). The most used triage category (59%) was the green colour code (to be seen by a doctor within 4 hours), followed by orange (24%), yellow (14%), and red (3%). 45% of triaged patients were admitted, whilst 55% were discharged. 62% of patients allocated to the green triage category were discharged, as compared to 56% of yellow category patients, 27% of orange category patients, and 50% of red category patients. The time of patient presentation to the hospital was also associated with the level of urgency and outcome. Patients presenting from 12:01 to 18:00 were more likely to be discharged (72% discharged) compared to 00:01-06:00 where only 12.5% of patients were discharged. Conclusion: The triage system for assessing the urgency of acutely presenting obstetric patients is only being effectively utilised for 49% of patients. There is potential for enhancing the employment of the triage system to enable further efficiency and boost the promotion of patient safety. It is noted that MAS was busiest at 06:01 - 12:00 when there was also the highest number of non-triaged patients – this highlights some areas where we can improve, including higher levels of staffing, better use of BSOTS to triage patients, and patient education.Keywords: birmingham, BSOTS, maternal, obstetric, pregnancy, specific, symptom, triage
Procedia PDF Downloads 105
1034 The Position of Islamic Jurisprudence in UAE Private Law: Analytical Study
Authors: Iyad Jadalhaq, Mohammed El Hadi El Maknouzi
Abstract:
The place of Islamic law in the legal system of the UAE is best understood by introducing a differentiation between its role as a formal source of law and its influence as a material source of law. What this differentiation helps clarify is that the corpus of Islamic law constitutes a much deeper influence on adjudication, law-making and the legal profession in the UAE, than it might appear at first sight, by considering its formal position in the division of labor between courts, or legislative lists of sources of law. This paper aims to examine the role of Shariah in the UAE private law system by determining the comprehensiveness of Sharia in the legal system as a whole, and not in a limited way related to it as a source of law according to Article 1 of the Civil Transactions Law. Turning to the role of the Shariah as a formal source of law, it is useful to start from Article 1 of the UAE Civil Code. This provision lays out the formal hierarchy of sources of UAE private law, these being legislation, Islamic law, and custom. Hence, when deciding a civil dispute, a judge should first refer to positive legislation in force in the UAE. Lacking the rule to cover the case before him/her, the judge ought then to refer directly to Islamic law. If the matter lacks regulation in Islamic law, only then may the judge appeal to custom. Accordingly, in connection to civil transactions, Shariah is presented here, formally, as the second source of law. Still, Shariah law addresses many other issues beyond civil transactions, including matters of morals, worship, and belief. However, in Article 1 of the UAE Civil Code, the reference to Islamic law ought to be understood as limited to the rules it lays out for civil transactions. There are four main sets of courts in the judicial systems of the UAE, whose competence is based on whether a dispute touches upon civil and commercial transactions, criminal offenses, personal statuses, or labor relations. This sectorial and multi-tiered organization of courts as a whole constitutes an institutional development compatible with the long-standing affirmation in the Shariah of the legitimacy of the judiciary. Indeed, Islamic law authorizes the governing authorities to organize the judiciary, including by allocating specific types of cases to particular kinds of judges depending on the value of the case, or by assigning judges to a specific place in which they are to exercise their jurisdictional function. In view of this, the contemporary organization of courts in the UAE can be regarded as an organic adaptation, aligned with Shariah rules on the assignment of jurisdictional authority, to the growing complexity of modern society. Therefore, we can conclude to the comprehensive role of Shariah in the entire legal system of the United Arab Emirates, including legislation, a judicial system, institutional, and administrative work.Keywords: Islamic jurisprudence, Shariah, UAE civil code, UAE private law
Procedia PDF Downloads 119
1033 An Evaluation of Full-Scale Reinforced Concrete and Steel Girder Composite Members Using High Volume Fly-Ash
Authors: Sung-Won Yoo, Chul-Hyeon Kang, Kyoung-Tae Park, Hae-Sik Woo
Abstract:
Numerous studies have been dedicated to High Volume Fly-Ash (HVFA) concrete, made with a high volume of fly ash. The material properties of HVFA concrete were the primary topics of early studies, and interest has gradually shifted toward its structural behavior, such as the elastic modulus and the stress-strain relationship. However, structural studies have considered only small-scale members and have been limited to reinforced concrete. Therefore, in this paper, building on recent studies of structural behavior, two full-scale test members were manufactured with a 7.5 m span length, a fly ash replacement ratio of 50%, and a concrete compressive strength of 50 MPa, in order to evaluate the practicability of HVFA for real structures. In addition, two steel composite test members with a 3 m span length were manufactured using the same HVFA concrete for the same purpose. The test results of the full-scale RC members showed that the practical use of HVFA in such structures is feasible, despite small differences between the test results and existing research results on the stress-strain relationship. The flexural tests revealed very little difference between 50% fly ash concrete and ordinary concrete, given the similarity of the displacement and strain patterns. Since the experimental concrete shear strength was very close to the design code value, the existing design code can be applied. The flexural test results of the steel girder composite members show that composite behavior equivalent to that with normal concrete can be secured, provided the reinforcing bars are sufficiently arranged.
Keywords: composite, fly ash, full-scale, high volume
Procedia PDF Downloads 217
1032 Audit Committee Characteristics and Earnings Quality of Listed Food and Beverages Firms in Nigeria
Authors: Hussaini Bala
Abstract:
There are different opinions in the literature on the relationship between Audit Committee characteristics and earnings management. The mix of opinions makes the direction of their relationship ambiguous. This study investigated the relationship between Audit Committee characteristics and earnings management of listed food and beverages Firms in Nigeria. The study covered the period of six years from 2007 to 2012. Data for the study were extracted from the Firms’ annual reports and accounts. After running the OLS regression, a robustness test was conducted for the validity of statistical inferences. The dependent variable was generated using two steps regression in order to determine the discretionary accrual of the sample Firms. Multiple regression was employed to run the data of the study using Random Model. The results from the analysis revealed a significant association between audit committee characteristics and earnings management of the Firms. While audit committee size and committees’ financial expertise showed an inverse relationship with earnings management, committee’s independence, and frequency of meetings are positively and significantly related to earnings management. In line with the findings, the study recommended among others that listed food and beverages Firms in Nigeria should strictly comply with the provision of Companies and Allied Matters Act (CAMA) and SEC Code of Corporate Governance on the issues regarding Audit Committees. Regulators such as SEC should increase the minimum number of Audit Committee members with financial expertise and also have a statutory position on the maximum number of Audit Committees meetings, which should not be greater than four meetings in a year as SEC code of corporate governance is silent on this.Keywords: audit committee, earnings management, listed Food and beverages size, leverage, Nigeria
Procedia PDF Downloads 271
1031 Environmental Fatigue Analysis for Control Rod Drive Mechanisms Seal House
Authors: Xuejiao Shao, Jianguo Chen, Xiaolong Fu
Abstract:
In this paper, the elastoplastic strain correction factor computed with ANSYS was modified, and the fatigue usage factor in air was corrected to account for the in-water environment under reactor operating conditions. The fatigue of key parts of the control rod drive mechanisms was analyzed considering the influence of environmental fatigue caused by the coolant in the reactor pressure vessel. The elastoplastic strain correction factor was modified by analyzing thermal and mechanical loads separately, following the rules of RCC-M 2002. The new elastoplastic strain correction factor Ke(mix) is computed to replace the original Ke computed by ANSYS when evaluating the fatigue produced by thermal and mechanical loads acting together. Based on Ke(mix), the usage cycles, and the fatigue design curves, the new range of primary plus secondary stresses was evaluated to obtain the final fatigue usage factor. The results show that the precision of the fatigue usage factor can be improved by using the modified Ke when the amplitude of the primary plus secondary stress is relatively large. An approach has been proposed for incorporating the effects of reactor coolant environments on fatigue life in terms of an environmental correction factor Fen, defined as the ratio of the fatigue life in air at room temperature to that in water at service conditions. To incorporate environmental effects into RCC-M Code fatigue evaluations, the fatigue usage factor based on the current Code design curves is multiplied by this correction factor. The contribution of environmental effects to the results is discussed. Fatigue life decreases logarithmically with decreasing strain rate below 10 %/s and is insensitive to strain rate at temperatures below 100 °C.
Keywords: environmental fatigue, usage factor, elastoplastic strain correction factor, environmental correction
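In equation form, the environmental correction described above amounts to weighting each partial usage factor by its environmental factor before summing; this is the usual way a Fen-type correction is applied, with the notation chosen here for illustration:

\[
U_{en} \;=\; \sum_i u_i \, F_{en,i},
\qquad
F_{en,i} \;=\; \left.\frac{N_{air,\,RT}}{N_{water}}\right|_i ,
\]

where \(u_i\) is the partial usage factor of the i-th load set computed from the in-air design curve, and \(N_{air,RT}\) and \(N_{water}\) are the fatigue lives in room-temperature air and in the reactor-water environment, respectively.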
Procedia PDF Downloads 324
1030 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on github and made freely accessible to the Alife community for further development.Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis.
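For readers unfamiliar with the update rule being accelerated, a compact NumPy reference implementation of a single-channel Lenia step (FFT convolution of the state with a ring-shaped kernel, followed by a growth mapping and clipping) might look like the sketch below. The kernel and growth parameters are illustrative and do not correspond to a specific Lenia creature or to the CUDA implementation benchmarked in the paper.

```python
import numpy as np

def lenia_step(A, K_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One Lenia update: convolve the state with the kernel, apply growth, clip to [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))               # neighborhood potential
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0   # smooth growth mapping
    return np.clip(A + dt * G, 0.0, 1.0)

N, R = 256, 13                                                      # grid size, kernel radius
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.sqrt(x * x + y * y) / R
rc = np.clip(r, 1e-6, 1.0 - 1e-6)                                   # keep the exponent finite
K = np.where(r < 1.0, np.exp(4.0 - 1.0 / (rc * (1.0 - rc))), 0.0)   # smooth ring-shaped kernel
K_fft = np.fft.fft2(np.fft.fftshift(K / K.sum()))                   # precomputed kernel spectrum

A = np.random.default_rng(0).random((N, N))                         # random initial state
for _ in range(100):
    A = lenia_step(A, K_fft)
print(A.mean())
```

The per-step cost is dominated by the two FFTs and the pointwise growth mapping, which is exactly the workload that the CUDA C++ version parallelizes on the GPU.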
Procedia PDF Downloads 41
1029 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions
Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri
Abstract:
Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator’s emission was tested experimentally and reproduced by MCNPX Monte Carlo code. Simulations were particularly accurate, even generator’s internal components were reproduced on the basis of ad-hoc collected X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, and comparing simulated and experimental data. In order to estimate the dose to the operator vs. the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible - and in any case conservative - to convert the measured value of the count rate by means of the conversion factor corresponding to 14 MeV energy. This outcome has a general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers
Procedia PDF Downloads 132
1028 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling
Authors: A. K. Borah, A. K. Singh
Abstract:
In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the VOF method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm
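As an illustration of the Krylov-subspace machinery named in the keywords, the sketch below solves a sparse linear system with Bi-CGSTAB and an incomplete-LU (ILUT-style) preconditioner in SciPy. The test matrix is a generic Poisson-type operator standing in for the pressure system of a Navier-Stokes solver; it is not the system studied in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic sparse test system (1D Poisson-like operator).
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization (ILUT-style, with drop tolerance) used as preconditioner M ~ A^-1.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info = {info}", np.linalg.norm(A @ x - b))
```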
Procedia PDF Downloads 528
1027 Assessment of Air Pollutant Dispersion and Soil Contamination: The Critical Role of MATLAB Modeling in Evaluating Emissions from the Covanta Municipal Solid Waste Incineration Facility
Authors: Jadon Matthiasa, Cindy Donga, Ali Al Jibouria, Hsin Kuo
Abstract:
The environmental impact of emissions from the Covanta Waste-to-Energy facility in Burnaby, BC, was comprehensively evaluated, focusing on the dispersion of air pollutants and the subsequent assessment of heavy metal contamination in surrounding soils. A Gaussian Plume Model, implemented in MATLAB, was utilized to simulate the dispersion of key pollutants to understand their atmospheric behaviour and potential deposition patterns. The MATLAB code developed for this study enhanced the accuracy of pollutant concentration predictions and provided capabilities for visualizing pollutant dispersion in 3D plots. Furthermore, the code could predict the maximum concentration of pollutants at ground level, eliminating the need to use the Ranchoux model for predictions. Complementing the modelling approach, empirical soil sampling and analysis were conducted to evaluate heavy metal concentrations in the vicinity of the facility. This integrated methodology underscored the importance of computational modelling in air pollution assessment and highlighted the necessity of soil analysis to obtain a holistic understanding of environmental impacts. The findings emphasized the effectiveness of current emissions controls while advocating for ongoing monitoring to safeguard public health and environmental integrity.Keywords: air emissions, Gaussian Plume Model, MATLAB, soil contamination, air pollution monitoring, waste-to-energy, pollutant dispersion visualization, heavy metal analysis, environmental impact assessment, emission control effectiveness
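For reference, the Gaussian plume concentration model underlying such an implementation is usually written in the following standard textbook form (the exact variant coded by the authors is not stated in the abstract):

\[
C(x,y,z) \;=\; \frac{Q}{2\pi u\,\sigma_y \sigma_z}\,
\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right)
\left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}}\right) + \exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}}\right)\right],
\]

where \(Q\) is the emission rate, \(u\) the mean wind speed, \(H\) the effective stack height, and \(\sigma_y\), \(\sigma_z\) the lateral and vertical dispersion coefficients, which grow with downwind distance \(x\) according to the atmospheric stability class. The maximum ground-level concentration mentioned above follows from evaluating this expression at \(z = 0\), \(y = 0\) and locating its peak along \(x\).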
Procedia PDF Downloads 16
1026 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time
Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma
Abstract:
Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The traffic sign classification problem needs to be solved, as it is a major step in our journey towards building semi-autonomous/autonomous driving systems. This work focuses on an approach to traffic sign classification, developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model addresses the concern of an exploding number of parameters and applies data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
Keywords: multiclass classification, convolutional neural network, OpenCV
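A minimal Keras sketch of a CNN classifier for the 43-class GTSRB task is shown below. The architecture, image size and hyperparameters are assumptions for illustration and are not the authors' exact network; data loading and augmentation are omitted.

```python
import tensorflow as tf

# Illustrative CNN for 43-class GTSRB sign classification; images assumed resized to 32x32 RGB.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(43, activation="softmax"),      # 43 GTSRB sign classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)  # training data not shown here
```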
Procedia PDF Downloads 176
1025 A Mechanical Diagnosis Method Based on Vibration Fault Signal down-Sampling and the Improved One-Dimensional Convolutional Neural Network
Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui
Abstract:
Convolutional neural networks (CNN) have received extensive attention in the field of fault diagnosis. Many fault diagnosis methods use CNN for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network needs to perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and the improved one-dimensional convolutional neural network is proposed. Through the robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and then down-sampling is realized to reduce the subsequent calculation amount. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multi-connected layers can better generalize classification results without cumbersome parameter adjustments. The effectiveness of the method is verified by monitoring the signal of the centrifugal pump test bench, and the average test accuracy is above 98%. When compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method has better performance.Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN
Procedia PDF Downloads 131
1024 Microstructure Evolution and Pre-transformation Microstructure Reconstruction in Ti-6Al-4V Alloy
Authors: Shreyash Hadke, Manendra Singh Parihar, Rajesh Khatirkar
Abstract:
In the present investigation, the variation in the microstructure with changes in the heat treatment conditions, i.e., temperature and time, was observed. Ti-6Al-4V alloy was subjected to solution annealing treatments in the β (1066 °C) and α+β (930 °C and 850 °C) phase fields, followed by quenching, air cooling and furnace cooling to room temperature, respectively. The effect of solution annealing and cooling on the microstructure was studied using optical microscopy (OM), scanning electron microscopy (SEM), electron backscattered diffraction (EBSD) and x-ray diffraction (XRD). The chemical composition of the β phase for the different conditions was determined with the help of an energy dispersive spectrometer (EDS) attached to the SEM. Furnace cooling resulted in the development of a coarser (α+β) structure, while air cooling resulted in a much finer structure with Widmanstätten morphology of α at the grain boundaries. Quenching from the solution annealing temperature formed α′ martensite, its proportion depending on the temperature in the β phase field. It is well known that the transformation of β to α follows the Burgers orientation relationship (OR). In order to reconstruct the microstructure of the parent β phase, a MATLAB code was written using the neighbor-to-neighbor, triplet and Tari's methods. The code was tested on the annealed samples (1066 °C solution annealing temperature followed by furnace cooling to room temperature). The parent phase data thus generated were then plotted using the TSL-OIM software. The reconstruction results of the above methods were compared and analyzed. Tari's approach (a clustering approach) gave better results than the neighbor-to-neighbor and triplet methods, but the triplet method took the least time of the three.
Keywords: Ti-6Al-4V alloy, microstructure, electron backscattered diffraction, parent phase reconstruction
Procedia PDF Downloads 446