Search results for: maximal prefix code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1633

1183 The Application of King IV by Rugby Clubs Affiliated to a Rugby Union in South Africa

Authors: Anouschka Swart

Abstract:

In 2023, sport faces a plethora of challenges to its integrity, including but not limited to match-fixing, corruption and doping, that threaten both its commercial and public appeal. The continuous changes and commercialisation that have occurred within sport have led to a variety of consequences, creating a need for ethics to be revived, as in the past, to ensure sport is not in danger. To provide a better understanding of governance, the Institute of Directors in Southern Africa, the professional body that publishes the King Codes, outlined a process explaining all elements of corporate governance. This process frames a governing body's responsibilities as strategy, policy, oversight and accountability. These responsibilities are further elucidated in 16 governing principles highlighted as essential for all organisations in order to achieve and deliver effective governance outcomes: a good ethical culture, good performance, effective control and legitimacy. The aim of the study was therefore to investigate the general state of governance within the clubs affiliated to a rugby union in South Africa, utilizing the King IV Code as the framework. The results indicated that the King IV Code principles are implemented by these rugby clubs to demonstrate their commitment to corporate governance to both internal and external stakeholders. It is evident, however, that a similar report focused solely on sport is a necessity in the industry, as it would provide more clarity on sport-specific problems.

Keywords: South Africa, sport, King IV, responsibilities

Procedia PDF Downloads 53
1182 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP), which suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and the distribution of the ants within the simulated environment. External entropy production is also evaluated, as are the information increase and the efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed: no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors such as changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
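
The entropy measure described here can be illustrated with a short sketch: Shannon entropy of agent positions binned onto a grid, which falls as the ants organize into a path. The grid size and the synthetic positions below are illustrative assumptions, not the study's NetLogo setup.

```python
import numpy as np

def spatial_entropy(positions, bins=20):
    """Shannon entropy (bits) of agent positions binned onto a 2D grid."""
    counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                          # ignore empty cells
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
disordered = rng.uniform(0, 1, size=(200, 2))        # ants spread randomly
path = np.column_stack([rng.uniform(0, 1, 200),      # ants on a narrow trail
                        rng.normal(0.5, 0.02, 200)])
print(spatial_entropy(disordered), ">", spatial_entropy(path))
```

Tracking this quantity over simulation time yields the entropy-decrease curve whose inflection point and slope the authors use to estimate the self-organization rate.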

Keywords: complexity, self-organization, agent-based modelling, efficiency

Procedia PDF Downloads 48
1181 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) is used to estimate the energy deposited in glandular tissue during mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom for calculating the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in conversion factor estimation. This study constructed a three-layer phantom to obtain more accurate estimates of the normalized glandular dose. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25-35 kVp), six HVL parameters and nine breast phantom thicknesses (2-10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50% and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity phantom and the uniform phantom ranged from 7.1% to -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mm Al, and a breast thickness of 4 cm. According to the simulation results, regression analysis showed that the three-layer mammographic phantom can be used to accurately calculate conversion factors at 0%-100% glandularity. The difference in glandular tissue distribution leads to errors in conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.
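
The regression step mentioned above can be sketched briefly: fit the conversion factor against glandularity and interpolate anywhere in the 0-100% range. The DgN values below are placeholders, not the study's results.

```python
import numpy as np

# Placeholder DgN conversion factors at the three simulated glandularities.
glandularity = np.array([0.25, 0.50, 0.75])
dgn = np.array([0.220, 0.195, 0.172])           # illustrative values only

coeffs = np.polyfit(glandularity, dgn, deg=2)   # low-order regression
dgn_at = np.poly1d(coeffs)
print(f"DgN at 40% glandularity ~ {dgn_at(0.40):.3f}")
```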

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 172
1180 Cross Section Measurement for Formation of Metastable State of ¹¹¹ᵐCd through ¹¹¹Cd (γ, γ′) ¹¹¹ᵐCd Reaction Induced by Bremsstrahlung Generated through 6 MeV Electrons

Authors: Vishal D. Bharud, B. J. Patil, S. S. Dahiwale, V. N. Bhoraskar, S. D. Dhole

Abstract:

The photon-induced average reaction cross section of the ¹¹¹Cd (γ, γ′) ¹¹¹ᵐCd reaction was experimentally determined for a 6 MeV bremsstrahlung energy spectrum by utilizing activation and offline γ-ray spectrometric techniques. The 6 MeV electron Racetrack Microtron accelerator of Savitribai Phule Pune University, Pune, was used for the experimental work. The bremsstrahlung spectrum generated by bombarding 6 MeV electrons on a lead target was theoretically estimated with the FLUKA code. Bremsstrahlung radiation can have energies exceeding the particle-emission threshold, which is normally above 6 MeV. Photons of energies below the particle-emission threshold undergo absorption into discrete energy levels, with the possibility of exciting nuclei to excited states, including the metastable state. The ¹¹¹Cd (γ, γ′) ¹¹¹ᵐCd reaction cross sections were calculated at different bombarding photon energies using the TALYS 1.8 computer code with default parameters. The focus of the present work was to study the (γ, γ′) reaction for exciting ¹¹¹Cd nuclei to metastable states, which have threshold energies below 3 MeV. The flux-weighted average cross section was obtained from the theoretical values of TALYS 1.8 and TENDL 2017 and is found to be in good agreement with the present experimental cross section.
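
The flux-weighted average cross section referred to above is the spectrum-averaged quantity σ̄ = ∫σ(E)φ(E)dE / ∫φ(E)dE. A minimal sketch follows, with placeholder arrays standing in for the FLUKA spectrum and the TALYS excitation function:

```python
import numpy as np

# Placeholder energy grid (MeV), bremsstrahlung flux, and sigma(E);
# illustrative shapes only, not the paper's FLUKA/TALYS data.
energy = np.linspace(0.5, 6.0, 200)
flux = np.exp(-energy)                                      # falling spectrum
sigma = np.where(energy > 1.0, 1e-3 * (energy - 1.0), 0.0)  # threshold ~1 MeV

sigma_avg = np.trapz(sigma * flux, energy) / np.trapz(flux, energy)
print(f"flux-weighted average cross section ~ {sigma_avg:.3e} b")
```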

Keywords: bremsstrahlung, cross section, FLUKA, TALYS-1.8

Procedia PDF Downloads 153
1179 English Pashto Contact: Morphological Adaptation of Bilingual Compound Words in Pashto

Authors: Imran Ullah Imran

Abstract:

Language contact is a familiar concept in the present global world. Across the globe, languages get mixed at different levels; borrowing and code-switching are some of the means through which languages interact. This study examines Pashto-English contact at the word and syllable levels. By recording the speech of 30 Pashto native speakers, selected via 'social network' sampling, the study located a number of Pashto-English compound words, a contact phenomenon unique of its kind. In the data analysis, tokens were categorized on the basis of their pattern and morphological structure. The study shows that Pashto-English bilingual compound words (BCWs) are very prevalent in the Pashto language. It also found that BCWs in Pashto are completely productive and have their own meanings, and that the dominant pattern of hybrid words in Pashto is the combination of an independent English root word followed by a Pashto inflectional morpheme, which contributes to the core semantic content of the construction. The construction of BCWs shows how close the two languages are to each other. Pashto-English contact results in bilingual compound and hybrid words, which form a considerable number of tokens in present-day spoken Pashto. On the basis of these findings, the study assumes that this phenomenon may increase with the passage of time, which would, in turn, result in the formation of more bilingual compound and hybrid words.

Keywords: code-mixing, bilingual compound words, pashto-english contact, hybrid words, inflectional lexical morpheme

Procedia PDF Downloads 232
1178 Correlation of Strength and Change in the Thickness of Back Extensor Muscles during Maximal Isometric Contraction in Healthy and Osteoporotic Postmenopausal Women

Authors: Mohammad Jan-Nataj Zeinab, Kahrizi Sedighe, Bayat Noshin, Giti Torkaman

Abstract:

Given the importance of back extensor muscle strength in postmenopausal women, this study aimed to determine the relationship between the strength and the change in thickness of the back extensor muscles during isometric contraction in healthy and osteoporotic postmenopausal women. The strength and thickness of the muscles of 42 postmenopausal women were measured using a handheld dynamometer and ultrasonography, respectively, and the Pearson correlation coefficient was used to analyze the relationship between strength and thickness. The results indicated high reproducibility for both the dynamometer test and ultrasonography. The decrease in strength in people with osteoporosis occurred more through changes in muscle structure, such as a reduction in the number and size of muscle fibers, than through changes in the nervous system.
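
For reference, the correlation analysis named above takes only a few lines; the arrays here are placeholders for the measured strength and thickness-change values, not the study's data.

```python
from scipy import stats

strength = [112.0, 98.5, 130.2, 87.4, 105.1]       # N, placeholder values
thickness_change = [0.42, 0.31, 0.55, 0.22, 0.38]  # cm, placeholder values

r, p_value = stats.pearsonr(strength, thickness_change)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```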

Keywords: back extensor muscles, strength, thickness, osteoporosis

Procedia PDF Downloads 243
1177 Hypothesis about the Origin of Lightning

Authors: Igor Kuzminov

Abstract:

To date, the nature of lightning has not been established. A hypothesis of the origin of lightning is proposed: the lightning charge is formed by electromagnetic induction, with the air mass of the cloud acting as the conductor. This conductor moves in the Earth's magnetic field, and the upper and lower edges of the cloud act as the plates of a capacitor. Lightning is thus a special case of electromagnetic processes in the atmosphere. The discharge of lightning occurs in the process of charge accumulation; the accumulation proceeds constantly, but the charge is not fixed. Naturally, the hypothesis demands additional experiments and official confirmation. Evidence in its favor is that maximal lightning activity occurs in the equatorial zone, where cos φ is close to 1. A privately conducted experiment showed that there is a potential difference in the atmosphere at different altitudes. There is also considerable potential for applying this effect in practical power installations.
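
To make the induction mechanism concrete, the order-of-magnitude sketch below evaluates the motional EMF, EMF ≈ vBL, for a cloud-scale conductor moving through the Earth's field; all numbers are illustrative assumptions, not values from the hypothesis.

```python
# Illustrative order-of-magnitude estimate (assumed values, not measured).
v = 20.0      # cloud air-mass speed, m/s
B = 3.0e-5    # horizontal component of Earth's magnetic field, T
L = 1.0e4     # horizontal extent of the cloud conductor, m

emf = v * B * L   # motional EMF of a conductor of length L moving at v in B
print(f"motional EMF ~ {emf:.1f} V")
```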

Keywords: electromagnetic induction, Earth's magnetic field, plates of the capacitors, charge accumulation

Procedia PDF Downloads 72
1176 Comparative Safety Performance Evaluation of Profiled Deck Composite Slab from the Use of Slope-Intercept and Partial Shear Methods

Authors: Izian Abd. Karim, Kachalla Mohammed, Nora Farah Abd Aznieta Aziz, Law Teik Hua

Abstract:

The economic use and ease of construction of profiled deck composite slabs are marred by the complex and uneconomic strength verification required for serviceability and general safety considerations. Factors such as shear span length, deck geometry and mechanical friction greatly influence the longitudinal shear strength, which determines the ultimate strength of a profiled deck composite slab. Of the methods available for its determination, the partial shear and slope-intercept (m-k) methods are the two provided by Eurocode 4. However, owing to the complexity of the shear behavior of profiled deck composite slabs, the use of these methods in determining the load-carrying capacities of such slabs yields different and conflicting values. This, coupled with the time and cost constraints associated with strength verification, is a source of growing concern, and the issue is critical. Treating some of these known shear-strength-influencing factors as random variables, the load-carrying capacity violation of profiled deck composite slabs under the two methods defined according to Eurocode 4 is determined using a reliability approach and comparatively studied. The study reveals that the safety values from the use of the m-k method stand up well compared with those from the partial shear method.
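
As a sketch of the reliability approach mentioned above (first-order, with an assumed linear limit state g = R − S and independent normal variables; not the paper's actual model or statistics):

```python
from math import sqrt
from statistics import NormalDist

# Assumed statistics for resistance R and load effect S (illustrative).
mu_R, sigma_R = 85.0, 8.5    # kN/m, placeholder values
mu_S, sigma_S = 60.0, 9.0    # kN/m, placeholder values

# Cornell reliability index for the linear limit state g = R - S.
beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
pf = NormalDist().cdf(-beta)   # probability of failure, Pf = Phi(-beta)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```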

Keywords: composite slab, first order reliability method, longitudinal shear, partial shear connection, slope-intercept

Procedia PDF Downloads 340
1175 Investigation of Minor Actinide-Contained Thorium Fuel Impacts on CANDU-Type Reactor Neutronics Using Computational Method

Authors: S. A. H. Feghhi, Z. Gholamzadeh, Z. Alipoor, C. Tenreiro

Abstract:

Thorium fuel has recently attracted particular attention because of its proliferation resistance, its lower production of long half-life alpha-emitting minor actinides, its breeding capability in both fast and thermal neutron fluxes, and its mono-isotopic natural abundance. In recent years, the efficiency of minor actinide burnup in PWRs has been investigated. A minor actinide-containing thorium-based fuel matrix can therefore serve both proliferation resistance and nuclear waste depletion aims. In the present work, the minor actinide depletion rate in a CANDU-type nuclear core modeled using the MCNP code has been investigated. The effects of the minor actinide load mixed into the thorium fuel matrix on the core neutronics were studied by comparing the presence and absence of the minor actinide component in the fuel matrix. The depletion rate of minor actinides in the MA-containing fuel was calculated for different power loads. According to the obtained computational data, minor actinide loading in the modeled core results in more negative reactivity coefficients, and the MA-containing fuel achieves a lower radial peaking factor in the modeled core. The computational results showed that 140 kg of the 464 kg initial minor actinide load was depleted during a 6-year burnup at 10 MW power.

Keywords: minor actinide burning, CANDU-type reactor, MCNPX code, neutronic parameters

Procedia PDF Downloads 442
1174 Blade Runner and Slavery in the 21st Century

Authors: Bülent Diken

Abstract:

This paper reads Ridley Scott's original film Blade Runner (1982) alongside Denis Villeneuve's Blade Runner 2049 (2017) in order to provide an analysis of both films with respect to the new configurations of slavery in the 21st century. Both Blade Runner films present a de-politicized society that oscillates between two extremes: the spectral (the eye, optics, digital communications) and the biopolitical (the body, haptics). On the one hand, recognizing the subject only as a sign, the society of the spectacle registers, identifies, produces and reproduces the subject as a code. At the same time, though, the subject is constantly reduced to a naked body, to bare life, for biometric technologies to scan as a biological body or body parts. Being simultaneously a pure code (word without body) and an instrument-slave (body without word), the replicants are thus the paradigmatic subjects of this society. The paper focuses first on the similarity: both films depict a relationship between masters and slaves, that is, a despotic relationship. The master uses the (body of the) slave as an instrument, as an extension of his own body. Blade Runner (set in 2019) frames the despotic relation in this classical way through its triangulation with the economy (the Tyrell Corporation) and the slave-replicants' dissent (rejecting their reduction to mere instruments). In a counter-classical approach, in Blade Runner 2049, the focus shifts to another triangulation: despotism, economy (the Wallace Corporation) and consent (of replicants who no longer perceive themselves as slaves).

Keywords: Blade Runner, the spectacle, bio-politics, slavery, instrumentalisation

Procedia PDF Downloads 52
1173 Fusion Neutron Generator Dosimetry and Applications for Medical, Security, and Industry

Authors: Kaouther Bergaui, Nafaa Reguigui, Charles Gary

Abstract:

The characterization and applications of a deuterium-deuterium (DD) neutron generator developed by Adelphi Technology and acquired by the National Centre of Nuclear Science and Technology (NCNST) are presented in this work. We study the performance of the neutron generator in terms of neutron yield, production efficiency, and ionic current as a function of the acceleration voltage at various RF powers. We provide the design and optimization of the PGNAA chamber and thus give insight into the capabilities of the planned PGNAA facility. Additional non-destructive techniques employing the DD neutron generator were studied, namely PGNAA and neutron radiography. PGNAA was used for determining the concentration of ¹⁰B in Si and SiO₂ matrices using a high-purity germanium (HPGe) detector, and the results obtained were compared with a PGNAA system using a sodium iodide detector (NaI(Tl)). The neutron radiography facility was tested with a CCD camera device and simulated with a Monte Carlo code, and an explosive detection system (EDS) was also simulated using the Monte Carlo code. The study shows that the new models of DD neutron generators are feasible and that superior-quality neutron beams can be produced and used for various applications. Finally, the feasibility of boron neutron capture therapy (BNCT) for cancer treatment using a neutron generator was assessed by optimizing a beam shaping assembly (BSA) on a phantom using Monte Carlo (MCNP6) simulations.

Keywords: deuterium-deuterium neutron generator, Monte Carlo method, radiation, neutron flux, neutron activation analysis, boron, neutron radiography, explosive detection, BNCT

Procedia PDF Downloads 172
1172 Software Transactional Memory in a Dynamic Programming Language at Virtual Machine Level

Authors: Szu-Kai Hsu, Po-Ching Lin

Abstract:

As more and more multi-core processors emerge, the traditional sequential programming paradigm no longer suffices, yet only a few modern dynamic programming languages can leverage this advantage. Ruby, for example, despite its wide adoption, only includes threads as a simple parallel primitive, and the global virtual machine lock of the official Ruby runtime makes it impossible to exploit full parallelism. Though various alternative Ruby implementations do eliminate the global virtual machine lock, they only provide developers with dated locking mechanisms for data synchronization. However, traditional locking mechanisms are error-prone by nature. Software transactional memory (STM) is one of the promising alternatives. This paper introduces a new virtual machine, GobiesVM, that provides a native software-transactional-memory-based solution for dynamic programming languages to exploit parallelism. We also propose a simplified variation of the Transactional Locking II (TL2) algorithm. The empirical results of our experiments show that support for STM at the virtual machine level enables developers to write straightforward code without compromising parallelism or sacrificing thread safety. Existing source code requires only minimal or even no modification, which allows developers to easily switch their legacy codebase to a parallel environment. The performance evaluations of GobiesVM also indicate that the difference between sequential and parallel execution is significant.
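
To make the approach concrete, here is a minimal Python sketch of a TL2-style transaction (versioned locations, read-set validation, commit under write locks). It illustrates the algorithm family only, not GobiesVM's Ruby/VM-level implementation, and it simplifies TL2 (for example, reads are not lock-checked).

```python
import threading

_clock, _clock_lock = 0, threading.Lock()

def _tick():
    global _clock
    with _clock_lock:
        _clock += 1
        return _clock

class TVar:
    """A transactional memory location with a version stamp."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

class _Conflict(Exception):
    pass

def atomically(tx):
    """Run tx(read, write) as a transaction, retrying on conflict."""
    while True:
        read_version = _clock
        reads, writes = {}, {}

        def read(tvar):
            if tvar in writes:
                return writes[tvar]
            value, version = tvar.value, tvar.version
            if version > read_version:
                raise _Conflict()          # location changed since tx began
            reads[tvar] = version
            return value

        def write(tvar, value):
            writes[tvar] = value

        try:
            result = tx(read, write)
            for tvar in sorted(writes, key=id):   # fixed order: no deadlock
                tvar.lock.acquire()
            try:
                for tvar, version in reads.items():
                    if tvar.version != version:
                        raise _Conflict()          # read set invalidated
                commit = _tick()
                for tvar, value in writes.items():
                    tvar.value, tvar.version = value, commit
                return result
            finally:
                for tvar in writes:
                    tvar.lock.release()
        except _Conflict:
            continue                               # retry the transaction

counter = TVar(0)
def bump(read, write):
    write(counter, read(counter) + 1)

threads = [threading.Thread(target=lambda: [atomically(bump) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)   # 4000: no lost updates despite concurrent commits
```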

Keywords: global interpreter lock, ruby, software transactional memory, virtual machine

Procedia PDF Downloads 264
1171 The Role of the Accused’s Attorney in the Criminal Justice System of Iran, Mashhad 2014

Authors: Mahdi Karimi

Abstract:

One of the most basic standards of a fair trial is the right to a defense, to hire an attorney, and to the attorney's presence at the hearing stages. On the one hand, based on reason and justice, as legal issues, particularly criminal affairs, become complicated, the accused must benefit from an attorney in court in order to mount a defense, which requires legal knowledge. On the other hand, as the judicial system has jurists such as investigating judges at its disposal, the accused must enjoy the same right to defend themselves and reject allegations, so that balance is maintained between the litigating parties based on the principle of 'equality of arms'. The right to adequate time and facilities for a defense is cited among the principles and rights relevant to proceedings in international instruments such as the International Covenant on Civil and Political Rights. The innovations made in the Code of Criminal Procedure in 2013 guaranteed the presence of the accused's attorney in the proceedings. The present study aims at assessing the result of the aforementioned guarantee in practice and investigates the effect of the presence of the accused's attorney on reducing punishment by surveying a statistical population of 48 judges of lower courts and courts of appeal. It seems that, in spite of the guarantees provided in the new Code of Criminal Procedure, Iran's penal system does not tolerate the presence of an attorney in practice.

Keywords: defense attorney, equality of arms, fair trial, reducing the penalty, right to defense

Procedia PDF Downloads 314
1172 Programming without Code: An Approach and Environment for Conditions-on-Data Programming

Authors: Philippe Larvet

Abstract:

This paper presents the concept of an object-based programming language in which tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. Following the object paradigm, data remain embedded inside objects as variable-value couples, but object methods are expressed in the form of logical propositions ('conditions on data', or CODs). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. To implement this approach, a central inference engine runs and examines objects one after another, collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (the left side of the '=>' sign) is the premise and the right part is the conclusion. Premises are evaluated and conclusions are fired; a fired conclusion modifies the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, it also presents several hints for implementing a simple mechanism able to process this 'COD language'. The proposed approach can be used in the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical and long-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
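
A minimal sketch of such an inference engine in Python follows (illustrative names and rule encoding; the paper's COD language is declarative, whereas premises and conclusions are encoded here as Python callables):

```python
class CODObject:
    """An object holding variable-value couples plus its conditions on data."""
    def __init__(self, variables, rules):
        self.vars = variables        # {variable: value}
        self.rules = rules           # [(premise, conclusion), ...]

def run(objects, max_cycles=100):
    """Central inference engine: examine objects one after another,
    evaluate each COD's premise, and fire its conclusion."""
    for _ in range(max_cycles):
        changed = False
        for obj in objects:
            for premise, conclusion in obj.rules:
                if premise(obj.vars):
                    before = dict(obj.vars)
                    conclusion(obj.vars)
                    changed |= obj.vars != before
        if not changed:
            return                   # fixed point: no conclusion altered data

# COD: valve = "open" AND level > 7.5  =>  alarm = True
tank = CODObject(
    {"level": 8.0, "valve": "open", "alarm": False},
    [(lambda v: v["valve"] == "open" and v["level"] > 7.5,
      lambda v: v.update(alarm=True))],
)
run([tank])
print(tank.vars["alarm"])            # True: the conclusion fired
```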

Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation

Procedia PDF Downloads 203
1169 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models; if the optimization is conducted manually, there is a potential for dangerous consequences from human error, and the time spent on input and on data extraction from output is significant. This paper presents an automation process introduced into the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code successively changes the geological stratum and excavation depth under groundwater flow conditions for each 20 m section. It automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased; when the pile embedment reaches the default maximum length, the prop system is turned on. The results showed that automation saves time, increases efficiency, lowers design costs, and replaces manual labor, minimizing error.
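
The trial-and-error loop described above can be sketched as follows. `run_model`, `SectionResult`, and all limits and increments are hypothetical stand-ins for the Plaxis 2D scripting calls and project criteria (the pile-resizing branch is omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class SectionResult:            # hypothetical summary of one solved section
    fos: float                  # factor of safety
    wall_disp: float            # wall displacement, m
    moment: float               # pile bending moment, kNm/m
    moment_capacity: float

def run_model(chainage, pile_len, use_prop):
    """Placeholder solver: a real driver would rebuild and solve the Plaxis
    model for this chainage. Made-up results improve with pile length."""
    return SectionResult(fos=1.0 + 0.04 * pile_len + (0.3 if use_prop else 0.0),
                         wall_disp=0.06 - 0.002 * pile_len - (0.02 if use_prop else 0.0),
                         moment=400.0, moment_capacity=500.0)

def design_section(chainage, pile_len=12.0, max_pile_len=30.0,
                   fos_req=1.4, disp_limit=0.025):
    """Trial-and-error search for pile length, switching on props at max length."""
    use_prop = False
    while True:
        r = run_model(chainage, pile_len, use_prop)
        if r.fos >= fos_req and r.wall_disp <= disp_limit and r.moment <= r.moment_capacity:
            return pile_len, use_prop
        if pile_len < max_pile_len:
            pile_len += 1.0            # lengthen embedment first
        elif not use_prop:
            use_prop = True            # then fall back to the prop system
        else:
            raise RuntimeError("section cannot satisfy the criteria")

for chainage in range(0, 201, 20):     # every 20 m along the 200 m tunnel
    print(chainage, design_section(chainage))
```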

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 38
1168 Finite Difference Modelling of Temperature Distribution around Fire Generated Heat Source in an Enclosure

Authors: A. A. Dare, E. U. Iniegbedion

Abstract:

Industrial furnaces generally involve enclosures of fire typically initiated by the combustion of gases. The fire leads to a temperature distribution inside the enclosure, and a proper understanding of the temperature and velocity distribution within the enclosure is often required for the optimal design and use of the furnace. This study was therefore directed at the numerical modeling of the temperature distribution inside an enclosure, as is typical in a furnace. A mathematical model was developed from the conservation of mass, momentum and energy. The stream function-vorticity formulation of the governing equations was solved by an alternating direction implicit (ADI) finite difference technique, and the finite difference formulation obtained was developed into a computer code (a MATLAB program). This was used to determine the temperature, velocities, stream function and vorticity. The effect of wall heat conduction was also considered by assuming one-dimensional heat flow through the wall. The results obtained showed that the transient temperature distribution assumed a uniform profile which becomes more chaotic with increasing time; the vertical velocity showed increasingly turbulent behavior with time, while the horizontal velocity assumed decreasing laminar behavior with time. All of these behaviours have also been reported in the literature. The developed model has provided an understanding of the heat transfer process in an industrial furnace.
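
As a simplified illustration of the finite-difference approach (an explicit FTCS step on the temperature field only; the paper itself solves the stream function-vorticity equations with an ADI scheme, and in MATLAB rather than Python; all grid and material values below are assumptions):

```python
import numpy as np

# Explicit finite-difference stepping of 2D heat diffusion,
# dT/dt = alpha * (d2T/dx2 + d2T/dy2), on a square enclosure.
nx = ny = 50
alpha, dx = 1e-4, 0.02                 # m^2/s, m (illustrative values)
dt = 0.2 * dx**2 / alpha               # respects the explicit stability limit

T = np.full((ny, nx), 300.0)           # K, initially uniform enclosure
T[ny // 2, nx // 2] = 1200.0           # fire-generated heat source at centre

for _ in range(2000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += alpha * dt * lap
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 300.0   # isothermal walls
    T[ny // 2, nx // 2] = 1200.0                       # maintain the source

print(f"peak {T.max():.0f} K, mean {T.mean():.1f} K")
```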

Keywords: heat source, modelling, enclosure, furnace

Procedia PDF Downloads 243
1167 The Effect of a Saturated Kink on the Dynamics of Tungsten Impurities in the Plasma Core

Authors: H. E. Ferrari, R. Farengo, C. F. Clauser

Abstract:

Tungsten (W) will be used in ITER as one of the plasma-facing components (PFCs). The W could migrate to the plasma center, which could have a potentially deleterious effect on plasma confinement. Electron cyclotron resonance heating (ECRH) can be used to prevent W accumulation. We simulated a series of H-mode discharges in ASDEX Upgrade with PFCs containing W, where central ECRH was used to prevent W accumulation in the plasma center. The experiments showed that the W density profiles were flat after a sawtooth crash and became hollow between sawtooth crashes when ECRH was applied; it was also observed that a saturated kink mode was active under these conditions. We studied the effect of saturated kink-like instabilities on the redistribution of W impurities. The kink was modeled as the sum of a simple analytical equilibrium (large aspect ratio, circular cross section) plus the perturbation produced by the kink. A numerical code that follows the exact trajectories of the impurity ions in the total fields, including collisions, was employed. The code is written in CUDA C and runs on graphics processing units (GPUs), allowing simulations with a large number of particles using modest resources. Our simulations show that when the W ions have a thermal velocity distribution, the kink has no effect on the W density. When plasma rotation is considered, the kink can affect the W density: when the average passing frequency of the W particles is similar to the frequency of the kink mode, the expulsion of W ions from the plasma core is maximal, and the W density develops a hollow structure. This could have implications for the mitigation of W accumulation.
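
The resonance condition highlighted above (passing frequency comparable to the mode frequency) can be checked with a quick estimate; the transit-frequency formula and all numbers below are illustrative assumptions, not the paper's parameters.

```python
import math

# Illustrative check of the resonance condition (assumed tokamak numbers).
R0 = 1.65          # major radius, m (ASDEX Upgrade scale)
q = 1.0            # safety factor near the q=1 surface where the kink lives
v_par = 1.2e5      # parallel speed of a W ion including rotation, m/s

f_transit = v_par / (2 * math.pi * q * R0)   # average passing frequency, Hz
f_kink = 1.1e4                                # kink mode frequency, Hz (assumed)

print(f"transit ~ {f_transit:.2e} Hz vs kink ~ {f_kink:.2e} Hz, "
      f"ratio {f_transit / f_kink:.2f}")      # ratio near 1 => resonant expulsion
```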

Keywords: impurity transport, kink instability, tungsten accumulation, tungsten dynamics

Procedia PDF Downloads 158
1166 Light Car Assisted by PV Panels

Authors: Soufiane Benoumhani, Nadia Saifi, Boubekeur Dokkar, Mohamed Cherif Benzid

Abstract:

This work presents the design and simulation of the electric equipment for a hybrid solar vehicle. The new drive train of this vehicle is a parallel hybrid system, meaning the vehicle is driven mainly by an internal combustion engine with a maximal power of 49.35 kW, with an electric motor providing assistance only when needed. This assistance is applied to the rear axle by a single electric motor with a nominal power of 7.22 kW. The motor is driven by 12 batteries connected in series, which are charged by three PV panels (300 W) installed on the roof and hood of the vehicle. The individual components are modeled and simulated using the Matlab Simulink environment, and the whole system is examined under different load conditions. A reduction in CO₂ emission is obtained by reducing fuel consumption. With this hybrid system, fuel consumption can be reduced from 6.74 kg/h to 5.56 kg/h when the electric motor works at 100% of its power. The net benefit of the system reaches 1.18 kg/h of fuel reduction at high values of power and torque.

Keywords: light car, hybrid system, PV panel, electric motor

Procedia PDF Downloads 101
1165 The Factors Constitute the Interaction between Teachers and Students: An Empirical Study at the Notion of Framing

Authors: Tien-Hui Chiang

Abstract:

The code theory proposed by Basil Bernstein indicates that framing can be viewed as the core element constituting the phenomenon of cultural reproduction, because it regulates the transmission of pedagogical information. Strong framing increases the social-relation boundary between a teacher and pupils, which obstructs information transmission, so that in order to improve underachieving students' academic performance, teachers need to reduce the strength of framing. Weak framing enables them to transform academic knowledge into commonsense knowledge expressed in daily-life language. This study posits that most teachers deliver strong framing because their beliefs are mainly confined to the aspect of instrumental rationality, which deprives them of critical minds. This situation can lead them to view the normal-distribution bell curve of students' academic performance as a natural outcome. In order to examine the interplay between framing, instrumental rationality and pedagogical action, questionnaires were completed by over 5,000 primary school teachers in Henan province, China, selected by stratified sampling. The statistical results show that most teachers employed psychological concepts to measure students' academic performance and that, in turn, educational inequity was legitimized as a natural outcome of the efficiency-led approach. Such efficiency-led minds made them act as agents practicing the mechanism of social control and, in turn, sustaining the phenomenon of cultural reproduction.

Keywords: code, cultural reproduction, framing, instrumental rationality, social relation and interaction

Procedia PDF Downloads 135
1164 The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence

Authors: Gergely G. Karacsony

Abstract:

Fundamental rights are the result of thousands of years of progress in legislation, adjudication and legal practice. They serve as the framework for the peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions the same way as humans; such acts are sometimes impossible to tell apart from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework for peaceful cohabitation is to be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people's rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence is to be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions of artificial intelligence performed without any human interaction whatsoever. Robot ethics had been a topic of discussion and debate in philosophy, ethics, computing, legal science and science-fiction writing long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g. data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights as recognized worldwide can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved itself able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted source, the 1966 UN human rights covenants) and try to adapt each individual right to the actions of artificial intelligence actors; in each case, we examine the possible effects of such an approach on the legal system and on society, and finally we also examine its effect on the IT industry.

Keywords: human rights, robot ethics, artificial intelligence and law, human-robot interaction

Procedia PDF Downloads 226
1163 Inhibition of α-Glucosidase and Xanthine Oxidase by Curcumin and Its Analogs

Authors: Jung-Feng Hsieh, Chu Ze Chen

Abstract:

Curcumin is the main active compound of turmeric and can inhibit the activities of α-glucosidase and xanthine oxidase (XO). α-Glucosidase and XO inhibitors are widely used to treat patients with diabetes mellitus and gout, respectively; the objective of this research was therefore to evaluate the inhibitory activities of curcumin and its analogs against α-glucosidase and XO. Our results demonstrated that CM-F had the strongest antioxidant activity, with a half-maximal effective concentration (EC50) of 9.39 ± 0.16 μM, superior to vitamin E (EC50 = 17.03 ± 0.09 μM). CM-F also exhibited potent inhibitory activity against XO, with an IC50 value of 6.14 ± 0.38 μM, and enzyme kinetic results revealed competitive inhibition of XO. We also found that CM-1 and CM-2 inhibited α-glucosidase with IC50 values of 21.06 ± 0.92 μM and 5.95 ± 0.09 μM, respectively, and kinetic studies indicated that both CM-1 and CM-2 are mixed competitive inhibitors of α-glucosidase. Furthermore, docking simulation identified five hydrogen bonds between XO and CM-F, whereas only one and two hydrogen bonds are involved in CM-1 and CM-2 binding to α-glucosidase, respectively. Accordingly, curcumin and its analogs have the potential to be used in the treatment of patients with diabetes mellitus and gout.
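
IC50 values like those reported above are typically obtained by fitting a dose-response (Hill) curve to measured inhibition data; a minimal sketch with synthetic data (not the study's measurements) follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    """Fraction of remaining enzyme activity at inhibitor concentration conc."""
    return 1.0 / (1.0 + (conc / ic50) ** n)

conc = np.array([1, 2, 5, 10, 20, 50], dtype=float)         # uM, synthetic
activity = np.array([0.88, 0.78, 0.52, 0.33, 0.18, 0.07])   # synthetic data

(ic50, n), _ = curve_fit(hill, conc, activity, p0=(5.0, 1.0))
print(f"IC50 ~ {ic50:.2f} uM, Hill coefficient ~ {n:.2f}")
```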

Keywords: curcumin, α-glucosidase, inhibitor, xanthine oxidase

Procedia PDF Downloads 187
1162 Stability of Concrete Moment Resisting Frames in View of Current Codes Requirements

Authors: Mahmoud A. Mahmoud, Ashraf Osman

Abstract:

In this study, the different approaches currently followed by design codes to assess the stability of buildings utilizing the concrete moment-resisting frame structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment-resisting frames having different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios and constructed using ordinary reinforced concrete and high-strength concrete, checking stability and overall buckling using the code approaches and computer buckling analysis. The objectives were to examine the influence of these parameters, which are directly linked to the frames' lateral stiffness, on building stability, and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and magnification of second-order effects are those having high aspect ratios (height/width ratio), low lateral-to-vertical load ratios, and construction materials of high strength. In addition, the study showed that the instability limits imposed by codes are mainly mathematical limits to ensure reliable analysis, not physical ones, and that they are in general conservative. It was also shown that the upper limit set by one of the codes, that the second-order moment for structural elements should be limited to 1.4 times the first-order moment, is not justified; instead, the overall story check is more reliable.
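
The second-order (P-delta) magnification discussed above is commonly approximated by the amplifier 1/(1 − P/Pcr); the sketch below applies it and flags the 1.4 limit, with all loads and stiffnesses as illustrative assumptions:

```python
import math

# Illustrative P-delta magnification for a sway story (assumed values).
P = 12_000.0     # total factored gravity load on the story, kN
EI = 5.0e4       # effective story flexural stiffness, kN*m^2 (illustrative)
h = 3.5          # story height, m

P_cr = math.pi**2 * EI / h**2          # elastic critical load of the story
amplifier = 1.0 / (1.0 - P / P_cr)     # second-order / first-order moments
print(f"Pcr ~ {P_cr:,.0f} kN, magnification ~ {amplifier:.2f}")
print("exceeds 1.4 limit" if amplifier > 1.4 else "within 1.4 limit")
```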

Keywords: buckling, lateral stability, p-delta, second order

Procedia PDF Downloads 236
1161 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions, but the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both: it takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with making the optimizer a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages, and as a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or the abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed; the former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or 'levels': one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot's key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features that make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered on logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code that is both high-level and performant.
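
The flavor of "optimizations as metaprograms" can be sketched compactly: rewrite rules whose left-hand sides are matched against program terms by unification. The mini-IR and rule set below are illustrative inventions, not Peridot's syntax or rule format.

```python
# Pattern variables are strings starting with '?'; terms are nested tuples.
def unify(pattern, term, subst):
    """Match a pattern against a term, extending the substitution or failing."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:
            return unify(subst[pattern], term, subst)
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    if isinstance(term, str) and term.startswith("?"):
        return subst.get(term, term)
    if isinstance(term, tuple):
        return tuple(substitute(t, subst) for t in term)
    return term

RULES = [                                   # optimizations as rewrite rules
    (("mul", "?x", 1), "?x"),               # x * 1  ->  x
    (("add", "?x", 0), "?x"),               # x + 0  ->  x
    (("mul", "?x", 2), ("shl", "?x", 1)),   # strength reduction
]

def optimize(term):
    if isinstance(term, tuple):
        term = tuple(optimize(t) for t in term)   # optimize subterms first
    for pattern, replacement in RULES:
        s = unify(pattern, term, {})
        if s is not None:
            return optimize(substitute(replacement, s))
    return term

print(optimize(("add", ("mul", "y", 2), 0)))      # -> ('shl', 'y', 1)
```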

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 68
1160 A Framework for Secure Information Flow Analysis in Web Applications

Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa

Abstract:

Huge amounts of data and personal information are sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company's financial status, while enforcing them is very hard even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in the sense that the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation called an Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation of the data's confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach compared with existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided very positive feedback on the simplicity of the annotation task.
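
A minimal sketch of the propagation idea: security labels are pushed along dependence edges to a fixed point, then flows into lower-classified sinks are reported. The graph, two-level lattice, and names below are illustrative, not the paper's EPDG format.

```python
LEVELS = {"public": 0, "confidential": 1}   # higher = more confidential

# Dependence edges: the value of `src` flows into `dst`.
edges = [("db.credit_card", "tmp"), ("tmp", "html_output"),
         ("db.room_rate", "price_label")]

labels = {"db.credit_card": "confidential", "db.room_rate": "public"}
sinks = {"html_output": "public", "price_label": "public"}

# Propagate labels along edges to a fixed point (join = max level).
changed = True
while changed:
    changed = False
    for src, dst in edges:
        if LEVELS[labels.get(src, "public")] > LEVELS[labels.get(dst, "public")]:
            labels[dst] = "confidential"
            changed = True

for sink, allowed in sinks.items():
    if LEVELS[labels.get(sink, "public")] > LEVELS[allowed]:
        print(f"insecure flow: {sink} receives {labels[sink]} data")
```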

Keywords: web applications security, secure information flow, program dependence graph, database annotation

Procedia PDF Downloads 452
1159 Evaluation of Prestressed Reinforced Concrete Slab Punching Shear Using Finite Element Method

Authors: Zhi Zhang, Liling Cao, Seyedbabak Momenzadeh, Lisa Davey

Abstract:

Reinforced concrete (RC) flat slab-column systems are commonly used in residential and office buildings, as the flat slab provides efficient clearance, resulting in more stories at a given height than a regular reinforced concrete beam-slab system. Punching shear at slab-column joints is a critical component of two-way reinforced concrete flat slab design. The unbalanced moment at the joint is transferred via slab moment and shear forces. ACI 318 provides an equation to evaluate the punching shear under the design load. It is important to note that the design code considers gravity and environmental loads in the design load combinations, but it does not consider the effect of differential foundation settlement, which may be a governing load condition for the slab design. This paper describes how prestressed reinforced concrete slab punching shear is evaluated based on ACI 318 provisions and finite element analysis. A prestressed reinforced concrete slab under differential settlement is studied using a finite element modeling methodology. The punching shear check equation is explained, and the methodology for extracting punching-shear-check data from the finite element model is described and correlated with the corresponding code provisions. The study indicates that finite element analysis results should be carefully reviewed and processed in order to perform an accurate punching shear evaluation. Conclusions are made based on the case studies to help engineers understand punching shear behavior in prestressed and non-prestressed reinforced concrete slabs.
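
In outline, an ACI-style check compares the factored shear stress on the critical perimeter with the concrete capacity. The sketch below uses the familiar non-prestressed two-way shear limits in psi units with λ = 1 and illustrative numbers; the code's prestressed provisions and moment-transfer terms differ in detail.

```python
import math

# Illustrative two-way (punching) shear check, US units (psi, in, lb).
Vu = 95_000.0        # factored shear transferred at the joint, lb
fc = 5_000.0         # concrete strength, psi
d = 7.5              # effective slab depth, in
c1 = c2 = 18.0       # square interior column dimensions, in

beta = max(c1, c2) / min(c1, c2)
b0 = 2 * (c1 + d) + 2 * (c2 + d)    # critical perimeter at d/2 from the column
alpha_s = 40.0                       # interior column

vu = Vu / (b0 * d)                   # average factored shear stress
vc = min(4.0, 2.0 + 4.0 / beta, alpha_s * d / b0 + 2.0) * math.sqrt(fc)
phi = 0.75
print(f"vu = {vu:.0f} psi vs phi*vc = {phi * vc:.0f} psi -> "
      f"{'OK' if vu <= phi * vc else 'NG'}")
```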

Keywords: differential settlement, finite element model, prestressed reinforced concrete slab, punching shear

Procedia PDF Downloads 110
1158 The Construction of the Bridge between Mrs Dalloway and to the Lighthouse: The Combination of Codes and Metaphors in the Structuring of the Plot in the Work of Virginia Woolf

Authors: María Rosa Mucci

Abstract:

Tzvetan Todorov (1971) designs a model of narrative transformation in which the plot is constituted by difference and resemblance. This binary opposition is a synthesis of a central figure within narrative discourse: metaphor. Narrative operates as a metaphor, since it combines different actions through similarities within a common plot. It sounds paradoxical, however, that metonymy, and not metaphor, should be the key figure within narrative: it is metonymy that keeps the actions moving within the story, through syntagmatic relations. By the same token, this articulation of verbs makes it possible for the reader to engage in a dynamic interaction with the text, responding to the plot and mediating meanings with the contradictory external world. As Roland Barthes (1957) points out, two codes are irreversible within this process: the code of actions and the code of enigmas. Virginia Woolf constructs her plots through a process of symbolism; a scene is always enduring, not only because it stands for something else but also because it connotes it. The reader is forced to elaborate meaning at a mythological level, beyond the lines. In this research, we follow a qualitative content analysis, coding language through the proairetic (actions) and hermeneutic (enigmas) codes in Barthes's terms. Two novels in particular engage the reader in this process of construction: Mrs Dalloway (1925) and To the Lighthouse (1927). The bridge from the first to the second brings memories of childhood, allowing for the discovery of the enigmas hidden between the lines. What survives? Who survives? It is the reader's task to unravel these codes and to rethink the dialogue between plot and reader, contributing to the predominance of texts and the textuality of narratives.

Keywords: metonymy, code, metaphor, myth, textuality

Procedia PDF Downloads 31
1157 Numerical Simulation of Free Surface Water Wave for the Flow Around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional flows and the free surface waves generated by two moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for a NACA0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code to the prediction of the total resistance and its components from the tangential and normal forces on the hull wetted surface. The computed resistance and wave profiles are used to estimate the coefficient of total resistance for a Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study, and the computational grid is generated using GAMBIT 2.3.26. The k-ω SST shear-stress-transport model is used for turbulence modeling, and the volume-of-fluid (VOF) technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme for the VOF discretization. The results obtained compare well with the experimental data.
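
The resistance coefficient estimated from such computations is the usual non-dimensionalization of the integrated tangential and normal forces; a small sketch with placeholder values (not the paper's results) for a model-scale Wigley hull:

```python
# Non-dimensional total resistance from integrated pressure + friction forces.
rho = 998.2        # water density, kg/m^3
V = 1.45           # hull speed, m/s (placeholder)
S = 0.948          # wetted surface area, m^2 (placeholder model scale)

R_pressure = 1.9   # N, integrated normal-force (wave) component, placeholder
R_friction = 3.1   # N, integrated tangential-force component, placeholder

Ct = (R_pressure + R_friction) / (0.5 * rho * S * V**2)
print(f"Ct = {Ct:.4e}")
```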

Keywords: free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid

Procedia PDF Downloads 357
1156 Statistical Modeling for Permeabilization of a Novel Yeast Isolate for β-Galactosidase Activity Using Organic Solvents

Authors: Shweta Kumari, Parmjit S. Panesar, Manab B. Bera

Abstract:

The hydrolysis of lactose using β-galactosidase is one of the most promising biotechnological applications, with a wide range of potential uses in food processing industries. However, due to the intracellular location of the yeast enzyme and expensive extraction methods, the industrial application of enzymatic hydrolysis processes has been hampered. The use of a permeabilization technique can help overcome the problems associated with enzyme extraction and purification from yeast cells and help develop an economically viable process for utilizing whole-cell biocatalysts in food industries. In the present investigation, standardization of the permeabilization process of a novel yeast isolate was carried out using a statistical modeling approach known as response surface methodology (RSM) to achieve maximal β-galactosidase activity. The optimum operating conditions for the permeabilization process obtained by RSM were a 1:1 ratio of toluene (25%, v/v) and ethanol (50%, v/v), a temperature of 25.0 °C and a treatment time of 12 min, which gave an enzyme activity of 1.71 IU/mg DW.
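
RSM fits a second-order polynomial to the measured responses and then locates the settings that maximize it; a compact sketch for two of the factors (synthetic data, not the study's design) follows:

```python
import numpy as np

# Synthetic 3x3 design: temperature (C), time (min) -> activity (IU/mg DW).
X = np.array([[20, 8], [20, 12], [20, 16], [25, 8], [25, 12], [25, 16],
              [30, 8], [30, 12], [30, 16]], dtype=float)
y = np.array([1.20, 1.40, 1.30, 1.45, 1.70, 1.55, 1.35, 1.50, 1.30])

t, m = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(t), t, m, t * m, t**2, m**2])  # 2nd-order model
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(temp, time):
    return coef @ [1.0, temp, time, temp * time, temp**2, time**2]

print(f"predicted activity at 25 C, 12 min: {predict(25, 12):.2f} IU/mg DW")
```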

Keywords: β-galactosidase, optimization, permeabilization, response surface methodology, yeast

Procedia PDF Downloads 235
1155 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities, raising problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed by renovating facades, strengthening, and adding floors; however, most projects only solve a single problem, and it is difficult to comprehensively address both poor safety and the lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building, and then strengthening the building. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was assessed. The results show that the material strength of the masonry is low and that the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time-history analysis of the structure was carried out using SAP2000; the results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the ratio of the lateral stiffness of the transition masonry story to that of the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced with reference to many engineering examples. Two strengthening methods for the bottom frame structure are proposed: a steel-reinforced mesh mortar surface layer (SRMM) and base isolators. Time-history analyses of the two strengthened structures under frequent, fortification-level, and rare earthquakes were conducted in SAP2000. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by either method, compared with the original masonry structure. Previous earthquake damage indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story drift angle of the bottom frame floor meets the 1/100 limit of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquake is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods significantly improve the seismic performance of the bottom frame structure.

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 112
1154 Audit Examining Maternity Assessment Suite Triage Compliance with Birmingham Symptom Specific Obstetric Triage System in a London Teaching Hospital

Authors: Sarah Atalla, Shubham Gupta, Kim Alipio, Tanya Maric

Abstract:

Background: Chelsea and Westminster Hospital has introduced the Birmingham Symptom Specific Obstetric Triage System (BSOTS) for patients who present acutely to the Maternity Assessment Suite (MAS), to prioritise care by urgency. The primary objective was to evaluate whether BSOTS was used appropriately to assess patients (defined as a 90% threshold). The secondary objective was to assess whether patients were seen within their designated triage timeframe (defined as a 90% threshold). Methodology: MAS records were retrospectively reviewed for a randomly selected one-week period in 2020 (21/09/2020 - 27/09/2020), during which 189 patients presented to MAS. Data were collected on the presenting complaint, time of attendance (divided into four time categories), and triage colour code for the urgency of review by a doctor (red: immediately, orange: within 15 minutes, yellow: within 1 hour, green: within 4 hours). The number of triage waiting times that were breached and the outcome of each attendance were noted. Results: 49% of patients presenting to MAS during this period were triaged, which did not meet the 90% target. 67% of triaged patients were seen within the timeframe designated by their triage colour code, which also did not meet the 90% target. The most frequent reason for attendance was reduced fetal movements (30.5% of attendances). The busiest time of day was 06:01-12:00, which was also when the highest number of patients were not triaged (26 patients, or 54% of patients presenting in this time category). The most used triage category (59%) was the green colour code (to be seen by a doctor within 4 hours), followed by orange (24%), yellow (14%), and red (3%). 45% of triaged patients were admitted, whilst 55% were discharged. 62% of patients allocated to the green triage category were discharged, compared with 56% of yellow-category patients, 27% of orange-category patients, and 50% of red-category patients. The time of presentation was also associated with the level of urgency and the outcome: patients presenting from 12:01 to 18:00 were more likely to be discharged (72%) than those presenting from 00:01 to 06:00, of whom only 12.5% were discharged. Conclusion: The triage system for assessing the urgency of acutely presenting obstetric patients is being effectively utilised for only 49% of patients. There is potential to enhance use of the triage system to improve efficiency and promote patient safety. MAS was busiest at 06:01-12:00, when there was also the highest number of non-triaged patients; this highlights areas for improvement, including higher levels of staffing, better use of BSOTS to triage patients, and patient education.
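
The two audit metrics (the proportion of attendances triaged, and the proportion of triaged patients seen within the BSOTS timeframe) are straightforward to compute from attendance records; a sketch with made-up rows, not the audit's data:

```python
# Minutes allowed by each BSOTS colour code.
TARGET = {"red": 0, "orange": 15, "yellow": 60, "green": 240}

# (triaged?, colour, minutes waited) per attendance; made-up audit rows.
records = [(True, "green", 180), (True, "orange", 25), (False, None, None),
           (True, "yellow", 45), (True, "red", 0), (False, None, None)]

triaged = [r for r in records if r[0]]
pct_triaged = 100 * len(triaged) / len(records)
on_time = [r for r in triaged if r[2] <= TARGET[r[1]]]
pct_on_time = 100 * len(on_time) / len(triaged)

print(f"triaged: {pct_triaged:.0f}% (target 90%)")
print(f"seen within timeframe: {pct_on_time:.0f}% of triaged (target 90%)")
```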

Keywords: birmingham, BSOTS, maternal, obstetric, pregnancy, specific, symptom, triage

Procedia PDF Downloads 85