Search results for: Chinese code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2392

532 Child Homicide Victimization and Community Context: A Research Note

Authors: Bohsiu Wu

Abstract:

Among serious crimes, child homicide is a rather rare event. However, the killing of children stirs up a type of emotion in society beside which other criminal acts pale. This study examines the relevance of three possible community-level explanations for child homicide: social deprivation, female empowerment, and social isolation. The social deprivation hypothesis posits that child homicide results from a lack of resources in communities. The female empowerment hypothesis argues that a higher female status translates into a greater capability to prevent child homicide. Finally, the social isolation hypothesis regards child homicide as a result of a lack of social connectivity. Child homicide data, aggregated by US postal ZIP codes in California from 1990 to 1999, were analyzed with a negative binomial regression. The results of the negative binomial analysis demonstrate that social deprivation is the most salient and consistent predictor among all factors in explaining child homicide victimization at the ZIP-code level. Both social isolation and female labor force participation are weak predictors of child homicide victimization across communities. Further, results from the negative binomial regression show that it is the communities with a higher, not lower, degree of female labor force participation that are associated with a higher count of child homicide. It is possible that poor communities with a higher level of female employment have a lesser capacity to provide the necessary care and protection for children. Policies aimed at reducing social deprivation and strengthening female empowerment have the potential to reduce child homicide in the community.
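
As a rough illustration of the count model described above, the sketch below fits a negative binomial regression to a tiny set of hypothetical ZIP-code aggregates with statsmodels; the variable names and figures are placeholders, not the authors' California data.

```python
# Hedged sketch: negative binomial regression on hypothetical ZIP-code data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative ZIP-code aggregates: homicide counts and community covariates.
df = pd.DataFrame({
    "child_homicides":   [0, 1, 0, 3, 2, 0, 1, 5],
    "deprivation_index": [0.2, 0.8, 0.1, 1.5, 1.1, 0.3, 0.7, 2.0],
    "female_lfp":        [0.55, 0.48, 0.61, 0.42, 0.50, 0.58, 0.47, 0.40],
    "social_isolation":  [0.3, 0.6, 0.2, 0.9, 0.7, 0.4, 0.5, 1.0],
    "child_population":  [1200, 900, 1500, 700, 800, 1300, 950, 600],
})

# Negative binomial count model with the child population as exposure offset.
model = smf.glm(
    "child_homicides ~ deprivation_index + female_lfp + social_isolation",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["child_population"]),
).fit()
print(model.summary())
```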

Keywords: child homicide, deprivation, empowerment, isolation

Procedia PDF Downloads 190
531 Atmospheric Dispersion Modeling for a Hypothetical Accidental Release from the 3 MW TRIGA Research Reactor of Bangladesh

Authors: G. R. Khan, Sadia Mahjabin, A. S. Mollah, M. R. Mawla

Abstract:

Atmospheric dispersion modeling is important for any nuclear facility in order to predict the impact of radiological doses on the environment as well as on human health. To ensure the safety of workers and of the population at the plant site, atmospheric dispersion modeling and radiation dose calculations were therefore carried out for a hypothetical accidental release of airborne radionuclides from the 3 MW TRIGA research reactor at Savar, Bangladesh. The reactor core consists of 100 fuel elements (1.82245 cm in diameter and 38.1 cm in length) arranged in an annular core for steady-state and square-wave operation at a power level of 3 MW (thermal) and for pulsing with a maximum power level of 860 MWth. The fuel is a uniform mixture of 20% uranium and 80% zirconium hydride. Total effective doses (TEDs) to the public at various downwind distances were evaluated with the health physics computer code “HotSpot” developed by Lawrence Livermore National Laboratory, USA. The doses were estimated for different Pasquill stability classes (categories A-F) with site-specific averaged meteorological conditions. The meteorological data, such as average wind speed and the frequency distribution of wind direction, were also analyzed based on data collected near the reactor site. The estimated effective doses remain within the recommended maximum effective dose.
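
For orientation only, a toy Gaussian-plume estimate of the ground-level centreline air concentration downwind of a continuous release is sketched below; the dispersion-coefficient power laws, source term, wind speed, and release height are placeholder assumptions, not HotSpot's internal models or the reactor's actual data.

```python
# Illustrative Gaussian-plume estimate of ground-level air concentration.
import numpy as np

def sigma_yz(x_m, a_y=0.22, b_y=0.9, a_z=0.20, b_z=0.85):
    """Very rough power-law dispersion coefficients (placeholder constants)."""
    return a_y * x_m**b_y, a_z * x_m**b_z

def centerline_concentration(q_bq_per_s, u_m_s, release_height_m, x_m):
    """Ground-level, centreline concentration (Bq/m^3) for a continuous release."""
    sig_y, sig_z = sigma_yz(x_m)
    return (q_bq_per_s / (np.pi * sig_y * sig_z * u_m_s)
            * np.exp(-release_height_m**2 / (2.0 * sig_z**2)))

for x in (100.0, 500.0, 1000.0, 2000.0):   # downwind distances in metres
    c = centerline_concentration(q_bq_per_s=1.0e9, u_m_s=2.0,
                                 release_height_m=30.0, x_m=x)
    print(f"x = {x:6.0f} m : C ≈ {c:.3e} Bq/m^3")
```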

Keywords: accidental release, dispersion modeling, total effective dose, TRIGA

Procedia PDF Downloads 132
530 Criminal Responsibility of Minors in Russia: The Age of Liability and Penalties

Authors: Natalia Selezneva

Abstract:

The level of crime depends on a number of factors, such as political and economic instability, social inequality, and ineffective legislation. Juvenile delinquency occupies a special place in the overall level of crime. The United Nations developed the Standard Minimum Rules for the Administration of Juvenile Justice (the Beijing Rules) in order to ensure the rights of juvenile offenders under the various legal systems. Most countries support these recommendations, and Russia is no exception. Russia's criminal code establishes the minimum age of criminal liability; the types of crimes for which minors may be brought to justice; punishments; and the sentencing and execution of punishment for minors. However, these provisions cause heated debates in the scientific literature. The high level of juvenile crime indicates the ineffectiveness of the legal regulation of the criminal liability of minors. Ensuring compliance with international standards requires new and modern approaches to improve national legislation and the practice of its application. This goal will be achieved through the following tasks: 1. creating sub-branches of law regulating the legal status of minors; 2. improving the types of penalties; 3. allowing the use of alternative measures; 4. introducing a procedure for the extrajudicial settlement of conflicts. The criminal law of each country depends on its historical, national, and cultural characteristics. The development of Russian legislation taking international experience into account is extremely essential and will be a new stage in the formation of a legal state, especially in the sphere of protection of the rights of juvenile offenders.

Keywords: criminal law, juvenile offender, punishment, the age of criminal responsibility

Procedia PDF Downloads 529
529 An Empirical Study on the Integration of Listening and Speaking Activities with Writing Instruction for Middle School English Language Learners

Authors: Xueyan Hu, Liwen Chen, Weilin He, Sujie Peng

Abstract:

Writing is an important but challenging skill for English language learners. Due to the small amount of time allocated to writing classes at schools, students have relatively few opportunities to practice writing in the classroom. While the practice of integrating listening and speaking activities with writing instruction has been used with adult English language learners, its application to young English learners has seldom been examined because listening and speaking activities are challenging for them. This study integrated listening and speaking activities with writing instruction for middle school English language learners so as to improve their writing achievement and writing ability in terms of word use, coherence, and complexity. Guided by Gagne's information processing learning theory and memetics, the study delivered an 8-week writing instruction program to an experimental class (n=44) and a control class (n=48). Students in the experimental class participated in a series of listening and retelling activities about a writing sample the teacher used for writing instruction during each writing class. Students in the control class were taught traditionally, with the teacher's direct instruction using the writing sample. An ANCOVA analysis of the scores of students' writing, word use, Chinese-English translation, and text structure showed that the experimental writing instruction can significantly improve students' writing performance. Compared with the students in the control class, the students in the experimental class performed significantly better in word use and complexity in their essays. This study provides useful insights for the teaching of English writing to middle school English language learners. Teachers can skillfully use information technology to integrate the teaching of listening, speaking, and writing, taking students' language input and output into consideration. Teachers also need to select suitable, high-quality composition templates for students to ensure high-quality language input.
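
A minimal sketch of the ANCOVA comparison reported above, assuming post-test writing scores are compared across the two classes with pre-test scores as the covariate; the scores in the data frame are illustrative only.

```python
# Hedged sketch: ANCOVA on illustrative pre/post writing scores.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group": ["experimental"] * 4 + ["control"] * 4,   # class membership
    "pretest": [61, 58, 65, 70, 60, 59, 66, 69],        # covariate
    "posttest": [78, 74, 82, 85, 68, 65, 72, 74],       # outcome
})

# ANCOVA expressed as a linear model: outcome ~ covariate + group factor.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(anova_lm(model, typ=2))   # Type II ANOVA table with the group effect
```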

Keywords: writing instruction, retelling, English language learners, listening and speaking

Procedia PDF Downloads 76
528 Security Analysis of Mode S Transponder Technology and Attack Examples

Authors: M. Rutkowski, J. Cwiklak, M. Grzegorzewski, M. Adamski

Abstract:

All aircraft flying in Class A airspace have to be equipped with a Mode S transponder for ATC surveillance purposes. This technology was designed to provide a robust and dependable solution to localize, identify, and exchange data with the airplane. The purpose of this paper is to analyze potential hazards that result from the lack of any security or encryption at the design level. Secondary Surveillance Radars (SSR) rely on an active response from the airplane. An SSR installation broadcasts a directional interrogation signal to the planes in range on the 1030 MHz frequency with DPSK modulation. If the interrogation is correctly received by the transponder located on the plane, a reply is sent on 1090 MHz with PPM modulation containing the plane's squawk code, barometric altitude, GPS coordinates, and 24-bit unique address code. This technology does not use any kind of encryption. All of these specifications can easily be found on the internet. Since there is no encryption or security measure to ensure the credibility of the sender and message, it is highly hazardous to rely on such technology to ensure the safety of air traffic. The only thing that identifies the airplane is the 24-bit unique address. Most planes have been sniffed by aviation enthusiasts and cataloged in web databases. At the time of writing this article, PoFung Technologies has announced that it is planning to release an all-band SDR transceiver; such a device would be more than enough to build one's own Mode S transponder. With a fake transponder, a potential terrorist can identify as a different airplane. By replacing the transponder in poorly controlled airspace, hijackers can enter another airspace identifying themselves as another plane and land in the desired area.
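
The lack of protection is easy to demonstrate: for extended squitter (DF17) replies, the 24-bit address sits in the clear in bits 9-32 of the frame. The sketch below decodes a commonly published sample message; it is an illustration of the frame layout, not the authors' tooling.

```python
# Minimal sketch: read the downlink format and ICAO address of a DF17 frame.
def decode_icao(hex_frame: str) -> str:
    """Return downlink format and ICAO address of a raw Mode S hex frame."""
    bits = bin(int(hex_frame, 16))[2:].zfill(len(hex_frame) * 4)
    downlink_format = int(bits[0:5], 2)          # first 5 bits: DF
    icao = format(int(bits[8:32], 2), "06X")     # bits 9-32: 24-bit address
    return f"DF={downlink_format}, ICAO={icao}"

# Widely circulated ADS-B test message, used here purely for illustration.
print(decode_icao("8D4840D6202CC371C32CE0576098"))
```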

Keywords: flight safety, hijack, Mode S transponder, security analysis

Procedia PDF Downloads 293
527 Evaluation of Immune Checkpoint Inhibitors in Cancer Therapy

Authors: Mir Mohammad Reza Hosseini

Abstract:

In recent years, immune checkpoint inhibitors have attracted attention as one of the most promising kinds of immunotherapy on the horizon. There has been a specific emphasis on the immune checkpoint molecules cytotoxic T-lymphocyte antigen-4 (CTLA-4) and programmed cell death protein 1 (PD-1). In 2011, ipilimumab, the first antibody blocking an immune checkpoint (CTLA-4), was approved. It is now documented that established tumors have many mechanisms for suppressing the antitumor immune response, including the production of suppressive cytokines, the recruitment of immunosuppressive immune cells, and the upregulation of coinhibitory receptors known as immune checkpoints. This was quickly followed by the development of monoclonal antibodies targeting PD-1 (pembrolizumab and nivolumab) and PD-L1 (atezolizumab and durvalumab). Anti-PD-1/PD-L1 antibodies have become some of the most widely prescribed anticancer therapies. We also compare and contrast their present place in cancer therapy and the patterns of immune-related toxicities, and discuss the role of dual immune checkpoint inhibition and strategies for the management of immune-related adverse events. In this review, the working mechanisms and current development of numerous immune checkpoint inhibitors are summarized, while their interactions and recent progress in synergistic therapies combining checkpoint inhibition with additional immunotherapy, chemotherapy, phototherapy, and radiotherapy in preclinical and clinical studies over the past five years are described and highlighted. Lastly, we critically evaluate these approaches and attempt to identify their strengths and weaknesses based on preclinical and clinical data.

Keywords: checkpoint, cancer therapy, PD-1, PD-L1, CTLA-4, immunosuppressive

Procedia PDF Downloads 161
526 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
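
For readers unfamiliar with the estimation step, the sketch below shows the Fisher scoring iteration on a simple Poisson log-link regression rather than on the paper's composite-distribution likelihood; the update beta <- beta + I(beta)^(-1) U(beta) is the same idea.

```python
# Minimal Fisher scoring illustration on simulated Poisson regression data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design matrix
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))                   # simulated counts

beta = np.zeros(2)                                        # starting values
for _ in range(25):
    mu = np.exp(X @ beta)                                 # mean under current fit
    score = X.T @ (y - mu)                                # U(beta)
    fisher_info = X.T @ (mu[:, None] * X)                 # I(beta) = X' diag(mu) X
    step = np.linalg.solve(fisher_info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:                      # convergence check
        break

print("MLE of regression parameters:", beta)
```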

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 21
525 Thermo-Mechanical Behavior of Steel-Wood Connections of Wooden Structures Under the Effect of a Fire

Authors: Ahmed Alagha, Belkacem Lamri, Abdelhak Kada

Abstract:

Steel-wood assemblies often have complex geometric configurations whose overall behavior under the effect of a fire is conditioned by the thermal response of the combined steel and wood materials, whose thermal characteristics are greatly influenced by high temperatures. The objective of this work is to study the thermal behavior of a steel-wood connection, with or without insulating material, subjected to the ISO 834 standard fire model. The analysis is developed analytically using the Eurocodes and numerically, by the finite element method, through the ANSYS calculation code. The design of the connections is evaluated at room temperature for the cases of single shear and double shear. The thermal behavior of the connections is simulated in the transient state while taking into account the modes of heat transfer by convection and by radiation. The variation of temperature as a function of time is evaluated at different positions in the connections, taking into account the heat produced and the formation of the char layer. The results concern the temperature distributions in the connection elements as a function of the duration of the fire. The results of the thermal analysis show that the temperature increases rapidly and reaches more than 260 °C in the steel material after one hour of exposure to fire. The temperature development in the wood material differs from that in steel because of its thermal properties: the wood heats up on the outside and burns, and its surface can reach very high temperatures at certain points.
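
As a rough companion to the transient analysis, the sketch below steps an ISO 834 gas temperature through a lumped-capacitance heat balance (convection plus radiation) for an unprotected steel part; the section factor, emissivity, and material constants are placeholder assumptions, not the studied connection's values.

```python
# Hedged sketch: ISO 834 fire driving a lumped-capacitance steel element.
import math

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2K^4
H_CONV = 25.0            # convective coefficient for a standard fire, W/m^2K
EMISSIVITY = 0.7         # resultant emissivity (placeholder)
RHO_C = 7850.0 * 600.0   # steel density x specific heat, J/m^3K (simplified)
AM_V = 200.0             # section factor Am/V of the steel part, 1/m (assumed)

def iso834(t_min):
    """ISO 834 gas temperature in deg C at time t (minutes)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

theta = 20.0                        # steel temperature, deg C
dt = 5.0                            # time step, s
for step in range(int(3600 / dt)):  # one hour of fire exposure
    t_gas = iso834(step * dt / 60.0)
    h_net = (H_CONV * (t_gas - theta)
             + EMISSIVITY * SIGMA * ((t_gas + 273.15)**4 - (theta + 273.15)**4))
    theta += AM_V / RHO_C * h_net * dt   # lumped-capacitance update

print(f"Unprotected steel temperature after 60 min ≈ {theta:.0f} °C")
```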

Keywords: Eurocode 5, finite elements, ISO834, simple shear, thermal behaviour, wood-steel connection

Procedia PDF Downloads 79
524 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts

Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti

Abstract:

Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in the industrial manufacturing world. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) based on the octree strategy is used in order to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented due to the orthotropic and heterogeneous constitutive nature of FFF components and because of the differences between the strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
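
A minimal sketch of the plane-stress Tsai-Wu failure index evaluated inside such an optimization loop is given below; the strength values are placeholders, not measured FFF material data.

```python
# Hedged sketch: plane-stress Tsai-Wu failure index (failure predicted when >= 1).
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """s1, s2, t12: in-plane stresses; Xt/Xc, Yt/Yc: tensile/compressive
    strengths along and across the raster; S: in-plane shear strength."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)        # common default interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Placeholder strengths (MPa) reflecting tension/compression asymmetry.
idx = tsai_wu_index(s1=30.0, s2=10.0, t12=8.0,
                    Xt=45.0, Xc=40.0, Yt=30.0, Yc=35.0, S=25.0)
print(f"Tsai-Wu index: {idx:.2f}")
```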

Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization

Procedia PDF Downloads 56
523 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing

Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger

Abstract:

This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (G-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
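
A hedged sketch of the feature-extraction step is shown below: simple thermal metrics pulled from one synthetic frame along an assumed printed path. The frame, path coordinates, and thresholds are placeholders, not the sensor's calibration data or the authors' vision models.

```python
# Hedged sketch: thermal metrics along a printed path in one sensor frame.
import numpy as np

frame = 200 + 60 * np.random.rand(120, 160)     # synthetic thermal frame, deg C
path_px = [(60, c) for c in range(20, 140)]     # pixels along the printed path

temps = np.array([frame[r, c] for r, c in path_px])
features = {
    "mean_temp": float(temps.mean()),
    "temp_std": float(temps.std()),             # proxy for extrusion consistency
    "hot_spots": int((temps > 250).sum()),      # count of overheated pixels
}
print(features)
```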

Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles

Procedia PDF Downloads 29
522 Impact of Audit Committee on Real Earnings Management: Cases of Netherlands

Authors: Sana Masmoudi Mardassi, Yosra Makni Fourati

Abstract:

Regulators highlight the importance of the Audit Committee (AC) as a key internal corporate governance mechanism. One of the most important roles of this committee is to oversee the financial reporting process. The purpose of this paper is to examine the link between the characteristics of an audit committee and financial reporting quality by investigating whether audit committee characteristics are associated with improved financial reporting quality, especially reduced Real Earnings Management. The study uses panel data from 80 nonfinancial companies listed on the Amsterdam Stock Exchange during the period 2010-2017. To measure audit committee characteristics, four proxies were used: audit committee independence, financial expertise, gender diversity, and AC meetings. A linear regression model was used to identify the influence of this set of audit committee characteristics on real earnings management after controlling for audit committee size, leverage, firm size, loss, growth, and board size. The research provides empirical evidence on the association between audit committee independence, financial expertise, gender diversity, and meetings and Real Earnings Management (REM) as a proxy for financial reporting quality. The study finds that independence and AC gender diversity are strongly related to financial reporting quality; in fact, these two characteristics constrain REM. The results also suggest that AC financial expertise reduces, to some extent, the likelihood of engaging in REM. These conclusions provide support for the audit committee requirements under the Dutch Corporate Governance Code regarding gender diversity and AC meetings.
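
The regression specification can be sketched as below, with a real-earnings-management proxy regressed on the four audit committee characteristics plus the controls; the synthetic data frame and column names are illustrative, not the authors' Amsterdam sample.

```python
# Hedged sketch: REM proxy regressed on AC characteristics with firm controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 640                         # roughly 80 firms x 8 years
df = pd.DataFrame({
    "rem": rng.normal(size=n),                  # real earnings management proxy
    "ac_independence": rng.uniform(0.3, 1.0, n),
    "ac_fin_expertise": rng.uniform(0.0, 1.0, n),
    "ac_gender_div": rng.uniform(0.0, 0.6, n),
    "ac_meetings": rng.integers(2, 10, n),
    "ac_size": rng.integers(3, 7, n),
    "leverage": rng.uniform(0.0, 0.8, n),
    "firm_size": rng.normal(6.0, 1.0, n),
    "loss": rng.integers(0, 2, n),
    "growth": rng.normal(0.05, 0.1, n),
    "board_size": rng.integers(5, 13, n),
})

model = smf.ols(
    "rem ~ ac_independence + ac_fin_expertise + ac_gender_div + ac_meetings"
    " + ac_size + leverage + firm_size + loss + growth + board_size",
    data=df,
).fit()
print(model.summary())
```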

Keywords: audit committee, financial expertise, independence, real earnings management

Procedia PDF Downloads 160
521 Earthquake Hazards in Manipur: Causal Factors and Remedial Measures

Authors: Kangujam Monika, Kiranbala Devi Thokchom, Soibam Sandhyarani Devi

Abstract:

Earthquake is a major natural hazard in India. Manipur, located in the North-Eastern Region of India, is one of the locations in the region most prone to earthquakes, since it lies in an area where the Indian and Eurasian tectonic plates meet and falls in seismic Zone V, the most severe intensity zone according to the IS Code. Some recent earthquakes recorded in Manipur are the M 6.7 event with epicenter at Tamenglong (January 4, 2016), the M 5.2 event with epicenter at Churachandpur (February 24, 2017), and most recently the M 4.4 event with epicenter at Thoubal (June 19, 2017). In these recent earthquakes, some houses and buildings were damaged and landslides also occurred. A field study was carried out, and an overview of the various causal factors involved in earthquake damage in Manipur is discussed. It is found that improper planning, poor design, negligence, structural irregularities, poor quality materials, construction of foundations without proper site soil investigation, and non-implementation of remedial measures, etc., are possibly the main causal factors for damage in Manipur during earthquakes. The study also suggests that proper design of structures and foundations along with soil investigation, ground improvement methods, use of modern construction techniques, consultation with engineers, mass awareness, etc., might be effective solutions to control the hazard in many locations. An overview of the analysis pertaining to earthquakes in Manipur, together with an on-going detailed site-specific geotechnical investigation, is presented.

Keywords: Manipur, earthquake, hazard, structure, soil

Procedia PDF Downloads 205
520 Serial Position Curves under Compressively Expanding and Contracting Schedules of Presentation

Authors: Priya Varma, Denis John McKeown

Abstract:

Psychological time, unlike physical time, is believed to be ‘compressive’ in the sense that the mental representations of a series of events may be internally arranged with ever-decreasing inter-event spacing (looking back from the most recently encoded event). If this is true, the record within immediate memory of recent events is severely temporally distorted. Although this notion of temporal distortion of the memory record is captured within some theoretical accounts of human forgetting, notably temporal distinctiveness accounts, the way in which the fundamental nature of the distortion underpins memory and forgetting more broadly is barely recognised, or at least rarely directly investigated. Our intention here was to manipulate the spacing of items for recall in order to ‘reverse’ this supposed natural compression within the encoding of the items. In Experiment 1, three schedules of presentation (expanding, contracting, and fixed irregular temporal spacing) were created using logarithmic spacing of the words for both free and serial recall conditions. The results of recall of lists of 7 words showed statistically significant benefits of temporal isolation, and, more excitingly, the contracting word series (which we may think of as reversing the natural compression within the mental representation of the word list) showed the best performance. Experiment 2 tested for effects of active verbal rehearsal in the recall task; this reduced but did not remove the benefits of our temporal scheduling manipulation. Finally, a third experiment used the same design but with Chinese characters as memoranda, in a further attempt to subvert possible verbal maintenance of items. One change to the design here was to introduce a probe item following the sequence of items and to record response times to this probe. Together the outcomes of the experiments broadly support the notion of temporal compression within immediate memory.
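
A small sketch of how logarithmically spaced expanding and contracting presentation schedules can be generated is given below; the item count and total list duration are illustrative, not the exact timings used in the experiments.

```python
# Hedged sketch: log-spaced onset times for expanding/contracting schedules.
import numpy as np

def log_schedule(n_items=7, total_ms=14000, contracting=True):
    """Return onset times (ms) with log-spaced inter-item intervals."""
    gaps = np.diff(np.logspace(0, 1, n_items))       # log-spaced raw gaps
    gaps = gaps / gaps.sum() * total_ms               # scale to total duration
    if contracting:                                   # big gaps first, shrinking
        gaps = gaps[::-1]
    return np.concatenate([[0.0], np.cumsum(gaps)])

print("contracting:", np.round(log_schedule(contracting=True)))
print("expanding:  ", np.round(log_schedule(contracting=False)))
```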

Keywords: memory, serial position curves, temporal isolation, temporal schedules

Procedia PDF Downloads 210
519 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn the prediction model for law source, article, and charge simultaneously based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the big performance gap between articles and charges, we used a bipartite graph to visualize the relationship between the articles and charges and found that the reason for the poor prediction performance was actually the wording precision of charge names. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as “negligent injury”, “negligent death”, “business injury”, “driving business injury”, or “non-driving business injury”. As another example, Article 10 of the Drug Hazard Control Regulations can be charged as “Drug Control Regulations” or “Drug Hazard Control Regulations”. In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
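
The sentence-pair formulation proposed at the end of the abstract can be sketched as follows with Hugging Face Transformers; the model name, label count, and the Chinese placeholder strings are assumptions for illustration, not the authors' fine-tuned system or corpus.

```python
# Hedged sketch: (crime facts, candidate article) encoded as a BERT pair.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)   # untrained head: applies / does not apply

facts = "被告因過失駕駛致行人受傷"            # placeholder crime facts
article = "刑法第284條(過失傷害)之條文內容"   # placeholder article text

inputs = tokenizer(facts, article, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("P(article applies) =", torch.softmax(logits, dim=-1)[0, 1].item())
```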

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 117
518 The Culture of Journal Writing among Manobo Senior High School Students

Authors: Jessevel Montes

Abstract:

This study explored the culture of journal writing among Senior High School Manobo students. The purpose of this qualitative morpho-semantic and syntactic study was to discover the morphological, semantic, and syntactic features of the students' written output through the morphological, semantic, and syntactic categories present in their journal writing. Beliefs and practices embedded in their norms, values, and ideologies were also identified. The study was conducted among the Manobo students in the Senior High Schools of Central Mindanao, particularly in the Division of North Cotabato. Findings revealed that, morphologically, the features that flourished are the following: subject-verb concordance, tenses, pronouns, prepositions, articles, and the use of adjectives. Semantically, the features are the following: word choice, idiomatic expression, borrowing, and vernacular. Syntactically, the features are the types of sentences according to structure and function, and the dominance of code-switching and run-on sentences. Lastly, as to the beliefs and practices embedded in the norms, values, and ideologies of their journal writing, the major themes are: valuing education; family and friends as treasure; preservation of culture; and emancipation from the bondage of poverty. This study has shed light on the writing capabilities and weaknesses of the Manobo students when it comes to the English language. Further, such insight into language learning problems is useful to teachers because it provides information on common trouble spots in language learning, which can be used in the preparation of effective teaching materials.

Keywords: applied linguistics, culture, morpho-semantic and syntactic analysis, Manobo Senior High School, Philippines

Procedia PDF Downloads 117
517 Modelling Interactions between Saturated and Unsaturated Zones by Hydrus 1D, Plain of Kairouan, Central Tunisia

Authors: Mariem Saadi, Sabri Kanzari, Adel Zghibi

Abstract:

In semi-arid areas like the Kairouan region, with constant irrigation with saline water and the overuse of groundwater resources, the salinization of soils and aquifers has become an increasing concern. In this study, a methodology was developed to evaluate the groundwater contamination risk based on the hydraulic properties of the unsaturated zone. Two soil profiles with different ranges of salinity, one located in the north of the plain and the other in the south of the plain (each 30 m deep), both characterized by direct recharge of the aquifer, were chosen. Simulations were conducted with the Hydrus-1D code using measured precipitation data for the period 1998-2003 and calculated evapotranspiration for both chosen profiles. Four combinations of initial conditions of water content and salt concentration were used for the simulation process in order to find the best match between simulated and measured values. The successful calibration of Hydrus-1D allowed the investigation of several scenarios in order to assess the contamination risk under different natural conditions. The aquifer contamination risk is related to the natural conditions: it increased under climate change and rising temperature and decreased in the presence of a clay layer in the unsaturated zone. Hydrus-1D was a useful tool to predict the groundwater level and quality in the case of direct recharge and in the absence of any information on the soil layers except for the texture.

Keywords: Hydrus-1D, Kairouan, salinization, semi-arid region, solute transport, unsaturated zone

Procedia PDF Downloads 177
516 The Formulation of the Mecelle and Other Codified Laws in the Ottoman Empire: Transformation Overturning the Sharia Principles

Authors: Tianqi Yin

Abstract:

The sharia had been the legislative basis of the Ottoman Empire since its emergence. The authority of the sharia was superlative in Islamic society compared to the power of the sulta, the nominal ruler of the nation, regulating essentially every aspect of people's lives according to an ethical code. In modernity, however, as European powers employed force to re-engineer the Islamic world to make it more like their own, a society ruled by a state, the Ottoman legislative system encountered the great challenge of adopting codified laws to replace the sharia, with the formulation of the Mecelle being a prominent case. Interpretations of this transformation have been contentious, with the key debate revolving around whether these codified laws are authentic representations of sharia or alien legal formulations authorized by the modern nation-state under heavy European colonial influence. Because of the differences in methodology among the diverse theories, challenges to reaching a universal conclusion on this issue remain. This paper argues that the formulation of the Mecelle and other codified laws is a discontinuity of sharia due to the influence of European modernity, and that the emphasis on elements of Islamic law is a tactic employed to promote this process. These codified laws signal a complete social transformation from an Islamic society ruled by the sharia to a replication of the European society that is ruled by the comprehensive ruling system of the modern state. In addition to advancing the discussion on the characterization of the codification movement in the Ottoman Empire in modernity, the research also contributes to determining the nature of the modern codification movement globally.

Keywords: codification, mecelle, modernity, sharia, ottoman empire

Procedia PDF Downloads 86
515 Timber Urbanism: Assessing the Carbon Footprint of Mass-Timber, Steel, and Concrete Structural Prototypes for Peri-Urban Densification in the Hudson Valley’s Urban Fringe

Authors: Eleni Stefania Kalapoda

Abstract:

The current fossil-fuel-based urbanization pattern and the estimated human population growth are increasing the environmental footprint on our planet's precious resources. To mitigate the estimated skyrocketing of greenhouse gas emissions associated with the construction of new cities and infrastructure over the next 50 years, we need a radical rethink of our approach to construction to deliver a net-zero built environment. This paper assesses the carbon footprint of a mass-timber, a steel, and a concrete structural alternative for peri-urban densification in the Hudson Valley's urban fringe, and examines the updated policy and building code adjustments that support synergies between timber construction in city making and sustainable management of timber forests. By quantifying the carbon footprint of a structural prototype for four different material assemblies (post-tensioned concrete, mass timber, composite steel, and a timber/steel/concrete hybrid) applicable to the three updated building typologies of the IBC 2021 (Type IV-A, Type IV-B, Type IV-C), which range from nine to eighteen stories, and by scaling up that structural prototype to the size of a neighborhood district, the paper presents a quantitative and a qualitative approach for a forest-based construction economy as well as a resilient and more just supply-chain framework that ensures the well-being of both the forest and its inhabitants.
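
The comparison reduces to multiplying bills of quantities by embodied-carbon factors, as in the back-of-the-envelope sketch below; all quantities and factors are placeholder assumptions, not the study's prototype data.

```python
# Hedged sketch: cradle-to-gate (A1-A3) comparison with placeholder factors.
EMBODIED_CARBON = {           # kgCO2e per unit of material (illustrative)
    "concrete_m3": 300.0,
    "steel_kg": 1.5,
    "mass_timber_m3": 120.0,  # product stage only; biogenic storage excluded
}

prototypes = {                # hypothetical bills of quantities per prototype
    "post-tensioned concrete": {"concrete_m3": 900, "steel_kg": 60000},
    "composite steel":         {"steel_kg": 250000, "concrete_m3": 300},
    "mass timber":             {"mass_timber_m3": 1100, "steel_kg": 15000},
}

for name, bill in prototypes.items():
    total = sum(qty * EMBODIED_CARBON[item] for item, qty in bill.items())
    print(f"{name:25s} ≈ {total/1000:8.1f} tCO2e (placeholder factors)")
```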

Keywords: mass-timber innovation, concrete structure, carbon footprint, densification

Procedia PDF Downloads 100
514 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for cost-sensitive applications. The SAE J2716 SENT (single edge nibble transmission) protocol transmits simple digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator generates tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted and received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it occupies about 2,500 gates.
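
The CRC generator/checker can be sketched in a few lines; the seed, lookup table, and the extra "recommended method" round below follow commonly published descriptions of the J2716 CRC-4 (polynomial x^4 + x^3 + x^2 + 1, seed 0b0101) and should be checked against the spec revision actually targeted by the interface.

```python
# Hedged sketch: nibble-wise CRC-4 as commonly described for SENT frames.
CRC4_TABLE = [0, 13, 7, 10, 14, 3, 9, 4, 1, 12, 6, 11, 15, 2, 8, 5]

def sent_crc4(data_nibbles, recommended=True):
    """Return the 4-bit CRC for a list of SENT data nibbles (0..15 each)."""
    crc = 0b0101                                  # assumed seed
    for nibble in data_nibbles:
        crc = CRC4_TABLE[crc] ^ (nibble & 0xF)    # legacy per-nibble update
    if recommended:                               # extra zero-data round
        crc = CRC4_TABLE[crc]
    return crc

# Example payload: six data nibbles (two 12-bit sensor channels).
nibbles = [0x3, 0xA, 0x5, 0x1, 0xC, 0x7]
print(f"CRC nibble to append: 0x{sent_crc4(nibbles):X}")
```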

Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL

Procedia PDF Downloads 292
513 A Study on the Urban Design Path of Historical Block in the Ancient City of Suzhou, China

Authors: Yan Wang, Wei Wu

Abstract:

In recent years, with the gradual change of the Chinese urban development mode from 'incremental development' to 'stock-based renewal', the 'grand scene' urban design method of the past can only cope with the planning and construction of incremental spaces such as new towns and new districts, while the problems involved in the renewal of stock lands such as the historic blocks of ancient cities are more complex. 'Simplified' large-scale demolition and construction may damage the ancient city's texture and overall cultural atmosphere; thus, it is necessary to re-explore the urban design path for historical blocks in the context of ancient city conservation. Through the study of the cultural context of the ancient city of Suzhou in China and the interpretation of its current characteristics, this paper explores methods and paths for the renewal of historical and cultural blocks in the ancient city. It takes the No. 12 and No. 13 historical blocks in the ancient city of Suzhou as examples, coordinating the spatial layout and the landscape and shaping the regional characteristics to improve the quality of life in the ancient city. This paper analyses the idea of conservation and regeneration from the aspects of culture, life, business form, and transport. Guided by the planning concept of 'block repair and cultural infiltration', it puts forward the urban design path of 'conservation priority, activation and utilization, organic renewal and strengthened guidance', with a view to continuing the cultural context and stimulating the vitality of the ancient city, so as to realize the integration of history, modernity, space, and culture. As one of the few studies of urban design within the scope of the Suzhou ancient city, the paper aims to explore concepts and methods of urban design for historic blocks on the basis of the conservation of history, space, and culture, and provides a reference for other similar types of urban construction.

Keywords: historical block, Suzhou ancient city, stock-based renewal, urban design

Procedia PDF Downloads 140
512 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor

Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira

Abstract:

Prompt-Gamma Activation Analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, the installation of a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000, including manpower. Nevertheless, a cost-effective approach involves leveraging an existing neutron beam facility to create a hybrid system integrating PGAA and Neutron Tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device. The NT facility offers a slightly colder thermal flux and provides shielding for user protection. The key additional requirement involves designing detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation produced by fission. The aim is to achieve a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.
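
For intuition about the question the shielding calculation answers, a toy Monte Carlo estimate of beam transmission through a plug is sketched below; the attenuation coefficient and thickness are placeholders, and scattering build-up is ignored, unlike in the MCNP6 model.

```python
# Toy Monte Carlo sketch: fraction of photons crossing a plug unattenuated.
import random

MU = 0.06        # linear attenuation coefficient, 1/mm (placeholder)
THICKNESS = 100  # plug thickness along the beam, mm (placeholder)
N = 100_000

# A photon is "transmitted" if its sampled free path exceeds the plug thickness.
transmitted = sum(1 for _ in range(N)
                  if random.expovariate(MU) > THICKNESS)
print(f"Transmitted fraction ≈ {transmitted / N:.4f}")
```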

Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis

Procedia PDF Downloads 60
511 International Classification of Primary Care as a Reference for Coding the Demand for Care in Primary Health Care

Authors: Souhir Chelly, Chahida Harizi, Aicha Hechaichi, Sihem Aissaoui, Leila Ben Ayed, Maha Bergaoui, Mohamed Kouni Chahed

Abstract:

Introduction: The International Classification of Primary Care (ICPC) is part of the morbidity classification system. It has 17 chapters, and each reason is coded by an alphanumeric code: the letter corresponds to the chapter, the number to a rubric within the chapter. The objective of this study is to show the utility of this classification in the coding of the reasons for the demand for care in primary health care (PHC), along with its advantages and limits. Methods: This is a cross-sectional descriptive study conducted in 4 PHC centers in the Ariana district. Data on the demand for care during 2 days in the same week were collected. The coding of the information was done according to the ICPC. The data were entered and analyzed with the EPI Info 7 software. Results: A total of 523 demands for care were investigated. The patients who came for consultation are predominantly female (62.72%). Most of the consultants are young, with an average age of 35 ± 26 years. In the ICPC, there are 7 rubrics: 'infections' is the most common reason with 49.9%, 'other diagnoses' with 40.2%, 'symptoms and complaints' with 5.5%, 'trauma' with 2.1%, 'procedures' with 2.1%, and 'neoplasm' with 0.3%. The main advantage of the ICPC is that it is a standardized tool. It is very suitable for classifying the reasons for the demand for care in PHC according to their specificity and for use in computerized medical records in PHC. Its current limitations are related to the difficulty of classifying some reasons for the demand for care. Conclusion: The ICPC has been developed to provide healthcare with a coding reference that takes its specificity into account. The ICD (CIM) is in its 10th revision; it would gain from revision to revision in becoming more efficient, so that it can be generalized and used by PHC teams.
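
A small sketch of how the alphanumeric ICPC structure can be exploited when coding reasons for encounter is given below; the chapter subset and example codes are illustrative only.

```python
# Hedged sketch: splitting ICPC codes into chapter letter and rubric number.
ICPC_CHAPTERS = {                       # letter -> chapter (partial subset)
    "A": "General and unspecified",
    "D": "Digestive",
    "K": "Cardiovascular",
    "R": "Respiratory",
    "S": "Skin",
}

def parse_icpc(code: str) -> str:
    """Split an ICPC code such as 'R74' into chapter and rubric number."""
    letter, number = code[0].upper(), code[1:]
    chapter = ICPC_CHAPTERS.get(letter, "Other chapter")
    return f"{code}: chapter {letter} ({chapter}), rubric {number}"

for reason in ["R74", "K86", "S88"]:    # example reasons for encounter
    print(parse_icpc(reason))
```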

Keywords: international classification of primary care, medical file, primary health care, Tunisia

Procedia PDF Downloads 259
510 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. The evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. The erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid-particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.
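
The erosion estimate rests on a particle-impact erosion-rate expression of the general form sketched below (erosion per unit face area summed over impinging particles); the constants and impact-angle function are placeholders, not calibrated bottom-ash/steel values.

```python
# Hedged sketch: generic particle-impact erosion-rate expression.
import math

def impact_angle_function(alpha_rad):
    """Placeholder impact-angle function f(alpha)."""
    return math.sin(alpha_rad)

def erosion_rate(particles, face_area_m2, diam_coeff=1.8e-9, vel_exponent=2.6):
    """Erosion rate (kg/m^2/s) from (mass_flow, impact_angle, speed) tuples."""
    total = sum(m_dot * diam_coeff * impact_angle_function(alpha)
                * speed**vel_exponent
                for m_dot, alpha, speed in particles)
    return total / face_area_m2

impacts = [(1.0e-4, math.radians(30), 3.0), (8.0e-5, math.radians(60), 2.5)]
print(f"Erosion rate ≈ {erosion_rate(impacts, face_area_m2=1.0e-4):.3e} kg/m^2/s")
```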

Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model

Procedia PDF Downloads 403
509 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experimental tests coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time. Experimental, analytical, as well as numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities and obtain the ballistic curve and the ballistic limit. The energy balance is also reported, together with the energy absorbed by the aluminum. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose-shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is equal to 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation process. The ballistic curve obtained numerically was compared with and verified against the experimental one, and the failure patterns are presented using optimal mesh densities which provide the stability of the results. A good agreement between the numerical and experimental results is observed.
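
Ballistic data of this kind are commonly summarised with the Recht-Ipson/Lambert-Jonas relation Vr = a(Vi^p - Vbl^p)^(1/p); the sketch below fits it to illustrative velocity pairs, not the measured aluminium results.

```python
# Hedged sketch: fitting a ballistic curve to illustrative (Vi, Vr) data.
import numpy as np
from scipy.optimize import curve_fit

def recht_ipson(vi, a, p, vbl):
    """Residual velocity model; zero below the ballistic limit vbl."""
    return a * np.clip(vi**p - vbl**p, 0.0, None) ** (1.0 / p)

# Hypothetical (initial, residual) velocities in m/s.
vi = np.array([60, 80, 100, 120, 140, 160, 180], dtype=float)
vr = np.array([15, 52, 77, 100, 121, 142, 162], dtype=float)

params, _ = curve_fit(recht_ipson, vi, vr, p0=[1.0, 2.0, 50.0])
a, p, vbl = params
print(f"a = {a:.2f}, p = {p:.2f}, ballistic limit ≈ {vbl:.1f} m/s")
```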

Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation

Procedia PDF Downloads 306
508 Utilizing Minecraft Java Edition for the Application of Fire Disaster Procedures to Establish Fire Disaster Readiness for Grade 12 STEM students of DLSU-IS

Authors: Aravella Flores, Jose Rafael E. Sotelo, Luis Romulus Phillippe R. Javier, Josh Christian V. Nunez

Abstract:

This study analyzes the performance of Grade 12 STEM students of De La Salle University - Integrated School who have completed the Disaster Readiness and Risk Reduction (DRRR) course in handling fire hazards through Minecraft Java Edition. This platform is suitable because fire DRRR is challenging to learn in a practical setting, and it is questionable whether textbook knowledge alone successfully translates into actual practice. The purpose of this study is to determine whether Minecraft can be a suitable environment in which to become familiar with fire DRRR. The objectives are achieved by utilizing Minecraft to simulate fire scenarios, which allows the participants to freely act upon and practice fire DRRR. The experiment was divided into a grounding phase and a validation phase, in which the researchers observed the performance of the participants in the simulation. Pre-simulation and post-simulation surveys were given to assess the change in participants' perception of being able to utilize fire DRRR procedures and of their vulnerabilities. A paired t-test was utilized, showing significant differences between the pre-simulation and post-simulation survey scores, thus indicating improved judgment of DRRR and lessened vulnerability in the event of encountering a fire hazard. This research poses a model for future research, which can gather more participants and dwell on more complex code beyond command blocks, into the code of Minecraft itself.
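
A minimal sketch of the paired t-test on pre- and post-simulation scores is shown below; the scores are illustrative, not the participants' responses.

```python
# Hedged sketch: paired t-test on illustrative survey scores.
from scipy import stats

pre = [12, 15, 14, 10, 13, 16, 11, 14, 12, 15]    # pre-simulation scores
post = [16, 18, 17, 14, 15, 19, 15, 17, 16, 18]   # post-simulation scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")     # significant -> improved readiness
```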

Keywords: minecraft, DRRR, fire, disaster, simulation

Procedia PDF Downloads 127
507 Autistic Traits and Multisensory Integration–Using a Size-Weight Illusion Paradigm

Authors: Man Wai Lei, Charles Mark Zaroff

Abstract:

Objective: A majority of studies suggest that people with Autism Spectrum Disorder (ASD) have multisensory integration deficits. However, normal and even supranormal multisensory integration abilities have also been reported. Additionally, little of this work has been undertaken utilizing a dimensional conceptualization of ASD, i.e., a broader autism phenotype. Utilizing methodology that controls for common potential confounds, the current study aimed to examine whether deficits in multisensory integration are associated with ASD traits in a non-clinical population. The contribution of affective versus non-affective components of sensory hypersensitivity to multisensory integration was also examined. Methods: Participants were 147 undergraduate university students in Macau, a Special Administrative Region of China, of Chinese ethnicity, aged 16 to 21 (mean age = 19.13; SD = 1.07). Participants completed the Autism-Spectrum Quotient, the Sensory Perception Quotient, and the Adolescent/Adult Sensory Profile, in order to measure ASD traits and the non-affective and affective aspects of sensory/perceptual hypersensitivity, respectively. In order to explore multisensory integration across visual and haptic domains, participants were asked to judge which of two equally weighted but differently sized cylinders was heavier, as a means of detecting the presence of the size-weight illusion (SWI). Results: ASD trait level was significantly and negatively correlated with susceptibility to the SWI (p < 0.05); this correlation was not associated with either accuracy in weight discrimination or gender. Examining the top decile of the non-normally distributed SWI scores revealed a significant negative association with sensation avoiding, but not with other aspects of affective or non-affective sensory hypersensitivity. Conclusion and Implications: Within the normal population, a greater degree of ASD traits is associated with a lower likelihood of multisensory integration, echoing what is often found in individuals with a clinical diagnosis of ASD and providing further evidence for the dimensional nature of this disorder. This tendency appears to be associated with dysphoric emotional reactions to sensory input.
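
Given the non-normal SWI scores noted above, the correlational analysis can be sketched with a rank-based correlation as below; the scores are illustrative only.

```python
# Hedged sketch: rank-based correlation between ASD traits and SWI susceptibility.
from scipy import stats

aq_scores = [12, 18, 22, 25, 27, 30, 33, 36]              # Autism-Spectrum Quotient
swi_susceptibility = [0.9, 0.8, 0.85, 0.6, 0.55, 0.5, 0.4, 0.35]

rho, p = stats.spearmanr(aq_scores, swi_susceptibility)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")            # negative association expected
```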

Keywords: Autism Spectrum Disorder, dimensional, multisensory integration, size-weight illusion

Procedia PDF Downloads 478
506 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may be of irregular, digit, or character shape. Objects and internal objects are quite difficult to identify and extract when the image contains a large number of clusters. Estimation results are easily obtained by identifying the sub-regional objects with the SASK algorithm. The focus is mainly on recognizing the number of internal objects in a given image in a shadow-free and error-free manner. The hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally hull detection. Detecting the sub-regional hulls can improve machine learning capability in the detection of characters, and the method can also be extended to hull recognition in irregularly shaped objects, such as black holes in space exploration imagery, using their intensities. Layered hulls are those with structured layers inside; they are useful in military and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm helps identify such regions and can support decision processes (e.g., clearing traffic or identifying the number of opposing persons in wartime).
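
As a generic stand-in (not the SASK algorithm itself), the three-step pipeline named above can be sketched with OpenCV: pre-processing, boundary extraction, and hull detection; the input file name is hypothetical.

```python
# Hedged sketch: threshold -> contours -> convex hulls on a handwritten image.
import cv2

img = cv2.imread("handwritten_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if img is None:
    raise SystemExit("Provide a sample handwritten image to run this sketch.")

_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)   # pre-processing
contours, _ = cv2.findContours(binary, cv2.RETR_CCOMP,
                               cv2.CHAIN_APPROX_SIMPLE)              # boundary extraction
hulls = [cv2.convexHull(c) for c in contours]                        # hull detection
print(f"{len(hulls)} hull region(s) detected, including internal objects")
```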

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 342
505 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumed by Networks-on-Chip. Researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we choose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34%, and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
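
Booksim2.0 itself is C++, but the payoff of the loop-interchange optimization can be illustrated in a few lines: traversing a row-major array along rows is cache-friendly, while traversing it along columns is not. The timings below are indicative only and do not reproduce the Booksim2.0 measurements.

```python
# Hedged sketch: row-major vs column-major traversal cost on a C-order array.
import time
import numpy as np

a = np.zeros((4000, 4000))            # row-major (C-order) array, ~128 MB

def sum_rows(m):                      # inner access walks contiguous memory
    s = 0.0
    for i in range(m.shape[0]):
        s += m[i, :].sum()
    return s

def sum_cols(m):                      # inner access strides across rows
    s = 0.0
    for j in range(m.shape[1]):
        s += m[:, j].sum()
    return s

for fn in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    fn(a)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f} s")
```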

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 189
504 The Influence of Immunity on the Behavior and Dignity of Judges

Authors: D. Avnieli

Abstract:

Immunity of judges from liability represents a departure from the principle that all are equal under the law and that victims may be granted compensation from their offenders. The purpose of the study is to determine whether judicial immunity coincides with the need to ensure the existence of a highly independent and incorruptible judiciary. Judges are immune from civil and criminal liability for their judicial acts. Judicial immunity is justified by the need to maintain the complete independence and discretion of the judiciary. Scholars and judges believe that absolute immunity is needed to shield judges from pressures, threats, or outside interference. It is commonly accepted that judges should be free to perform their judicial role in accordance with their assessment of the facts and their understanding of the law, without any restrictions, influences, inducements, or interferences. In most countries, immunity applies when judges act in excess of jurisdiction. In some countries, it applies even when they act maliciously or corruptly. The only exception to absolute immunity applicable in all judicial systems is when judges act without jurisdiction over the subject matter. The Israeli Supreme Court recently decided to embrace absolute immunity and strike out the lawsuit of a refugee who was unlawfully incarcerated. The Court ruled that the plaintiff cannot sue the State or the judge for damages. The questions of malice, dignity, and public scrutiny were not discussed. This paper, based on a comparative analysis of many cases, aims to determine whether immunity affects the dignity and behavior of judges. It demonstrates that most judges maintain their dignity and ethical code of behavior, but sometimes do not hesitate to act consciously in excess of jurisdiction, and in rare cases even corruptly. Therefore, in order to maintain an independent and incorruptible judiciary, immunity should not be applied where judges act consciously in excess of jurisdiction or with malicious incentives.

Keywords: incorruptible judiciary, immunity, independent, judicial, judges, jurisdiction

Procedia PDF Downloads 99
503 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is also rising to handle more complex processing and operations. However, operating systems, which provide the soul of the computer, stalled in their development for a time. In the face of the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System is too simple and leaves little room for innovation, so it is not a good choice. And MacOS is a special operating system for Apple computers that cannot be widely used on personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems and is composed of many modular kernel components, which makes its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good networking functions. Linux supports multiple users, and each user has no influence on other users' files. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system, and users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different versions have been released, suitable for community use and commercial use. The Linux system has good security because it relies on its file permission system. However, as vulnerabilities and threats are constantly evolving, the security of the operating system in use also needs to be paid more attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 103