Search results for: mapping code ultrawideband
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2474

1874 Behavior Factors Evaluation for Reinforced Concrete Structures

Authors: Muhammad Rizwan, Naveed Ahmad, Akhtar Naeem Khan

Abstract:

Seismic behavior factors are evaluated for the performance assessment of low-rise reinforced concrete (RC) frame structures, based on an experimental study of unidirectional dynamic shake table testing of two 1/3rd reduced-scale two-storey frames: a code-conforming special moment resisting frame (SMRF) model, and a non-compliant model of similar characteristics but built in low-strength concrete. The models were subjected to a scaled accelerogram record of the 1994 Northridge earthquake, deforming the test models to the final collapse stage in order to obtain the structural response parameters. The fully compliant model exhibited a more stable beam-sway response, experiencing beam flexural yielding and ground-storey column base yielding upon being subjected to 100% of the record. The response modification factors (R factors) obtained for the code-compliant and deficient prototype structures were 7.5 and 4.5, respectively, which is about 10% and 40% less than the UBC-97 specified value for special moment resisting reinforced concrete frame structures.

Keywords: Northridge 1994 earthquake, reinforced concrete frame, response modification factor, shake table testing

Procedia PDF Downloads 169
1873 Collapse Capacity Assessment of Inelastic Structures under Seismic Sequences

Authors: Shahrzad Mohammadi, Ghasem Boshrouei Sharq

Abstract:

All seismic design codes are based on the determination of the design earthquake without taking into account the effects of aftershocks in design practice. In regions with a high level of seismicity, the occurrence of several aftershocks of various magnitudes and different time lags is very likely. This research aims to estimate the collapse capacity of a 10-story steel bundled-tube moment frame subjected to as-recorded seismic sequences. The studied structure is designed according to the seismic regulations of the fourth revision of the Iranian code of practice for the seismic-resistant design of buildings (Code No. 2800). A series of incremental dynamic analyses (IDA) is performed up to the collapse level of the intact structure. Then, in order to demonstrate the effects of aftershock events on the collapse vulnerability of the building, aftershock IDA analyses are carried out. To gain deeper insight, collapse fragility curves are developed and compared for both series. Also, a study on the influence of various ground motion characteristics on collapse capacity is carried out. The results highlight the importance of considering the decisive effects of aftershocks in seismic codes due to their contribution to the occurrence of collapse.
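
As a side note on mechanics, the collapse fragility curves mentioned above are commonly obtained by fitting a lognormal distribution to the collapse intensities extracted from the IDA runs. The sketch below illustrates that fitting step with invented intensity values for an intact and an aftershock series; it is a generic illustration, not the study's data or exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical collapse intensities (Sa in g) from IDA runs: intact vs. aftershock series
im_intact = np.array([1.8, 2.1, 1.6, 2.4, 1.9, 2.2, 1.7, 2.0])
im_aftershock = np.array([1.3, 1.6, 1.1, 1.8, 1.4, 1.5, 1.2, 1.6])

def fit_fragility(ims):
    """Fit a lognormal collapse fragility: median capacity theta and log-std beta."""
    theta = np.exp(np.mean(np.log(ims)))   # median collapse capacity
    beta = np.std(np.log(ims), ddof=1)     # record-to-record dispersion
    return theta, beta

for label, ims in [("intact", im_intact), ("aftershock", im_aftershock)]:
    theta, beta = fit_fragility(ims)
    # Probability of collapse at an example intensity level, e.g. Sa = 1.5 g
    p_col = stats.norm.cdf(np.log(1.5 / theta) / beta)
    print(f"{label}: theta={theta:.2f} g, beta={beta:.2f}, P(collapse|Sa=1.5g)={p_col:.2f}")
```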

Keywords: IDA, aftershock, bundled tube frame, fragility assessment, GM characteristics, as-recorded seismic sequences

Procedia PDF Downloads 136
1872 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
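
As a rough illustration of the data-driven step, the sketch below shows a minimal 3D convolutional encoder-decoder of the kind described, mapping a raw MRI volume to a voxelwise iron-concentration map. It is a simplified stand-in rather than the authors' architecture: the layer sizes, single skip connection, patch size, and mean-squared-error loss are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Minimal 3D encoder-decoder with one skip connection
    (a toy stand-in for the paper's 3D convolutional U-Net)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU())
        self.out = nn.Conv3d(16, 1, 1)  # concat(skip, upsampled) -> voxelwise iron estimate

    def forward(self, x):
        s = self.enc(x)                        # full-resolution features
        y = self.up(self.down(s))              # encode, then decode back to full resolution
        return self.out(torch.cat([s, y], 1))  # skip connection via channel concatenation

# Toy training step on a synthetic 64^3 patch; real training would use the synthetic heads
model = TinyUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mri = torch.randn(2, 1, 64, 64, 64)    # simulated MRI measurement (input)
iron = torch.rand(2, 1, 64, 64, 64)    # known iron concentration map (target)
loss = nn.functional.mse_loss(model(mri), iron)
opt.zero_grad(); loss.backward(); opt.step()
print(f"training loss: {loss.item():.4f}")
```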

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 131
1871 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals

Authors: Shirin Alabdulqader

Abstract:

This research aims to investigate the concept of code-switching (CS) between English and Arabic and the CS practices of Saudi online users, through a Translanguaging (TL) lens, for a more inclusive view of the nature of the data from the study. It employs Digitally Mediated Communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. This project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. The project used a mixed-method approach, combining qualitative and quantitative strands: in the former, data were collected from online chats and interview responses, while in the latter, a questionnaire was employed to understand the frequency of, and relations between, the participants’ linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (both men and women, aged between 20 and 50 years old) who interacted with others online; they provided their online interactions, participated in an interview, and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 Tweets, and were analysed and interpreted at three levels: conversational turn taking and CS; the linguistic description of the data; and CS and persona. This project contributes to the emerging field of analysing online Arabic data systematically, and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels. For conversational turn taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of the code-switched words were content morphemes. The third level of interpretation concerns CS and its relationship with identity; two types of identity were indexed: absolute identity and contextual identity. This study contributes to the DMC literature and bridges some of the existing gaps. Most of the findings, if not all, support the notion of TL: that multiliteracy is one’s ability to decode multimodal communication, and that this multimodality contributes to the meaning. Whether this applies to the online affordances used by monolinguals or multilinguals, and whether it is perceived not only by specific generations but by any online multiliterate, the study provides the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.

Keywords: social media, code-switching, translanguaging, online interaction, Saudi bilinguals

Procedia PDF Downloads 127
1870 Survey Based Data Security Evaluation in Pakistan Financial Institutions against Malicious Attacks

Authors: Naveed Ghani, Samreen Javed

Abstract:

In today’s heterogeneous network environment, there is a growing demand for distrusted clients to jointly execute secure networks to prevent malicious attacks, since the defining task of propagating malicious code is to locate new targets to attack. Residual risk is always present, no matter what solutions are implemented or whatsoever security methodology or standards are adopted. Security is a first and crucial concern in the field of computer science, and the main aim of computer security is the gathering of information over a secure network. There is no need to wonder what all that malware is trying to do: it is trying to steal money through data theft, bank transfers, stolen passwords, or swiped identities. With the help of our survey, we learn about the importance of whitelisting, antimalware programs, security patches, log files, honey pots, and other measures used in banks for financial data protection. There is, however, also a need to implement IPv6 tunneling with cryptographic data transformation, in line with the requirements of new technology, in order to protect the organization from new malware attacks that craft their own messages and send them to the target. In this paper, the writers put forward the idea of implementing IPv6 tunneling sessions for private data transmission from financial organizations whose secrecy needs to be safeguarded.

Keywords: network worms, malware infection, propagating malicious code, virus, security, VPN

Procedia PDF Downloads 352
1869 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels, and many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available, without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or if numerical schemes, the FEM, or computational mechanics are employed.
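
The flavor of the mixed PEM/FORM approach can be conveyed with a minimal sketch: Rosenblueth's two-point estimates propagate the first two moments of the random variables through a (possibly black-box) response function, a well-known distribution is fitted to the resulting moments, and a reliability index follows. The limit state function and variable statistics below are illustrative assumptions, and the normal fit stands in for whichever distribution the study adopted; in the paper's setting, the response would come from a FEM run rather than a formula.

```python
import numpy as np
from itertools import product
from scipy import stats

# Illustrative limit state: G = R - S (capacity minus demand); G < 0 means failure.
def limit_state(R, S):
    return R - S

means = np.array([120.0, 80.0])   # assumed means of R and S
stds = np.array([12.0, 16.0])     # assumed standard deviations

# Rosenblueth's two-point estimates: evaluate G at mean +/- one std of each variable
samples = []
for signs in product([-1.0, 1.0], repeat=2):
    x = means + np.array(signs) * stds
    samples.append(limit_state(*x))
samples = np.array(samples)

mu_G = samples.mean()             # first moment of the LSF (equal weights 1/2^n)
sigma_G = samples.std(ddof=0)     # second moment of the LSF

# Fit the moments to a well-known distribution (here: normal), then a FORM-style index
beta = mu_G / sigma_G
pf = stats.norm.cdf(-beta)
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.2e}")
```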

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 344
1868 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach

Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf

Abstract:

The ecosystem services (ES) concept has helped draw attention to the benefits ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps in undertaking conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize, and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods, and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES, and for the importance attributed to each service, were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity for provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands, and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES showed that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly of regulation and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies, and wetlands in those zones. This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders, together with the spatial identification of areas with a high capacity to provide services. These findings could be helpful as a reference for planners and policymakers in developing landscape management strategies oriented toward preserving the supply of services in both regions.
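
The WLC step has a compact numerical form: each pixel's overall supply is the weighted sum of the expert scores of its land-cover type across services. A minimal sketch, with invented cover types, scores, and weights, might look like this:

```python
import numpy as np

# Toy land-use/cover raster (codes: 0=urban, 1=native forest, 2=wetland, 3=bare)
landcover = np.array([[1, 1, 0],
                      [2, 3, 0],
                      [1, 2, 3]])

# Expert scores: supply capacity of each cover type for three example service groups
# (rows: cover type; columns: provisioning, regulating, cultural), scale 0-5
scores = np.array([[0.5, 0.5, 1.0],   # urban
                   [4.0, 5.0, 4.0],   # native forest
                   [3.0, 5.0, 3.5],   # wetland
                   [0.5, 1.0, 0.5]])  # bare

weights = np.array([0.3, 0.5, 0.2])   # expert-derived importance of each service group

# Weighted linear combination: overall supply = sum_i w_i * score_i per pixel
overall = (scores[landcover] * weights).sum(axis=-1)
print(overall)
```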

Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization

Procedia PDF Downloads 124
1867 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through the utilization of a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of which was equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, and the detector system records the gamma-ray count. The positions of the robots and the corresponding counts of the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove to be valuable in designing different scenarios and intelligent systems before initiating any experiments.
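
The fusion idea can be pictured with a small sketch: each robot's tracked trajectory yields a predicted count-rate signature (here a simple inverse-square falloff, an assumption standing in for the MCNPX-modeled response), and the robot whose signature correlates best with the measured counts is flagged as carrying the source. All values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tracked XY positions (m) of three robots over T frames, from computer vision
T = 200
tracks = {f"robot{i}": np.cumsum(rng.normal(0, 0.05, (T, 2)), axis=0) + rng.uniform(1, 4, 2)
          for i in range(3)}
detector_pos = np.array([0.0, 0.0])

# Simulated detector record: counts dominated by robot1 carrying the source (1/r^2 falloff)
r_true = np.linalg.norm(tracks["robot1"] - detector_pos, axis=1)
counts = rng.poisson(5000.0 / r_true**2)

# Data fusion: correlate measured counts with each robot's predicted 1/r^2 signature
def score(track):
    r = np.linalg.norm(track - detector_pos, axis=1)
    expected = 1.0 / r**2
    return np.corrcoef(expected, counts)[0, 1]

scores = {name: score(tr) for name, tr in tracks.items()}
print(max(scores, key=scores.get), scores)   # robot with the best-matching signature
```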

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 113
1866 Learning Mandarin Chinese as a Foreign Language in a Bilingual Context: Adult Learners’ Perceptions of the Use of L1 Maltese and L2 English in Mandarin Chinese Lessons in Malta

Authors: Christiana Gauci-Sciberras

Abstract:

The first language (L1) can be used in foreign language teaching and learning as a pedagogical tool to scaffold new knowledge in the target language (TL) upon linguistic knowledge that the learner already has. In a bilingual context, code-switching between the two languages usually occurs in classrooms; one of the reasons for code-switching is that both languages are used for scaffolding new knowledge. This research paper aims to find out why both the L1 (Maltese) and the L2 (English) are used in the classroom of Mandarin Chinese as a foreign language (CFL) in the bilingual context of Malta. It also aims to find out the learners’ perceptions of the use of a bilingual medium of instruction. Two research methods were used to collect qualitative data: semi-structured interviews with adult learners of Mandarin Chinese, and lesson observations. These two research methods were used so that the data collected in the interviews could be triangulated with the data collected in lesson observations. The L1 (Maltese) is the language of instruction used most often; the teacher and the learners switch to the L2 (English), or to any other foreign language, according to the need at a particular instance during the lesson.

Keywords: Chinese, bilingual, pedagogical purpose of L1 and L2, CFL acquisition

Procedia PDF Downloads 192
1865 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking into account all the external forces. The previous models distribute particles at each time step without considering the local velocity and the number of particles at each node; the present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate the particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
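
The probabilistic redistribution step at the heart of the LBM-CA particle model can be pictured in a drastically simplified form, as below: particle counts at each node are moved to neighboring nodes with probabilities biased by the local fluid velocity. The 2D four-neighbor lattice, the particular probability weights, and the periodic boundaries are assumptions for illustration only; the paper's model uses the D3Q27 lattice and includes external force terms.

```python
import numpy as np

rng = np.random.default_rng(1)
N = np.full((8, 8), 10, dtype=int)        # particle count per lattice node
ux = rng.normal(0.1, 0.02, (8, 8))        # local fluid velocity from the LBM step
uy = rng.normal(0.0, 0.02, (8, 8))

def step(N, ux, uy, dt=1.0):
    """Move particles to 4 neighbors with probabilities biased by local velocity."""
    out = np.zeros_like(N)
    for i in range(N.shape[0]):
        for j in range(N.shape[1]):
            # Move probabilities (stay, +x, -x, +y, -y); clipped and renormalized
            p = np.clip([0.6, 0.1 + ux[i, j]*dt, 0.1 - ux[i, j]*dt,
                         0.1 + uy[i, j]*dt, 0.1 - uy[i, j]*dt], 0, None)
            p /= p.sum()
            moved = rng.multinomial(N[i, j], p)  # each particle picks one direction
            out[i, j] += moved[0]
            out[i, (j + 1) % 8] += moved[1]      # periodic boundaries for brevity
            out[i, (j - 1) % 8] += moved[2]
            out[(i + 1) % 8, j] += moved[3]
            out[(i - 1) % 8, j] += moved[4]
    return out

N = step(N, ux, uy)
print(N.sum(), "particles conserved across the redistribution step")
```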

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 204
1864 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide an opportunity, in terms of accessibility and computational power, for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), the Built-up Index (BUI), and the Modified Built-up Index (MBUI), and these indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in the total built-up area decreased from 29% to 0.7%, after incorporating night-time light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with night-time light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
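
A minimal sketch of the index computation in GEE's Python API is shown below. The NDBI band pairing for Landsat 8 Collection 2 surface reflectance and the simple BUI = NDBI − NDVI formulation are common conventions, but the composite period, region geometry, thresholds, and the MBUI formulation actually used by the authors are assumptions here.

```python
import ee
ee.Initialize()

# Median Landsat 8 surface reflectance composite over an assumed EEC-area geometry
region = ee.Geometry.Rectangle([100.9, 12.5, 102.1, 13.7])  # rough EEC bounds (assumed)
l8 = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
      .filterBounds(region)
      .filterDate("2019-01-01", "2019-12-31")
      .median())

# NDBI = (SWIR1 - NIR) / (SWIR1 + NIR); NDVI = (NIR - Red) / (NIR + Red)
ndbi = l8.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")
ndvi = l8.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
bui = ndbi.subtract(ndvi).rename("BUI")  # one common Built-up Index formulation

# VIIRS DNB monthly night-time lights, used to mask out dark (non-built) pixels
viirs = (ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG")
         .filterDate("2019-01-01", "2019-12-31")
         .select("avg_rad").median())
builtup = bui.gt(0).And(viirs.gt(1))  # illustrative thresholds, not the paper's

print(builtup.bandNames().getInfo())
```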

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 119
1863 Mapping Thermal Properties Using Resistivity, Lithology and Thermal Conductivity Measurements

Authors: Riccardo Pasquali, Keith Harlin, Mark Muller

Abstract:

The ShallowTherm project is focussed on developing and applying a methodology for extrapolating relatively sparsely sampled thermal conductivity measurements across Ireland using mapped Litho-Electrical (LE) units. The primary data used consist of electrical resistivities derived from the Geological Survey Ireland Tellus airborne electromagnetic dataset, GIS-based maps of Irish geology, and rock thermal conductivities derived from both the current Irish Ground Thermal Properties (IGTP) database and a new programme of sampling and laboratory measurement. The workflow has been developed across three case-study areas that sample a range of different calcareous, arenaceous, argillaceous, and volcanic lithologies. Statistical analysis of resistivity data from individual geological formations has been assessed and integrated with detailed lithological descriptions to define distinct LE units. Thermal conductivity measurements from core and hand samples have been acquired for every geological formation within each study area. The variability and consistency of thermal conductivity measurements within each LE unit are examined with the aim of defining a characteristic thermal conductivity (or range of thermal conductivities) for each LE unit. Mapping of LE units, coupled with characteristic thermal conductivities, provides a method of defining thermal conductivity properties at a regional scale and facilitates the design of ground source heat pump closed-loop collectors.
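
The final assignment step can be pictured with a small sketch like the one below: measured conductivities are grouped by LE unit, a characteristic value is derived for each (here the median with an interquartile spread, an assumed choice of statistic), and mapped units pick up that value. The unit names and numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical thermal conductivity measurements (W/m/K) tagged by Litho-Electrical unit
df = pd.DataFrame({
    "le_unit": ["limestone_A", "limestone_A", "sandstone_B", "sandstone_B", "shale_C"],
    "k": [2.9, 3.1, 3.8, 4.1, 1.6],
})

# Characteristic conductivity per LE unit: median plus a spread measure
summary = df.groupby("le_unit")["k"].agg(k_median="median",
                                         k_q25=lambda s: s.quantile(0.25),
                                         k_q75=lambda s: s.quantile(0.75))
print(summary)

# Mapped LE units (e.g., Tellus-derived polygons) pick up their characteristic value
mapped_units = pd.Series(["sandstone_B", "shale_C", "limestone_A"])
print(mapped_units.map(summary["k_median"]))
```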

Keywords: thermal conductivity, ground source heat pumps, resistivity, heat exchange, shallow geothermal, Ireland

Procedia PDF Downloads 173
1862 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high-speed inviscid steady ideal air flow around a 2D/axisymmetric body was carried out using the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start with a short script describing the geometry, initial flow state, and boundary conditions, and produce a discretized version of the initial flow state. The main flow simulation program (or solver, as it is sometimes called) is mb_cns. It takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time, and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or PostScript plots of flow quantities such as pressure, temperature, and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain for the current problem (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (Riemann solver, AUSMDV flux splitting, and the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 424
1861 Applications of Space Technology in Flood Risk Mapping in Parts of Haryana State, India

Authors: B. S. Chaudhary

Abstract:

The severity and frequency of different disasters across the globe have been increasing in recent years. India faces disasters in the form of droughts, cyclones, earthquakes, landslides, and floods. One of the major causes of disasters in northern India is flooding, which brings great losses and extensive damage to agricultural crops, property, and human and animal life, causing environmental imbalances in places. The annual global figure for losses due to floods runs to over 2 billion dollars. India is a vast country with wide variations in climate and topography. Due to widespread and heavy rainfall during the monsoon months, floods of varying magnitude occur all over the country from June to September. The magnitude depends upon the intensity of rainfall, its duration, and the ground conditions at the time of rainfall. Haryana, one of the agriculturally dominated northern states, also suffers from a number of disasters such as floods, desertification, soil erosion, and land degradation. Earthquakes also occur frequently, but of small magnitude, and so do not cause much concern or damage. Most of the damage in Haryana is due to floods, which occurred in 1978, 1988, 1993, 1995, 1998, and 2010, to mention a few. The present paper deals with Remote Sensing and GIS applications in preparing flood risk maps in parts of Haryana State, India. Satellite data of various years have been used for mapping flood-affected areas. The flooded areas have been interpreted both visually and digitally, and two classes, flooded and receded water/wet areas, have been identified for each year. These have been analyzed in a GIS environment to prepare the risk maps, which show areas of high, moderate, and low risk depending on the frequency with which flooding was witnessed. The floods leave a trail of suffering in the form of unhygienic conditions due to improper sanitation, water logging, filth littered in the area, degradation of materials, and unsafe drinking water, making people prone to many types of diseases in the short and long run. Attempts have also been made to enumerate the causes of floods, and suggestions are given for mitigating the fury of floods and for management issues related to evacuation and safe places nearby.
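
The frequency-based risk classification described above reduces to a simple raster operation, sketched below with synthetic flood masks; the class thresholds are assumptions for illustration, not the ones used in the paper.

```python
import numpy as np

# Hypothetical flood masks (1 = flooded) interpreted from satellite images of six years
years = [1978, 1988, 1993, 1995, 1998, 2010]
rng = np.random.default_rng(2)
masks = rng.integers(0, 2, size=(len(years), 5, 5))  # stand-in for classified rasters

# Flood frequency per pixel, then risk classes by how often flooding was witnessed
frequency = masks.sum(axis=0)
risk = np.digitize(frequency, bins=[2, 4])  # 0 = low (<2), 1 = moderate (2-3), 2 = high (>=4)
labels = np.array(["low", "moderate", "high"])
print(labels[risk])
```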

Keywords: flood mapping, GIS, Haryana, India, remote sensing, space technology

Procedia PDF Downloads 208
1860 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and historical disaster information can be retrieved using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in structured and unstructured report form, existing as handouts or hard copies of reports. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, the Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition from unstructured data is an important element in the realm of smart disaster management. In this work, a character recognition rate of over 90% was achieved for Korean characters by using an upgraded OCR. The recognition rate depends on the fonts, size, and special symbols of the characters, and it was improved through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system, which covers the storage and retrieval of structured information across the entire disaster cycle, such as historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
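
A minimal version of the OCR conversion step might look like the sketch below, using the open-source Tesseract engine via pytesseract as a stand-in for the authors' upgraded OCR. The file name, the Korean language pack, and the keyword-to-code table are assumptions for illustration.

```python
from PIL import Image
import pytesseract

# Convert a scanned disaster report (hypothetical file) into electronic text.
# lang="kor" selects Tesseract's Korean model; requires the kor traineddata pack.
image = Image.open("disaster_report_scan.png")
text = pytesseract.image_to_string(image, lang="kor")

# Downstream, the extracted text would be tagged with disaster codes and stored;
# a crude keyword-based code assignment (illustrative only):
codes = {"홍수": "FLD", "지진": "EQK", "태풍": "TYP"}  # flood, earthquake, typhoon
assigned = [code for kw, code in codes.items() if kw in text]
print(assigned or ["UNCODED"])
```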

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 122
1859 Review and Comparison of Iran's Sixteenth Topic of the Building with the Ranking System of the Water Sector Lead to Improve the Criteria of the Sixteenth Topic

Authors: O. Fatemi

Abstract:

Considering the growing building construction industry in developing countries and the concept of sustainable development, as well as the importance of taking care of future generations, codifying a building scoring system based on environmental criteria has always been a subject for discussion. The existing systems cannot be used in all regions, for several reasons, including but not limited to variety in regional variables. In this article, the most important common global environmental scoring systems, LEED (Leadership in Energy and Environmental Design), BREEAM (Building Research Establishment Environmental Assessment Method), and CASBEE (Comprehensive Assessment System for Built Environment Efficiency), used in the USA, UK, and Japan, respectively, are discussed and compared, with a special focus on CASBEE, with respect to credit assignment (weighting and scoring systems) as well as the sustainable development criteria in each system. Then, converging and distinct fields of the foregoing systems are examined against the National Iranian Building Code, and the common credits in the said systems not mentioned in the National Iranian Building Code are identified. These credits, which are generally included in well-known fundamental principles of sustainable development, may be considered as candidate options for an Iranian building environmental scoring system. It is suggested that one of the globally and commonly accepted systems be chosen, considering national priorities, in order to offer an effective method for environmental scoring of buildings; then, a part of the credits would be added and/or removed, or certain credit scores changed, and eventually a new scoring system with a new title developed for the country. Evidently, the building construction industry highly affects the environment, economy, efficiency, and health of the occupants.

Keywords: scoring system, sustainability assessment, water efficiency, national Iranian building code

Procedia PDF Downloads 176
1858 Geophysical Mapping of the Groundwater Aquifer System in Gode Area, Northeastern Hosanna, Ethiopia

Authors: Esubalew Yehualaw Melaku

Abstract:

In this study, two basic geophysical methods are applied to map the groundwater aquifer system in the Gode area along the Guder River, northeast of Hosanna town, near the western margin of the Central Main Ethiopian Rift. The main target of the study is to map the potential aquifer zone and investigate the groundwater potential for current and future development of the resource in the Gode area. The geophysical methods employed are the Vertical Electrical Sounding (VES) and magnetic survey techniques. Electrical sounding was used to examine and map the depth to the potential aquifer zone of the groundwater and its distribution over the area. On the other hand, the magnetic survey was used to delineate contacts between lithologic units and geological structures. The 2D magnetic modeling and the geoelectric sections are used to identify weak zones, which control the groundwater flow and storage system. The geophysical survey comprises twelve VES readings collected using a Schlumberger array along six profile lines, and more than four hundred (400) magnetic readings at about 10 m station intervals along four profiles and 20 m intervals along three random profiles. The study revealed that the potential aquifer in the area is found at depths ranging from 45 m to 92 m. This corresponds to the response of the highly weathered/fractured ignimbrite and pumice layer with sandy soil, which is the main water-bearing horizon. Overall, the neighborhoods of four VES points (VES-2, VES-3, VES-10, and VES-11) show good water-bearing zones in the study area.
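
For reference, the apparent resistivity measured in a Schlumberger sounding of the kind used here follows the standard textbook relation below, where AB is the current-electrode spacing, MN the potential-electrode spacing, ΔV the measured potential difference, and I the injected current; this is the general relation, not a formula quoted from the study.

```latex
\rho_a \;=\; K \,\frac{\Delta V}{I},
\qquad
K \;=\; \frac{\pi\left[\left(\tfrac{AB}{2}\right)^{2}-\left(\tfrac{MN}{2}\right)^{2}\right]}{MN}
```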

Keywords: vertical electrical sounding, magnetic survey, aquifer, groundwater potential

Procedia PDF Downloads 122
1857 Family Homicide: A Comparison of Rural and Urban Communities in California

Authors: Bohsiu Wu

Abstract:

This study compares the differences in social dynamics between rural and urban areas in California to explain homicides involving family members. It is hypothesized that rural homicides are better explained by social isolation and a lack of intervention resources, whereas urban homicides are attributed to social disadvantage factors. Several critical social dynamics, including social isolation, social disadvantage, acculturation, and intervention resources, were entered into a hierarchical linear model (HLM) to examine whether county-level factors affect how each specific dynamic performs at the ZIP code level, a proxy measure for communities. Homicide data are from the Supplementary Homicide Report for all 58 counties in California from 1997 to 1999. Predictors at both the county and ZIP code levels are derived from the 2000 US census. Preliminary results from an HLM analysis show that social isolation is a significant but moderate predictor of rural family homicide, and various social disadvantage factors significantly account for urban family homicide. Acculturation has little impact. Rurality and urbanity appear to interact with various social dynamics in explaining family homicide. The implications for prevention at both the county and community levels, as well as directions for future study of the differences between rural and urban locales, are explored in the paper.
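
A two-level model of this kind, with ZIP-code communities nested in counties, can be sketched with statsmodels as below; the variable names, the synthetic data, and the random-intercept-only specification are assumptions for illustration, not the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical ZIP-code-level data nested within counties
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "county": rng.integers(0, 58, n),       # 58 California counties
    "isolation": rng.normal(0, 1, n),       # social isolation index
    "disadvantage": rng.normal(0, 1, n),    # social disadvantage index
})
df["family_homicide_rate"] = (0.2 * df["isolation"] + 0.5 * df["disadvantage"]
                              + rng.normal(0, 1, n))

# Random intercept for county; fixed effects for the community-level dynamics
model = smf.mixedlm("family_homicide_rate ~ isolation + disadvantage",
                    df, groups=df["county"])
result = model.fit()
print(result.summary())
```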

Keywords: communities, family, HLM, homicide, rural, urban

Procedia PDF Downloads 322
1856 A Case Study on the Collapse Assessment of the Steel Moment-Frame Setback High-Rise Tower

Authors: Marzie Shahini, Rasoul Mirghaderi

Abstract:

This paper describes collapse assessments of a steel moment-frame high-rise tower with setback irregularity, designed per the 2010 ASCE 7 code, under spectral-matched ground motion records. To estimate a safety margin against life-threatening collapse, an analytical model of the tower is subjected to a suite of ground motions with incremental intensities, from the maximum considered earthquake hazard level to the incipient collapse level. The capability of the structural system to prevent collapse is evaluated based on a methodology similar to that reported in FEMA P695. Structural performance parameters, in terms of maximum/mean inter-story drift ratios, residual drift ratios, and maximum plastic hinge rotations, are also compared to the acceptance criteria recommended by the TBI Guidelines. The results demonstrate that the structural system satisfactorily safeguards the building against collapse. Moreover, for this tower, the code-specified requirements in ASCE 7-10 are reasonably adequate to satisfy the seismic performance criteria developed in the TBI Guidelines for the maximum considered earthquake hazard level.

Keywords: high-rise buildings, set back, residual drift, seismic performance

Procedia PDF Downloads 257
1855 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding as Industry 4.0 is originally a German strategy with supporting strong policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing Institution of the research papers with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 110
1854 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator

Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski

Abstract:

Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, and unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The presence of these technologies in their respective fields will only grow in the coming years. Yet while the technologies themselves have greatly evolved, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband to control either a drone (DJI Spark™) or a robotic arm (Stäubli™ TX40). The methodologies developed in this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomics) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open-sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. The system has been implemented using current ubiquitous technologies, and these software artifacts will be open-sourced for future researchers or practitioners to use in their work.
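
The gesture-to-command mapping at the core of such a system can be sketched as a simple dispatch table, as below. The gesture names follow the Myo gesture set listed in the abstract, but the command callbacks and the simulated event stream are hypothetical stand-ins; no real Myo, DJI, or Stäubli SDK calls are made.

```python
from typing import Callable, Dict

# Hypothetical command callbacks; a real system would call the drone/robot SDK here
def takeoff():        print("drone: takeoff")
def land():           print("drone: land")
def move(direction):  print(f"drone: move {direction}")

# Map armband gestures (as named in the abstract) to commands
GESTURE_COMMANDS: Dict[str, Callable[[], None]] = {
    "fist":           land,
    "finger_spread":  takeoff,
    "swing_hand_in":  lambda: move("left"),
    "swing_hand_out": lambda: move("right"),
    "swing_arm_up":   lambda: move("up"),
    "swing_arm_down": lambda: move("down"),
}

def on_gesture(name: str) -> None:
    """Dispatch a recognized gesture to its mapped command, ignoring unknown ones."""
    action = GESTURE_COMMANDS.get(name)
    if action is not None:
        action()

# Simulated gesture stream from the armband's recognizer
for g in ["finger_spread", "swing_hand_in", "fist", "unknown"]:
    on_gesture(g)
```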

Keywords: human robot interaction, drones, gestures, robotics

Procedia PDF Downloads 154
1853 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons

Authors: Ozgu Hafizoglu

Abstract:

Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through attributional, deep structural, and causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information-pattern transfer within and between domains and disciplines in science. This paper demonstrates the Cognitive Model of Analogy (CMA) as an evolutionary approach to scientific research. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells and their fractal and operational forms within the system itself. Visualization techniques are used to show correspondences. The problem-solving process is divided into distinct phases: encoding, mapping, inference, and response. The system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells: glial cells, axons, axon terminals, and neurons, relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond with the fractal and operational forms of brain cells: glial cells, axons, and neurons.

Keywords: analogy, analogical reasoning, cognitive model, brain and glials

Procedia PDF Downloads 181
1852 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography

Authors: B. Shukir, H. Woo, P. Barzo, D. Kis

Abstract:

Preoperative brain mapping in tumors involving the speech areas has an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold-standard method to localize cortical speech areas preoperatively, but its availability in clinical routine is limited. Diffusion-MRI-based probabilistic tractography is available in head MRI and is used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. Fifteen patients with left frontal tumors were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) cortical atlas was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA) in order to localize those cortical areas which have speech functions: (1) frontal through FA, (2) frontal with FA, (3) temporal to FA, and (4) temporal with FA connections were determined. Thresholds of 1%, 5%, 10%, and 15% were applied. At each level, the numbers of frontal and temporal regions identified by fMRI and by tractography were determined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.6 ± 31.4% and 67.15 ± 23.12%, and specificity was 87.2 ± 10.4% and 75.6 ± 11.37%, for frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique for localizing cortical speech areas. However, its results are not yet dependable enough for the neurosurgeon to rely on during the operation.

Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography

Procedia PDF Downloads 154
1851 Modeling and Analysis of DFIG Based Wind Power System Using Instantaneous Power Components

Authors: Jaimala Ghambir, Tilak Thakur, Puneet Chawla

Abstract:

As per the statistical data, the doubly-fed induction generator (DFIG) based wind turbine, with variable speed and variable pitch control, is the most common wind turbine in the growing wind market. This machine is usually used in grid-connected wind energy conversion systems to satisfy grid code requirements such as grid stability, fault ride-through (FRT), power quality improvement, grid synchronization, and power control. Though the requirements are not fulfilled directly by the machine, a control strategy is used on both the stator and rotor sides, along with power electronic converters, to fulfil the requirements stated above. In satisfying the grid code requirements of a wind turbine, the grid-side converter usually plays a major role, so in order to improve the operating capacity of the wind turbine under critical situations, intensive study of both the machine-side converter control and the grid-side converter control is necessary. In this paper, the DFIG is modeled using power components as variables, and the performance of the DFIG system is analysed under grid voltage fluctuations. The voltage fluctuations are made by intentionally lowering and raising the voltage values in the utility grid for the purpose of simulation, keeping in view different grid disturbances.
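
For reference, the instantaneous active and reactive power components commonly used as model variables in such dq-frame DFIG analyses are the standard expressions below (written with the 3/2 factor of amplitude-invariant dq quantities); these are textbook relations rather than equations quoted from the paper.

```latex
p(t) \;=\; \tfrac{3}{2}\left(v_{d}\,i_{d} + v_{q}\,i_{q}\right),
\qquad
q(t) \;=\; \tfrac{3}{2}\left(v_{q}\,i_{d} - v_{d}\,i_{q}\right)
```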

Keywords: DFIG, dynamic modeling, DPC, sag, swell, voltage fluctuations, FRT

Procedia PDF Downloads 459
1850 Fusion Models for Cyber Threat Defense: Integrating Clustering, Random Forests, and Support Vector Machines to Against Windows Malware

Authors: Azita Ramezani, Atousa Ramezani

Abstract:

In the ever-escalating landscape of Windows malware, the necessity for pioneering defense strategies becomes undeniable. This study introduces an avant-garde approach fusing the capabilities of clustering, random forests, and support vector machines (SVM) to combat the intricate web of cyber threats. Our fusion model triumphs with a staggering accuracy of 98.67% and an equally formidable F1 score of 98.68%, a testament to its effectiveness in the realm of Windows malware defense. By deciphering the intricate patterns within malicious code, our model not only raises the bar for detection precision but also redefines the paradigm of cybersecurity preparedness. This breakthrough underscores the potential embedded in the fusion of diverse analytical methodologies and signals a paradigm shift in fortifying against the relentless evolution of malicious Windows threats. As we traverse the dynamic cybersecurity terrain, this research serves as a beacon illuminating the path toward a resilient future where innovative fusion models stand at the forefront of cyber threat defense.
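
One plausible reading of such a fusion is sketched below: cluster memberships are appended as an engineered feature, and a random forest and an SVM are combined in a soft-voting ensemble. The synthetic dataset, feature pipeline, and ensemble settings are assumptions for illustration, not the authors' published pipeline, so the printed scores will not reproduce the reported 98.67% accuracy.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for extracted malware features (binary task: malicious vs. benign)
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Clustering stage: append cluster membership as an extra engineered feature
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_tr)
X_tr_f = np.column_stack([X_tr, km.predict(X_tr)])
X_te_f = np.column_stack([X_te, km.predict(X_te)])

# Fusion stage: soft-voting ensemble of a random forest and an RBF SVM
fusion = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)))],
    voting="soft")
fusion.fit(X_tr_f, y_tr)

pred = fusion.predict(X_te_f)
print(f"accuracy={accuracy_score(y_te, pred):.4f}, f1={f1_score(y_te, pred):.4f}")
```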

Keywords: fusion models, cyber threat defense, windows malware, clustering, random forests, support vector machines (SVM), accuracy, f1-score, cybersecurity, malicious code detection

Procedia PDF Downloads 66
1849 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking

Authors: Peter U. Eze, P. Udaya, Robin J. Evans

Abstract:

Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one of the applications where the embedded patient record needs to be extracted with accuracy and the medical image integrity verified. In this research paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W, and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates by more than a small value, ε, from p. The major features of our new scheme include: (1) improving watermark detection accuracy for high-pixel-depth medical images by reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.
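
A toy version of the embed/verify cycle implied by this description is sketched below in NumPy. The bipolar spreading code, the additive adjustment rule, the ±p sign convention for the bit, and the parameter values are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64                                  # pixels per sub-block (e.g., an 8x8 block, flattened)
W = rng.choice([-1.0, 1.0], size=N)     # secret bipolar spreading code
p, eps = 12.0, 1.0                      # target constant correlation and tolerance (secret)

def corr(block):
    return float(block @ W) / N

def embed_bit(block, bit):
    """Shift the block along W so its correlation becomes exactly +p (bit 1) or -p (bit 0)."""
    target = p if bit else -p
    return block + (target - corr(block)) * W   # since (W @ W)/N == 1, this hits the target

def verify(block):
    """Recover the bit, and flag tampering if |correlation| strays from p by more than eps."""
    c = corr(block)
    return {"bit": int(c > 0), "tampered": abs(abs(c) - p) > eps}

block = rng.normal(128, 20, N)           # one sub-block of a medical image (toy values)
wm = embed_bit(block, 1)
print(verify(wm))                        # {'bit': 1, 'tampered': False}
wm[0] += 200.0                           # a strong single-pixel edit shifts corr by 200/64
print(verify(wm))                        # deviation > eps, so tampering is now flagged
```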

Keywords: Constant Correlation, Medical Image, Spread Spectrum, Tamper Detection, Watermarking

Procedia PDF Downloads 185
1848 Penalization of Transnational Crimes in the Domestic Legal Order: The Case of Poland

Authors: Magda Olesiuk-Okomska

Abstract:

The degree of international interdependence has grown significantly. Poland is a party to nearly 1000 binding multilateral treaties, including international legal instruments devoted to criminal matters that oblige the state to penalize certain crimes. The paper presents the results of theoretical research conducted as part of doctoral research. The main hypothesis assumed that there is a separate category of crimes whose penalization Poland is obliged to ensure under international legal instruments; that a catalogue of such crimes and a catalogue of the international legal instruments providing for Poland’s international obligations had never been compiled in the domestic doctrine; and thus that there was no mechanism for monitoring the implementation of such obligations. In the course of the research, a definition of transnational crimes was discussed and confronted with the notions of international crimes, treaty crimes, and cross-border crimes. A list of transnational crimes penalized in the Polish Penal Code as well as in non-code criminal law regulations was compiled, and international legal instruments obliging Poland to criminalize and penalize specific conduct were enumerated and catalogued. This enabled a determination of whether Poland’s international obligations have been implemented in domestic legislation, as well as the formulation of de lege lata and de lege ferenda postulates. The research methods implemented included, inter alia, a dogmatic and legal method, an analytical method, and desk research.

Keywords: international criminal law, transnational crimes, transnational criminal law, treaty crimes

Procedia PDF Downloads 219
1847 Automation of AAA Game Development Using AI

Authors: Branden Heng, Harsheni Siddharthan, Allison Tseng, Paul Toprac, Sarah Abraham, Etienne Vouga

Abstract:

The goal of this project was to evaluate and document the capabilities and limitations of AI tools for empowering small teams to create high-budget, high-profile (AAA) 3D games of the kind typically developed by large studios. Two teams of novice game developers attempted to create two different games using AI and Unreal Engine 5.3. First, the teams evaluated 60 AI art, design, sound, and programming tools by considering their capability, ease of use, cost, and license restrictions. Then, the teams used a shortlist of 12 AI tools for game development. During this process, the following tools were found to be the most productive: (i) ChatGPT 4.0 for both game and narrative concepts and documentation; (ii) Dall-E 3 and OpenArt for concept art; (iii) Beatoven for music drafting; and (iv) ChatGPT 4.0 and GitHub Copilot for generating simple code and for complementing human-made tutorials as an additional learning resource. While current generative AI may appear impressive at first glance, the assets it produces fall short of AAA industry standards. Generative AI tools are helpful when brainstorming ideas such as concept art and basic storylines, but they cannot yet replace human input or creativity. Regarding programming, AI can only effectively generate simple code and act as an additional learning resource. Thus, generative AI tools are, at best, tools to enhance developer productivity rather than a system to replace developers.

Keywords: AAA games, AI, automation tools, game development

Procedia PDF Downloads 15
1846 Legal Provisions on Child Pornography in Bangladesh: A Comparative Study on South Asian Landscape

Authors: Monira Nazmi Jahan, Nusrat Jahan Nishat

Abstract:

'Child pornography' is a sex crime involving illegal images and videos of minors distributed over the Internet, and it has become a growing social concern as the incidence of the crime rises. The major objective of this paper is to identify and examine the laws relating to child pornography in Bangladesh and to compare them with those of other South Asian countries. In Bangladesh, child pornography is prosecuted under the Digital Security Act, 2018, which defines the offence as involving a child in areas of child sexuality or in sexuality and punishes it with 10 years' imprisonment or a fine of 10 lakh taka. In India, the crime is dealt with under the Protection of Children from Sexual Offences Act, 2012 (POCSO), which classifies offenders into separate categories and provides punishments ranging from three years' imprisonment to rigorous imprisonment for life, together with liability to a fine. In the Maldives, the Special Provisions Act to Deal with Child Sex Abuse Offenders (Act No. 12/2009) provides that a person who intentionally runs child prostitution, involves a child in the creation of pornography, or displays a child's sexual organs in pornography shall be punished with 20 to 25 years' imprisonment. Nepal prosecutes this crime through the Act Relating to Children, 2018, under which conviction for using a child in prostitution or sexual services carries imprisonment of up to fifteen years and a fine of up to one hundred fifty thousand rupees. In Pakistan, child pornography is prosecuted under the Pakistan Penal Code Child Abuse Amendment Act, 2016, which provides that a person is guilty of the offence if he involves a child, with or without consent, in such activities, and prescribes two to seven years' imprisonment or a fine of two hundred thousand to seven hundred thousand rupees. In Bhutan, child pornography is not explicitly addressed under municipal law; the Penal Code of Bhutan penalizes all kinds of pornography, including child pornography, under its computer pornography provisions, and the offence is a misdemeanor. Child pornography is also prohibited under the Child Care and Protection Act. In Sri Lanka, the Penal Code de facto criminalizes child pornography, with a penalty of two to ten years' imprisonment and possible liability to a fine. The most troubling situation exists in Afghanistan, where there is no specific law protecting children from pornography even though this serious crime is present. This paper follows a qualitative research method: the primary sources are laws, and the secondary sources are journal articles and newspapers. The conclusion that can be drawn is that, except for Afghanistan, all the South Asian countries studied have laws for controlling this crime, but loopholes remain. India has the most amended provisions. Nepal has no provision for a fine, and Bhutan does not specify any particular punishment. Compared to these countries, Bangladesh has a good piece of legislation; however, it also has room to broaden its laws for controlling child pornography.

Keywords: child abuse, child pornography, life imprisonment, penal code, South Asian countries

Procedia PDF Downloads 221
1845 The Decision-Making Mechanisms of Tax Regulations

Authors: Nino Pailodze, Malkhaz Sulashvili, Vladimer Kekenadze, Tea Khutsishvili, Irma Makharashvili, Aleksandre Kekenadze

Abstract:

Among the important problems that Georgia must solve in the near future, the most important is economic stability, which rests on fiscal policy and the proper definition of its directions. The main source of budget revenue is national income. The state draws on national income through taxes, loans, and emission, with taxes as the principal instrument. Besides the fiscal function of filling the budget, tax systems also serve economic and social development and the regulation of foreign economic relations. Under the Georgian Tax Code, a tax is a mandatory, unconditional monetary payment to the budget made by a taxpayer, based on the necessary, non-equivalent, and gratuitous character of the payment. Taxes are national and local. National taxes are those provided for under the Code, the payment of which is mandatory across the whole territory of Georgia. Local taxes are those provided for under the Code and introduced by normative acts of local self-government representative authorities (within marginal rates), the payment of which is mandatory within the territory of the relevant self-governing unit. National taxes play the leading role in tax systems, but local taxes are also important: a significant part of the budget is formed precisely by means of local taxes. The national taxes are income tax, profit tax, value added tax (VAT), excise tax, and import duty; property tax is a local tax and one of the significant taxes in Georgia. The paper deals with the taxation mechanism operating in Georgia, which has a great influence on financial accounting. Comparing foreign legislation with Georgian legislation, we discuss the opportunity of drawing on foreign experience, and we offer recommendations for improving the tax system in financial accounting. In addition to accounting, which is regulated according to the International Accounting Standards, there is tax accounting, which is regulated by the Tax Code and various legal orders and regulations of the Minister of Finance; compliance with these rules is controlled by the tax authority, the Revenue Service. The tax burden is directly related to the expenditures of the state from its very first day. The fiscal policy of the state comprises both state expenditure and taxation decisions. In order to achieve the best and most effective mobilization of funds, the government's primary task is to decide on the rules of taxation. The function of a tax reveals the substance of the act: taxes have a distribution (fiscal) function as well as control and regulatory functions. Foreign tax systems evolved under the influence of different economic, political, and social conditions, and they differ greatly from each other in their taxes, structure, methods of levying, rates, levels of fiscal authority, tax bases, spheres of action, and tax breaks.

Keywords: international accounting standards, financial accounting, tax systems, financial obligations

Procedia PDF Downloads 238