Search results for: graph code
368 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions
Authors: Shivam Patel, Abdullah Y. Usmani
Abstract:
Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate the unsteady three-dimensional flow field through the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, nature of flow patterns, wall pressure and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy of surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium in which the extended Darcy’s law, including Forchheimer and Brinkman terms, is applicable. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that fluid impingement on the outer wall of the arterial bend leads to an abnormal distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
Keywords: 3D FVM, cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid
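As a rough illustration of the shear-thinning rheology mentioned above, the sketch below evaluates a Carreau-Yasuda viscosity law; the parameter values are typical blood-like constants assumed for the example, not the ones used in the paper, and the function name is ours.

```python
import numpy as np

def carreau_yasuda_viscosity(shear_rate, mu_0=0.056, mu_inf=0.00345,
                             lam=1.902, n=0.3568, a=1.25):
    """Apparent viscosity (Pa.s) of a shear-thinning fluid.

    mu = mu_inf + (mu_0 - mu_inf) * [1 + (lam*gamma_dot)**a]**((n - 1)/a)
    Parameter values here are illustrative blood-like constants, not the
    ones used in the paper (which are not given in the abstract).
    """
    shear_rate = np.asarray(shear_rate, dtype=float)
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

# Viscosity drops as the local shear rate rises (shear thinning):
for gamma_dot in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"gamma_dot = {gamma_dot:7.1f} 1/s -> mu = {carreau_yasuda_viscosity(gamma_dot):.5f} Pa.s")
```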
367 Toppling Failure Analysis of Anti-Dip Bedding Rock Slopes Subjected to Crest Loads
Authors: Chaoyi Sun, Congxin Chen, Yun Zheng, Kaizong Xia, Wei Zhang
Abstract:
Crest loads are often encountered in hydropower, highway, open-pit and other engineering rock slopes. Toppling failure is one of the most common deformation failure types of anti-dip bedding rock slopes. Analysis of such failure in anti-dip bedding rock slopes subjected to crest loads has an important influence on engineering practice. Based on the step-by-step analysis approach proposed by Goodman and Bray, a geo-mechanical model was developed, and the related analysis approach was proposed for the toppling failure of anti-dip bedding rock slopes subjected to crest loads. Using the transfer coefficient method, a formulation was derived for calculating the residual thrust at the slope toe and the support force required to meet the requirements of slope stability under crest loads, which provides a scientific reference for the design and support of such slopes. Through slope examples, the influence of crest loads on the residual thrust and the sliding ratio coefficient was investigated for cases of different block widths and slope cut angles. The results show that there exists a critical block width for such slopes. The influence of crest loads on the residual thrust is non-negligible when the block thickness is smaller than the critical value. Moreover, the influence of crest loads on slope stability increases with the slope cut angle, and the sliding ratio coefficient of anti-dip bedding rock slopes increases with the crest loads. Finally, the theoretical solutions and numerical simulations using the Universal Distinct Element Code (UDEC) were compared, and the consistent results show the applicability of both approaches.
Keywords: anti-dip bedding rock slope, crest loads, stability analysis, toppling failure
Procedia PDF Downloads 179366 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads
Authors: Salah R. Al Zaidee, Ali S. Mahdi
Abstract:
Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each searching loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and searching for the optimum solution. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. Frame topology is assumed as predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with CSI SAP software. These data have been used later in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R2 in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
Keywords: meta-model, objective function, steel frames, seismic analysis, design
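A minimal sketch of the meta-model idea follows: a pure quadratic surrogate (intercept, linear and squared terms, in the sense of SPSS's pure quadratic regression) is fitted by least squares to responses that stand in for the implicit analysis results. All data here are synthetic placeholders, since the study's SAP/SPSS data are not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design variables (e.g., grouped column/beam section indices)
# and an "implicit" objective value (e.g., frame weight) that would normally
# come from a structural analysis run; here both are synthetic stand-ins.
n_samples, n_vars = 60, 4
X = rng.uniform(1.0, 10.0, size=(n_samples, n_vars))
w = 5.0 + X @ np.array([2.0, -1.0, 0.5, 3.0]) + (X**2) @ np.array([0.3, 0.1, -0.05, 0.2])
w += rng.normal(scale=0.5, size=n_samples)          # analysis "noise"

# Pure quadratic meta-model: intercept + linear terms + squared terms
A = np.hstack([np.ones((n_samples, 1)), X, X**2])
coef, *_ = np.linalg.lstsq(A, w, rcond=None)

w_hat = A @ coef
ss_res = np.sum((w - w_hat) ** 2)
ss_tot = np.sum((w - w.mean()) ** 2)
print("R^2 of the explicit surrogate:", 1.0 - ss_res / ss_tot)
```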
365 Comparing Energy Labelling of Buildings in Spain
Authors: Carolina Aparicio-Fernández, Alejandro Vilar Abad, Mar Cañada Soriano, Jose-Luis Vivancos
Abstract:
The building sector is responsible for 40% of the total energy consumption in the European Union (EU). Thus, the implementation of strategies for quantifying and reducing buildings' energy consumption is indispensable for reaching the EU's carbon neutrality and energy efficiency goals. Each Member State has transposed the European Directives according to its own peculiarities: existing technical legislation, constructive solutions, climatic zones, etc. Therefore, in accordance with the Energy Performance of Buildings Directive, Member States have developed different Energy Performance Certificate schemes, using a proposed energy simulation software tool for each national or regional area. Energy Performance Certificates provide powerful and comprehensive information to predict, analyze and improve the energy demand of new and existing buildings. Energy simulation software and databases allow a better understanding of the current constructive reality of the European building stock. However, Energy Performance Certificates still face several issues before they can be considered a reliable and global source of information, since the different calculation tools in use cannot be connected to one another. In this document, the TRNSYS (TRaNsient System Simulation program) software is used to calculate the energy demand of a building, and the result is compared with the energy labelling obtained with the Spanish official software tools. We demonstrate the possibility of using non-official software tools to calculate the Energy Performance Certificate. Thus, this approach could be used throughout the EU to compare the results in all possible cases proposed by the EU Member States. To implement the simulations, an isolated single-family house with different construction solutions is considered. The results are obtained for every climatic zone of the Spanish Technical Building Code.
Keywords: energy demand, energy performance certificate EPBD, TRNSYS, buildings
364 Effect of Ease of Doing Business to Economic Growth among Selected Countries in Asia
Authors: Teodorica G. Ani
Abstract:
Economic activity requires an encouraging regulatory environment and effective rules that are transparent and accessible to all. The World Bank has been publishing the annual Doing Business reports since 2004 to investigate the scope and manner of regulations that enhance business activity and those that constrain it. A streamlined business environment supporting the development of competitive small and medium enterprises (SMEs) may expand employment opportunities and improve the living conditions of low-income households. Asia has emerged as one of the most attractive markets in the world. Economies in East Asia and the Pacific were among the most active in making it easier for local firms to do business. The study aimed to describe the ease of doing business and its effect on economic growth among selected economies in Asia for the year 2014. The study covered 29 economies in East Asia, Southeast Asia, South Asia and Middle Asia. Ease of doing business is measured by the Doing Business indicators (DBI) of the World Bank. The indicators cover ten aspects of the ease of doing business: starting a business, dealing with construction permits, getting electricity, registering property, getting credit, protecting investors, paying taxes, trading across borders, enforcing contracts and resolving insolvency. In the study, Gross Domestic Product (GDP) was used as the proxy variable for economic growth. Descriptive research was the research design used. Graphical analysis was used to describe income and doing business among the selected economies. In addition, multiple regression was used to determine the effect of doing business on economic growth, as sketched below. The study presented the income among the selected economies. The graph showed that China has the highest income while the Maldives produces the lowest, and that observation was supported by the gathered literature. The study also presented the status of the ten indicators of doing business among the selected economies. The graphs showed varying trends in how easy it is to start a business, deal with construction permits and register property. Starting a business is easiest in Singapore, followed by Hong Kong. The study found that the variation in ease of doing business is explained by starting a business, dealing with construction permits and registering property. Moreover, the regression results imply that a one-day increase in the average number of days it takes to complete a procedure will, in general, decrease the value of GDP. The research proposed inputs to policy which may increase the awareness of local government units of different economies of the simplification of the policies of the different components used in measuring doing business.
Keywords: doing business, economic growth, gross domestic product, Asia
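The multiple-regression step can be sketched as follows. The three indicators and the GDP values are synthetic placeholders generated for illustration only, not the World Bank or national-accounts data used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: rows are economies; columns are three Doing Business
# indicators (days to start a business, days for construction permits,
# days to register property). GDP is a synthetic stand-in for the study's
# 2014 figures, generated so that longer procedures are associated with lower GDP.
n = 29
days = rng.uniform(2, 250, size=(n, 3))
gdp = 2000.0 - days @ np.array([8.0, 2.0, 5.0]) + rng.normal(scale=50.0, size=n)

A = np.column_stack([np.ones(n), days])
beta, *_ = np.linalg.lstsq(A, gdp, rcond=None)
print("intercept:", round(beta[0], 1))
print("slopes (change in GDP per extra day, by indicator):", np.round(beta[1:], 2))
```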
363 Carbon Based Classification of Aquaporin Proteins: A New Proposal
Authors: Parul Johri, Mala Trivedi
Abstract:
Major Intrinsic Proteins (MIPs) are actively involved in the passive transport of small polar molecules across the membranes of almost all living organisms. MIPs that specifically transport water molecules are named aquaporins (AQPs). The permeability of membranes is actively controlled by regulating the amount of the different MIPs present, and in some cases also by phosphorylation and dephosphorylation of the channel. Based on sequence similarity, MIPs have been classified into many categories. All of these proteins are made up of the 20 amino acids; the only difference lies in their arrangement. In turn, all 20 amino acids are made up of five basic elements, namely carbon, hydrogen, oxygen, sulphur and nitrogen. These elements are responsible for giving the amino acids the properties of hydrophilicity/hydrophobicity, which play an important role in protein interactions. The hydrophobic amino acids characteristically have a greater number of carbon atoms, as carbon is the main element contributing to hydrophobic interactions in proteins. It is observed that the carbon level of proteins in different species is different. In the present work, we have taken a sample set of 150 aquaporin proteins from the Uniprot database, and a dynamic programming code was written to calculate the carbon percentage for each sequence. This carbon percentage was further used to barcode the aquaporins of animals and plants. The proteins taken from Oryza sativa, Zea mays and Arabidopsis thaliana preferred a carbon percentage of 31.8 to 35, whereas the sequences taken from Mus musculus, Saccharomyces cerevisiae, Homo sapiens, Bos taurus, and Rattus norvegicus preferred a carbon percentage of 31 to 33.7. This clearly demarcates the carbon range in the aquaporin proteins of plant and animal origin. Hence, atom-level analysis of protein sequences can provide better results compared to residue-level comparison.
Keywords: aquaporins, carbon, dynamic programming, MIPs
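A minimal sketch of such a carbon-percentage calculation is given below. The per-residue atom counts use free amino acid formulas; whether the original work counted free-acid or residue (minus water) formulas, and exactly how its dynamic programming routine was organised, is not stated in the abstract, so treat both as assumptions.

```python
# Carbon percentage of a protein sequence: carbon atoms / all atoms * 100.
# Atom counts below are for free amino acids (C, H, N, O, S); this definition
# is an assumption made for illustration.
CARBON = {'A': 3, 'R': 6, 'N': 4, 'D': 4, 'C': 3, 'Q': 5, 'E': 5, 'G': 2,
          'H': 6, 'I': 6, 'L': 6, 'K': 6, 'M': 5, 'F': 9, 'P': 5, 'S': 3,
          'T': 4, 'W': 11, 'Y': 9, 'V': 5}
TOTAL_ATOMS = {'A': 13, 'R': 26, 'N': 17, 'D': 16, 'C': 14, 'Q': 20, 'E': 19,
               'G': 10, 'H': 20, 'I': 22, 'L': 22, 'K': 24, 'M': 20, 'F': 23,
               'P': 17, 'S': 14, 'T': 17, 'W': 27, 'Y': 24, 'V': 19}

def carbon_percentage(sequence: str) -> float:
    """Percentage of carbon atoms relative to all atoms in the sequence."""
    seq = [aa for aa in sequence.upper() if aa in CARBON]
    carbon = sum(CARBON[aa] for aa in seq)
    total = sum(TOTAL_ATOMS[aa] for aa in seq)
    return 100.0 * carbon / total

# Toy fragment, not a real aquaporin sequence:
print(round(carbon_percentage("MASGWELRRLLGLVLLQL"), 2))
```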
362 Fundamental Natural Frequency of Chromite Composite Floor System
Authors: Farhad Abbas Gandomkar, Mona Danesh
Abstract:
This paper aims to determine the Fundamental Natural Frequency (FNF) of a structural composite floor system known as Chromite. To achieve this purpose, the FNFs of the studied panels are determined by developing Finite Element Models (FEMs) in the ABAQUS program. The American Institute of Steel Construction (AISC) Steel Design Guide Series 11 presents a fundamental formula to calculate the FNF of a steel-framed floor system. This formula has been used to verify the results of the FEMs. The variability of the FNF of the studied system is determined under various parameters such as the dimensions of the floor, boundary conditions, rigidity of the main and secondary beams around the floor, thickness of the concrete slab, height of the composite joists, distance between the composite joists, thickness of the top and bottom flanges of the open web steel joists, and the addition of a tie beam perpendicular to the composite joists. The results show that changes in the dimensions of the system, its boundary conditions, the rigidity of the main beam, and the addition of a tie beam significantly change the FNF of the system, by up to 452.9%, 50.8%, -52.2%, and 52.6%, respectively. In addition, increasing the thickness of the concrete slab increases the FNF of the system by up to 10.8%. Furthermore, the results demonstrate that variations in the rigidity of the secondary beam, the height of the composite joists, the distance between the composite joists, and the thickness of the top and bottom flanges of the open web steel joists change the FNF of the studied system insignificantly, by up to -0.02%, -3%, -6.1%, and 0.96%, respectively. Finally, the results of this study help designers predict the occurrence of resonance, assess comfort, and establish design criteria for the studied system.
Keywords: fundamental natural frequency, Chromite composite floor system, finite element method, low and high frequency floors, comfortableness, resonance
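For reference, the AISC Design Guide 11 estimate used for verification can be evaluated as in the sketch below; the relation f_n = 0.18·sqrt(g/Δ), with Δ the midspan deflection under the supported weight, is our reading of the guide's fundamental formula, and the numerical deflection is a made-up example, not one of the paper's panels.

```python
import math

def fundamental_frequency_hz(delta_m: float, g: float = 9.81) -> float:
    """AISC Design Guide 11-style estimate: f_n = 0.18 * sqrt(g / delta),
    where delta is the midspan deflection (m) of the member under the weight
    it supports. Used here only as a verification-style check, as in the
    paper; the FE models provide the reference values."""
    return 0.18 * math.sqrt(g / delta_m)

# Example: a hypothetical 6 mm midspan deflection under the supported weight
print(round(fundamental_frequency_hz(0.006), 2), "Hz")
```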
361 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform
Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail
Abstract:
The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of this data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is described by specific parameters for position, speed and acceleration. Constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it, by knowing the events reported by the DiaMOTO device over time, will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database that will be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage, by installing DiaMOTO devices with unique codes, integrating ADAS and GPS functions, on 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring
360 Frequency of BCR-ABL Fusion Transcript Types with Chronic Myeloid Leukemia by Multiplex Polymerase Chain Reaction in Srinagarind Hospital, Khon Kaen Thailand
Authors: Kanokon Chaicom, Chitima Sirijerachai, Kanchana Chansung, Pinsuda Klangsang, Boonpeng Palaeng, Prajuab Chaimanee, Pimjai Ananta
Abstract:
Chronic myeloid leukemia (CML) is characterized by the consistent involvement of the Philadelphia chromosome (Ph), which is derived from a reciprocal translocation between chromosomes 9 and 22; the main product of the t(9;22)(q34;q11) translocation is found in the leukemic clone of at least 95% of CML patients. There are two major forms of the BCR/ABL fusion gene, both involving ABL exon 2 but including different exons of the BCR gene. The transcripts b2a2 (e13a2) and b3a2 (e14a2) code for a p210 protein. Another fusion gene leads to the expression of an e1a2 transcript, which codes for a p190 protein. Other less common fusion genes are b3a3 or b2a3, which code for a p203 protein, and the e19a2 (c3a2) transcript, which codes for a p230 protein. Their frequency varies in different populations. In this study, we aimed to report the frequency of BCR-ABL fusion transcript types in CML by multiplex PCR (polymerase chain reaction) in Srinagarind Hospital, Khon Kaen, Thailand. Multiplex PCR for BCR-ABL was performed on 58 patients to detect the different types of BCR-ABL transcripts of the t(9;22). All patients examined were positive for some type of BCR/ABL rearrangement. The majority of the patients (93.10%) expressed one of the p210 BCR-ABL transcripts; b3a2 and b2a2 transcripts were detected in 53.45% and 39.65%, respectively. An e1a2 transcript was expressed in 3.75%. Co-expression of p210/p230 was detected in 3.45%. Co-expression of p210/p190 was not detected. Multiplex PCR is useful, saves time, and is reliable in the detection of BCR-ABL transcript types. The frequency of one or other rearrangement in CML varies in different populations.
Keywords: chronic myeloid leukemia, BCR-ABL fusion transcript types, multiplex PCR, frequency of BCR-ABL fusion
359 Effect of Thermal Radiation and Chemical Reaction on MHD Flow of Blood in Stretching Permeable Vessel
Authors: Binyam Teferi
Abstract:
In this paper, a theoretical analysis of blood flow in the presence of thermal radiation and chemical reaction under the influence of a time-dependent magnetic field intensity has been carried out. The unsteady nonlinear partial differential equations of blood flow consider a time-dependent stretching velocity, the energy equation also accounts for a time-dependent vessel wall temperature, and the concentration equation includes a time-dependent blood concentration. The governing nonlinear partial differential equations of motion, energy, and concentration are converted into ordinary differential equations using similarity transformations and solved numerically by applying ode45. A MATLAB code is used to analyze the theoretical results. The effect of physical parameters, viz. the permeability parameter, unsteadiness parameter, Prandtl number, Hartmann number, thermal radiation parameter, chemical reaction parameter, and Schmidt number, on flow variables, viz. the velocity of blood flow in the vessel and the temperature and concentration of blood, has been analyzed and discussed graphically. From the simulation study, the following important results are obtained: the velocity of blood flow increases with increases in both the permeability and unsteadiness parameters. The temperature of the blood at the vessel wall increases as the Prandtl number and Hartmann number increase. The concentration of the blood decreases as the time-dependent chemical reaction parameter and Schmidt number increase.
Keywords: stretching velocity, similarity transformations, time dependent magnetic field intensity, thermal radiation, chemical reaction
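Purely as an illustration of the numerical workflow (the paper's reduced equations, parameter values and boundary conditions are not given in the abstract), the sketch below integrates a representative stretching-sheet-type momentum equation with SciPy's RK45, the counterpart of MATLAB's ode45; the magnetic parameter and the initial guess for f''(0) are assumed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Representative similarity-reduced momentum equation (illustrative only):
#     f''' + f f'' - f'^2 - M f' = 0
# rewritten as a first-order system y = [f, f', f''] and integrated with RK45.
M = 1.0  # assumed magnetic parameter

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp + fp**2 + M * fp]

# f(0)=0, f'(0)=1 follow from a stretching wall; f''(0) is the guess a shooting
# method would normally adjust so that f'(eta) -> 0 far from the wall.
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0, -1.4142], method="RK45", dense_output=True)
eta = np.linspace(0.0, 5.0, 6)
print("f'(eta) =", np.round(sol.sol(eta)[1], 4))
```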
358 Plasma Ion Implantation Study: A Comparison between Tungsten and Tantalum as Plasma Facing Components
Authors: Tahreem Yousaf, Michael P. Bradley, Jerzy A. Szpunar
Abstract:
Currently, nuclear fusion is considered one of the most favorable options for future energy generation, due both to its abundant fuel and its lack of emissions. For fusion power reactors, a major problem will be a suitable material choice for the Plasma Facing Components (PFCs) which will constitute the reactor first wall. Tungsten (W) has advantages as a PFC material because of its high melting point, low vapour pressure, high thermal conductivity and low retention of hydrogen isotopes. However, several adverse effects such as embrittlement, melting and morphological evolution have been observed in W when it is bombarded by low-energy, high-fluence helium (He) and deuterium (D) ions, under conditions simulating those adjacent to a fusion plasma. Recently, tantalum (Ta) has also been investigated as a PFC and shows better resistance to nanostructured fuzz than W under simulated fusion plasma conditions. However, the retention of D ions was found to be higher in Ta than in W. Preparatory to plasma-based ion implantation studies, the effect of D and He ion impact on W and Ta is predicted using the Stopping and Range of Ions in Matter (SRIM) code. SRIM provided theoretical results regarding the projected range, ion concentration (at.%) and displacement damage (dpa) in W and Ta. The projected ranges for W under irradiation by He and D ions with an energy of 3 keV and 1× fluence are determined to be 75 Å and 135 Å, and for Ta 85 Å and 155 Å, respectively. For both W and Ta samples, the maximum implanted peak for helium is predicted to be ~5.3 at.% at 12 nm, and for D ions the concentration peak is located near 3.1 at.% at 25 nm. For the same parameters, the displacement damage for He ions is ~0.65 dpa in W and ~0.35 dpa in Ta at 5 nm. For D ions, the displacement damage is ~0.20 dpa at 8 nm for W and ~0.175 dpa at 7 nm for Ta. The mean implantation depth is the same for W and Ta, i.e. ~40 nm for He ions and ~70 nm for D ions. From these results, we conclude that the retention of D is higher than that of He ions, but the damage is lower for Ta than for W. Further investigation is still in progress regarding W and Ta.
Keywords: helium and deuterium ion impact, plasma facing components, SRIM simulation, tungsten, tantalum
357 Security Report Profiling for Mobile Banking Applications in Indonesia Based on OWASP Mobile Top 10-2016
Authors: Bambang Novianto, Rizal Aditya Herdianto, Raphael Bianco Huwae, Afifah, Alfonso Brolin Sihite, Rudi Lumanto
Abstract:
The mobile banking application is a type of mobile application that is growing rapidly. This is caused by the ease of service and the time savings in making transactions. On the other hand, this certainly poses challenges in security issues. The use of mobile banking cannot be separated from the cyberattacks that may occur, which can result in the theft of sensitive information or financial loss. Financial loss and the theft of sensitive information are the outcomes most to be avoided because, besides harming the user, they can also cause a loss of customer trust in a bank. Cyberattacks that are often carried out against mobile applications include phishing, hacking, theft, misuse of data, etc. A cyberattack can occur when a vulnerability is successfully exploited. The OWASP Mobile Top 10 records the ten vulnerabilities most commonly found in mobile applications. In addition, Android permissions also have the potential to cause vulnerabilities. Therefore, an overview of the security profile of mobile banking applications is urgently needed, and it is expected to serve as a consideration for the parties involved in improving security. In this study, an experiment has been conducted to capture the profile of mobile banking applications in Indonesia based on Android permissions and the OWASP Mobile Top 10 2016. The results show that there are six basic vulnerabilities based on the OWASP Mobile Top 10 that are most commonly found in mobile banking applications in Indonesia, i.e. M1: Improper Platform Usage, M2: Insecure Data Storage, M3: Insecure Communication, M5: Insufficient Cryptography, M7: Client Code Quality, and M9: Reverse Engineering. The most commonly requested Android permissions are internet access, network state access, and read phone status.
Keywords: mobile banking application, OWASP mobile top 10 2016, android permission, sensitive information, financial loss
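One building block of such permission profiling is simply enumerating the permissions an app requests. The sketch below assumes the APK has already been decoded (for example with apktool) so that a readable AndroidManifest.xml exists; the file path and the 'flagged' set are illustrative, and this is not the authors' actual tooling.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_path: str) -> list[str]:
    """List the android:name values of all <uses-permission> entries."""
    root = ET.parse(manifest_path).getroot()
    return [elem.attrib.get(f"{ANDROID_NS}name", "")
            for elem in root.iter("uses-permission")]

# Permissions the abstract reports as most commonly requested (illustrative set)
flagged = {"android.permission.INTERNET",
           "android.permission.ACCESS_NETWORK_STATE",
           "android.permission.READ_PHONE_STATE"}

perms = set(requested_permissions("decoded_app/AndroidManifest.xml"))  # hypothetical path
print("requested:", sorted(perms))
print("overlap with the commonly seen set:", sorted(perms & flagged))
```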
356 Ethnolinguistic Otherness: The Vedda Language (Baasapojja) of Indigenous Adivasi (Veddas) of Dambana in Sri Lanka
Authors: Nimasha Malalasekera
Abstract:
Working with the indigenous Adivasi (Vedda) community of Dambana in the district of Badulla in Sri Lanka, this research documents linguistic data to address language and cultural endangerment. The ancestral language of the Adivasi has undergone sustained restructuration over a long historical period due to its contact with Sinhala, an Indo-Aryan language spoken by the majority Sinhalese. The Vedda language is highly endangered today. At present, all speakers of the Vedda language spoken in Dambana are Adivasi men in the parent generation, who are Sinhala-Vedda bilinguals. Adivasi women and children do not speak the Vedda language but Sinhala in everyday life. Women can understand the Vedda language and would respond to a Vedda language utterance in Sinhala. The use of the Vedda language is largely restricted to self-ascribing Adivasi men who employ it in the context of cultural tourism in Dambana to index ethnolinguistic otherness. The Adivasi of Dambana often refer to this distinct linguistic code that they speak as baasapojja, or language. This research employs a cooperative model of ethnographic documentation to explore the interrelations between discursive practices, linguistic structures, and linguistic (and broader sociocultural) ideologies in this community. The Vedda language has previously been identified as a dialect of Sinhala or a creole emerging in the contact between Sinhala and the ancestral Vedda language. This paper analyzes the current language endangerment context of bilingual Adivasi members, which allows the birth of a mixed language. The aim of this research is to preserve ongoing linguistic innovation among this endangered-language speech community. It contributes to the appreciation of the creative cultural and linguistic production of a stigmatized, minuscule indigenous community of South Asia that strives to assert a linguistic and cultural identity distinct from those of the dominant populations.
Keywords: Vedda language, language endangerment, mixed languages, indigenous identity
355 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to the nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. The nuclear fuel performance code may involve a tolerance level determined by traditional deterministic models producing acceptable results at burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase in reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. The comparison between computational simulation and experimental results was acceptable. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibits the same behavior as during a nuclear accident. The propagation of uncertainty utilizes the Wilks formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
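The sketch below shows the shape of such a multivariate logistic regression fit. The feature list follows the abstract, but the data, units and coefficients are synthetic placeholders, since the RIA test records behind the study are not included; it is not the authors' actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in data; the real inputs would be the variables listed in
# the abstract (burnup, peak power, pulse width, oxide thickness, cladding
# type) with failure/no-failure outcomes from RIA tests and code predictions.
n = 200
burnup = rng.uniform(20, 80, n)          # GWd/MTU
peak_power = rng.uniform(50, 200, n)     # assumed unit
pulse_width = rng.uniform(5, 80, n)      # ms
oxide = rng.uniform(5, 100, n)           # microns
clad = rng.integers(0, 2, n)             # 0/1 encoding of cladding type

X = np.column_stack([burnup, peak_power, pulse_width, oxide, clad])
# Failure is made more likely at high burnup, high peak power, thick oxide layer.
logit = 0.05 * burnup + 0.03 * peak_power - 0.02 * pulse_width + 0.04 * oxide - 10.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_rod = [[75.0, 150.0, 30.0, 60.0, 1]]
print("predicted failure probability:", round(model.predict_proba(new_rod)[0, 1], 3))
```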
354 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem
Authors: Renata Kurpiewska-Korbut
Abstract:
Making the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context, i.e., Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid, together with the theoretical perspective of contingency theory – whose central point is that the context, or a specific set of conditions, determines the way of behavior and the choice of methods of action – help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how the well-established and accurate internal procedures and good practices for using and sharing data (including safeguards for sensitive data) of the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses a fundamental question: whether this crisis experience will have a determining effect on their future performance. The obtained findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.
Keywords: humanitarian data ecosystem, humanitarian data management, polish NGOs, Ukraine
353 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
352 Comparison of FNTD and OSLD Detectors' Responses to Light Ion Beams Using Monte Carlo Simulations and Experimental Data
Authors: M. R. Akbari, H. Yousefnia, A. Ghasemi
Abstract:
Al2O3:C,Mg fluorescent nuclear track detectors (FNTDs) and Al2O3:C optically stimulated luminescence detectors (OSLDs) are becoming two of the commonly applied detectors in ion dosimetry. Therefore, the response of these detectors to hadron beams is of great interest in radiation therapy (RT) using ion beams. In this study, these detectors' responses to proton and Helium-4 ion beams were compared using Monte Carlo simulations. The calculated data for proton beams were compared with Markus ionization chamber (IC) measurements (in a water phantom) from the M.D. Anderson proton therapy center. Monte Carlo simulations were performed via the FLUKA code (version 2011.2-17). The detectors were modeled in cylindrical shape at various depths of the water phantom, without shading each other, to obtain the relative depth dose in the phantom. Mono-energetic parallel ion beams at different incident energies (100 MeV/n to 250 MeV/n) impinged perpendicularly on the phantom surface. For proton beams, the results showed that the simulated detectors over-respond relative to the IC measurements in the water phantom. In all cases, there was good agreement between the simulated ion ranges in water and the calculated and experimental results reported in the literature. For protons, the maximum peak-to-entrance dose ratio in the simulated water phantom was 4.3, compared with about 3 obtained from the IC measurements. For He-4 ion beams, the maximum peak-to-entrance ratio calculated by both detectors was less than 3.6 at all energies. Generally, it can be said that FLUKA is a good tool to calculate the responses of Al2O3:C,Mg FNTD and Al2O3:C OSLD detectors to therapeutic proton and He-4 ion beams. It can also calculate proton and He-4 ion ranges with reasonable accuracy.
Keywords: comparison, FNTD and OSLD detectors response, light ion beams, Monte Carlo simulations
351 English Language Proficiency and Use as Determinants of Transactional Success in Gbagi Market, Ibadan, Nigeria
Authors: A. Robbin
Abstract:
Language selection can be an efficient negotiation strategy employed by both service or product providers and their customers to achieve transactional success. The transactional scenario in Gbagi Market, Ibadan, Nigeria, provides an appropriate setting for the exploration of the Nigerian multilingual situation, with its own interesting linguistic peculiarities, which questions the functionality of the ‘Lingua Franca’ in trade situations. This study examined English language proficiency among Yoruba traders in Gbagi Market, Ibadan, and its use as a determinant of transactional success during service encounters. Randomly selected Yoruba-English bilingual traders and customers were administered questionnaires, and the data were subjected to statistical and descriptive analysis using Giles' Communication Accommodation Theory. Findings reveal that only fifty percent of the traders used for the study were proficient in speaking English. Traders with minimal proficiency in Standard English, however, resorted to the use of Nigerian Pidgin English. Both traders and customers select the mother tongue, which is the Yoruba language, during service encounters, but are quick to converge to the other's preferred language as the transactional exchange demands. The selection of English is not so much for the prestige or lingua franca status of the language as it is for its functions, which include ease of communication, negotiation, and increased sales. The use of English during service encounters is mostly determined by the customer's linguistic preference, which the trader accommodates for better negotiation, and never as a first choice. This convergence is found to be beneficial, as it ensures sales and return patronage. Although the English language is not a preferred code choice in Gbagi Market, it serves as a functional trade strategy for transactional success during service encounters in the market.
Keywords: communication accommodation theory, language selection, proficiency, service encounter, transaction
350 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels
Authors: Foad Hassaninejadafarahani, Scott Ormiston
Abstract:
Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or gas-vapor mixture) and a downward flow of the liquid film. The understanding of this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses in nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and adapts dynamically to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a big step ahead of current capabilities, removing the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation has been done based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number and interface gas mass fraction.
Keywords: reflux, condensation, CFD-two phase, Nusselt number
349 Interrogating Western Political Perspectives of Social Justice in Canadian Social Work
Authors: Samantha Clarke
Abstract:
The term social justice is central to social work; however, the meaning behind this term is not as simple as defining the term itself. This is because the meaning of social justice is relative, since its origin and development are based on evolving political perspectives. Political perspectives provide numerous lenses through which to view social justice in social work; however, the realities of a changing society have meant that social justice has assumed different values, definitions, and understandings over time and in different geopolitical and cultural contexts. There are many competing and convincing theories of social justice that are relevant to social work practice. Exploring the term is not an idle preoccupation, because the meaning of the term is not as crucial as the meaning of the worldview, as it is the worldview that positions social justice as crucial in the emancipation of people marginalized by oppression. The many political assumptions that underlie the term social justice are explored and connected to contemporary discussions about social justice in social work. These connections are then interrogated in the Canadian Social Work Code of Ethics, and in micro, mezzo, and macro approaches. To be remiss in interrogating the underlying political assumptions of the worldview of social justice is to entrench oppression and to preserve oppressive structures in contemporary Canadian social work. The concept of social justice is unable to withstand closer scrutiny of its emancipatory qualities in Canadian social work when we interrogate the many political assumptions that frame its understanding. In order to authenticate social justice as an emancipatory central organizing principle, Canadian social workers must engage in deeper discussions about the political implications of social justice in their everyday practices based on diverse worldviews and geopolitical contexts. Social workers are well positioned to develop an understanding of social justice that is emancipatory based on their everyday practices because, as social and political actors, they are positioned to work for and with individuals and toward the greater good of those who are marginalized by oppression.
Keywords: Canadian social work, political analysis, social justice, social work practice
348 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations about local linear trends. The scale invariance of these signals is well captured in the multifractal characterisation using detrended fluctuation analysis (DFA). Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals, which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to pre-seizure, and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the areas in the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters like diagonal length, entropy, recurrence, determinism, etc. for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that the RQA parameters are higher during the seizure period than the post-seizure values, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding of the characterisation of epileptic EEG signals from a nonlinear analysis.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
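A compact sketch of the DFA computation described above follows; the window sizes, the synthetic test signal, and the single linear fit over all scales are illustrative choices (the study fits separate slopes over small and large scales to obtain the short- and long-term exponents).

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: returns F(n) for each window size n.
    The scaling exponent is the slope of log F(n) vs log n; separate fits over
    small and large scales would give the short- and long-range exponents."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrated, mean-removed series
    fluct = []
    for n in scales:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    return np.array(fluct)

# Synthetic stand-in for an EEG channel (white noise should give alpha ~ 0.5).
rng = np.random.default_rng(0)
sig = rng.standard_normal(5000)
scales = np.unique(np.logspace(2, 3, 10).astype(int))
F = dfa(sig, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("estimated scaling exponent:", round(alpha, 2))
```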
347 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
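A minimal sketch of the one-against-all comparison is shown below with scikit-learn. The data are synthetic stand-ins for the confidential bank records, and ANN-RBF is omitted because scikit-learn has no direct radial-basis-function network; only the overall scheme, not the study's tuning, is illustrated.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the bank data: 15 attributes, 3 classes
# (non-defaulter, defaulter, temporarily defaulter).
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in models.items():
    ovr = OneVsRestClassifier(clf).fit(X_tr, y_tr)   # one-against-all scheme
    print(name, "accuracy:", round(accuracy_score(y_te, ovr.predict(X_te)), 3))
```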
346 'Get the DNR': Exploring the Impact of an Educational eModule on Internal Medicine Residents' Attitudes and Approaches to Goals of Care Conversations
Authors: Leora Branfield Day, Stephanie Saunders, Leah Steinberg, Shiphra Ginsburg, Christine Soong
Abstract:
Introduction: Discordance between patients' expressed and documented preferences at the end of life is common. Although junior trainees frequently lead goals of care (GOC) conversations, lack of training can result in poor communication. Based on a needs assessment, we developed an interactive electronic learning module (eModule) for conducting patient-centred GOC discussions. The purpose of this study was to evaluate the impact of the eModule on residents' attitudes towards GOC conversations. Methods: First-year internal medicine residents (n=11) from the University of Toronto, selected using purposive sampling, underwent semi-structured interviews before and after completing the GOC eModule. Interviews were anonymized, transcribed and open-coded using NVivo. Using a constructivist grounded theory approach, we developed a framework to understand the attitudes of residents to GOC conversations before and after viewing the module. Results: Before the module, participants described limited training and negative emotions towards GOC conversations. Many focused on code status and procedure choices (e.g., ventilation) instead of eliciting patient-centered values. Pressure to "get the DNR" led to conflicting feelings and distress. After the module, participants approached conversations with a greater focus on patient values and process. They felt more prepared and comfortable, recognizing the complexity of the conversations and the importance of patient-centeredness. Conclusions: A novel GOC eModule allowed residents to develop a patient-centered and standardized approach to GOC conversations while improving confidence and preparedness. This resource could be an effective strategy toward attaining a critical communication competency among learners, with the potential to enhance accurate GOC documentation.
Keywords: goals of care conversations, communication skills, emodule, medical education
345 The Untold Story of the Importance of ‘Insignia Imprinted’ for the Heritage Clay Roof Tiles in Malaysia
Authors: M. S. Sulaiman, N. Hassan, M. A. Aziz, M. S. A. Haron, J. H. A. Halim
Abstract:
The classic profile of heritage clay roof tiles gives unique characteristics and a timeless style to many historical buildings. The tiles are not only designed to meet basic construction needs, offering great performance and durability, but also carry often-unnoticed stamp impressions, known as ‘insignia imprinted.’ It seems that the insignia imprinted is not regarded as significant by all stakeholders, especially in preserving heritage clay roof tiles in Malaysia. They do not even realize the existence and importance of that element, which represents the cognitive and social character of its particular era. It creates a sense of belonging for the manufacturers through its most elementary features, such as a fortress, a crown, fauna, etc. This research aims to identify and analyze the late stamp marks on heritage interlocking clay roof tiles in a government heritage building in Malaysia. The methodology used comprises literature reviews (desktop study), site observations, and interviews. Initial findings from the preliminary on-site observations in Peninsular Malaysia show evidence that the stamp marks appear on the front and back sides of the tiles, indicating the year, manufacturer, code numbers, and logos. More than 30 samples of different types of stamp marks were found and collected, some of which are inscribed Guichard & Carvin Cie Marsielle St Andre France, Pierre Sacoman St Henry Marsielle, Tuileries Aixoises Les Milles B.D.R France, The Calicut Tile Co Feroke, and B. Pinto & Co Mangalore, dated 1865, 1919 and 1936. In view of this abundance of material, the findings will lead to the establishment of a comprehensive database consisting of detailed specifications and material performance for future conservation works and maintenance purposes, which will be sustained for future reference.
Keywords: clay roof tiles, insignia imprinted, interlocking, stamp mark
344 Expert System: Debugging Using MD5 Process Firewall
Authors: C. U. Om Kumar, S. Kishore, A. Geetha
Abstract:
An operating system (OS) is software that manages computer hardware and software resources by providing services to computer programs. One of the important user expectations of the operating system is the practice of defending information from unauthorized access, disclosure, modification, inspection, recording or destruction. The operating system is always vulnerable to attacks by malware such as computer viruses, worms, Trojan horses, backdoors, ransomware, spyware, adware, scareware and more. Anti-virus software was therefore created to ensure security against prominent computer viruses by applying a dictionary-based approach. Anti-virus programs are not always guaranteed to provide security against the new viruses proliferating every day. To address this issue and to secure the computer system, our proposed expert system concentrates on authorizing processes, as designated wanted or unwanted by the administrator, for execution. The expert system maintains a database which consists of the hash codes of the processes that are to be allowed. These hash codes are generated using the MD5 message-digest algorithm, which is a widely used cryptographic hash function. The administrator approves the wanted processes that are to be executed on clients in a Local Area Network by implementing a client-server architecture, and only the processes that match the entries in the database table will be executed, by which many malicious processes are restricted from infecting the operating system. The add-on advantage of this proposed expert system is that it limits CPU usage and minimizes resource utilization. Thus, data and information security is ensured by our system, along with increased performance of the operating system.
Keywords: virus, worm, Trojan horse, backdoors, ransomware, spyware, adware, scareware, sticky software, process table, MD5, CPU usage and resource utilization
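The core check the abstract describes, hashing an executable with MD5 and comparing it against an administrator-maintained allowlist, can be sketched as below; the allowlist entry and file path are placeholders, and the block/log step is left as a comment since the system's actual enforcement mechanism is not detailed in the abstract.

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 8192) -> str:
    """MD5 digest of an executable, computed in chunks to limit memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical allowlist the administrator would populate on the server side.
ALLOWED_HASHES = {
    "d41d8cd98f00b204e9800998ecf8427e",   # placeholder entry only
}

def is_execution_allowed(executable_path: str) -> bool:
    return md5_of_file(executable_path) in ALLOWED_HASHES

# Example usage (path is hypothetical):
# if not is_execution_allowed("/usr/local/bin/someprocess"):
#     pass  # deny execution and log the event
```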
343 A New Formulation Of The M And M-theta Integrals Generalized For Virtual Crack Closure In A Three-dimensional Medium
Authors: Loïc Chrislin Nguedjio, S. Jerome Afoutou, Rostand Moutou Pitti, Benoit Blaysat, Frédéric Dubois, Naman Recho, Pierre Kisito Talla
Abstract:
The safety and durability of structures remain challenging fields that continue to draw the attention of designers. One widely adopted approach is fracture mechanics, which provides methods to evaluate crack stability in complex geometries and under diverse loading conditions. The global energy approach is particularly comprehensive, as it calculates the energy release rate required for crack initiation and propagation using path-independent integrals. This study aims to generalize these invariant, path-independent integrals, with the goal of enhancing the accuracy of failure predictions. The ultimate objective is to create more robust materials while optimizing structural safety and durability. By integrating the real and virtual field method with the virtual crack closure technique, a new formulation of the M-integral is introduced. This formulation establishes a direct relationship between the local stresses on the crack faces and the opening displacements, allowing for an accurate calculation of the fracture energy. The analytical calculations are grounded in the assumption that the energy needed to close a crack virtually is equal to the energy released during its opening. This novel integral is implemented in a finite element code using Cast3M to simulate cracking criteria within a wood material context. Initially, the numerical calculations focus on plane strain conditions, but they are later extended to three-dimensional environments, taking into account the orthotropic nature of wood.
Keywords: energy release rate, path-independent integrals, virtual crack closure, orthotropic material
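For orientation, the classic virtual crack closure relation on which the approach builds can be evaluated as in the sketch below; this is the standard 2D mode-I VCCT formula, not the paper's new M-integral formulation, and the nodal values are hypothetical FE outputs.

```python
def vcct_mode_i_energy_release_rate(f_tip: float, delta_v: float,
                                    delta_a: float, thickness: float = 1.0) -> float:
    """Classic 2D virtual crack closure estimate of the mode-I energy release rate:
        G_I = F_y * dv / (2 * da * B)
    where F_y is the nodal force holding the crack-tip node pair together,
    dv the opening displacement of the node pair just behind the tip,
    da the crack-tip element length and B the thickness. It rests on the
    assumption that the energy needed to close the crack virtually equals the
    energy released when it opens, as stated in the abstract."""
    return f_tip * delta_v / (2.0 * delta_a * thickness)

# Hypothetical FE output at the crack tip (N, m):
print(vcct_mode_i_energy_release_rate(f_tip=120.0, delta_v=2.0e-5,
                                      delta_a=1.0e-3, thickness=0.01), "J/m^2")
```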
342 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, so that it becomes possible to determine particle breakage, and then the fragment size distribution, simultaneously with the DEM simulation. The approach applies the particle breakage directly inside the DEM computation algorithm, and if any breakage happens, the original particle is replaced with its daughters. In this way, the calculation proceeds on a newly updated particle list, which is very similar to the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss, and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. Such a phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
Keywords: particle bed, breakage models, breakage kinetic, discrete element method
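The particle-replacement step described above can be sketched as follows. The breakage criterion (a strength threshold on the maximum contact force), the equal-volume fragmentation, and the strengthening factor for the daughters are all illustrative assumptions, not the study's actual breakage model.

```python
import dataclasses
import random

@dataclasses.dataclass
class Particle:
    diameter: float   # m
    strength: float   # contact force at which the particle breaks, N (assumed criterion)

def break_particle(parent: Particle, n_fragments: int = 4) -> list:
    """Replace a parent by equal-volume daughters conserving total volume.
    A real breakage model would sample a fragment size distribution instead;
    daughters are made relatively stronger as an assumed size effect."""
    d_frag = parent.diameter / n_fragments ** (1.0 / 3.0)
    return [Particle(d_frag, parent.strength * 1.5) for _ in range(n_fragments)]

def update_particle_list(particles, contact_forces):
    """Core of the 'direct' approach: after each DEM step, any particle whose
    maximum contact force exceeds its strength is removed and its daughters
    are appended, so the next step runs on the updated list."""
    updated = []
    for particle, force in zip(particles, contact_forces):
        updated.extend(break_particle(particle) if force > particle.strength else [particle])
    return updated

random.seed(0)
bed = [Particle(diameter=2e-3, strength=random.uniform(5.0, 15.0)) for _ in range(5)]
forces = [random.uniform(0.0, 20.0) for _ in bed]
bed = update_particle_list(bed, forces)
print(len(bed), "particles after the breakage check")
```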
Procedia PDF Downloads 199341 Recent Developments and Expectations in the Legal Expenses Insurance in Turkey
Authors: İbrahim Arslan, Mücahit Ünal
Abstract:
An important aspect of ensuring justice is easing the exercise of the right to seek justice. Seeking justice in civil law, however, has a cost: at a minimum, attorneys' fees and judicial expenses at the outset of a trial and in the event of losing it. Indeed, many people refrain from seeking justice because of these expenses, so it is fair to say that removing the obstacles standing in the way of seeking justice will strengthen confidence in justice. Legal expenses insurance is a private law insurance contract in which the insurer, in return for the premiums paid by the insured, is obliged to provide the services necessary for the protection of the insured person's legal interests within the agreed scope. This type of insurance has been practiced in the Western world for a long time. The specific rights, duties and obligations of the parties to a legal expenses insurance contract are governed by the Turkish Commercial Code (TCC) and by the contractual agreements, which are regularly concluded in the form of general terms and conditions. If the number of legal expenses insurance contracts concluded increases, the proportion of people seeking justice before the courts will certainly improve. The general terms and conditions applicable in Turkey generally cover litigation costs, referee fees, the guarantee fund, enforcement costs, appeal costs and decision correction costs. In addition to the insured, other family members or the persons specified in the policy are protected under personal/family legal expenses insurance. Commercial law disputes fall outside the scope of coverage in this insurance branch. The insured person chooses his or her own lawyer, and the insurer is not allowed to give advice during the selection of a lawyer. In April 2015, the Prime Minister announced a new era in the field of legal expenses insurance in Turkey, an announcement that excited the insurance industry and the legal community.Keywords: insurance, legal protection insurance in Turkish law, legal protection insurance, legal protection
Procedia PDF Downloads 359340 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings
Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada
Abstract:
The seismic design of buildings is classified into two types: static design and dynamic design. Static design applies the seismic action as an equivalent static force; it is a relatively simple method developed from the experience of seismic motion over the past 100 years, and it is currently used for most Japanese buildings. Dynamic design mainly refers to time history response analysis, a comparatively demanding method in which an assumed earthquake motion is input to a building model and the response is examined; it is currently used only for skyscrapers and certain specific buildings. Under the present Japanese design standard, either static or dynamic design may be used for medium- and high-rise buildings. However, when medium- and high-rise buildings are actually designed by both methods, the relatively simple static design usually satisfies the criteria, whereas the more demanding dynamic design often does not. This is because the dynamic design method was developed with super high-rise buildings in mind; higher safety is required than for ordinary buildings, and the criteria are correspondingly stricter. The authors consider applying the dynamic design method to ordinary buildings that have so far been designed by the static method, because the dynamic method is reasonable for buildings that depart from conventional structural forms, such as those emphasizing architectural design. For this purpose, it is important to compare design results with the criteria of the two methods placed side by side. In this study, time history response analysis was performed on medium-rise buildings that had actually been designed by the allowable stress method. A quantitative comparison between static and dynamic design was conducted, and the characteristics of both design methods were examined.Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code
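For readers unfamiliar with the dynamic method mentioned above, the following Python sketch shows a textbook time history response analysis of a single-degree-of-freedom oscillator using the Newmark average-acceleration scheme; the model, parameters and input record are illustrative assumptions and are unrelated to the buildings analysed in the study.

```python
import math

def newmark_sdof(m, c, k, ground_acc, dt, beta=0.25, gamma=0.5):
    """Relative displacement history of a single-degree-of-freedom model
    under a ground acceleration record (Newmark average acceleration)."""
    u = v = 0.0
    a = -ground_acc[0]  # from m*a + c*0 + k*0 = -m*ag at t = 0
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    disp = [u]
    for ag in ground_acc[1:]:
        p_eff = (-m * ag
                 + m * (u / (beta * dt ** 2) + v / (beta * dt) + a * (1 / (2 * beta) - 1))
                 + c * (gamma * u / (beta * dt) + v * (gamma / beta - 1)
                        + a * dt * (gamma / (2 * beta) - 1)))
        u_new = p_eff / k_eff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + v * (1 - gamma / beta) + a * dt * (1 - gamma / (2 * beta)))
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - a * (1 / (2 * beta) - 1)
        u, v, a = u_new, v_new, a_new
        disp.append(u)
    return disp

# Illustrative use: a 0.5 s period, 5% damped oscillator under a toy record.
T, zeta, m = 0.5, 0.05, 1.0
k = m * (2 * math.pi / T) ** 2
c = 2 * zeta * math.sqrt(k * m)
record = [0.3 * math.sin(2 * math.pi * 0.01 * i) for i in range(500)]  # m/s^2
peak = max(abs(x) for x in newmark_sdof(m, c, k, record, dt=0.01))
print(f"peak relative displacement: {peak:.4f} m")
```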
Procedia PDF Downloads 157339 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) that simulates hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policy or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable distributed memory parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of the computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer–employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy; as an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
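The overlap of communication with computation described above can be illustrated with a minimal mpi4py sketch of one time step: boundary-agent states are exchanged with non-blocking messages while interior agents are updated locally. The partitioning, agent structure and update rule are hypothetical placeholders and do not reflect the authors' implementation.

```python
from mpi4py import MPI  # assumes mpi4py is available; run with mpiexec -n <p>

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical local population: agents owned by this rank; the first few also
# interact with agents on neighbouring ranks.
local_agents = [{"id": 1000 * rank + i, "wealth": 1.0} for i in range(1000)]
boundary_agents, interior_agents = local_agents[:10], local_agents[10:]
neighbours = [r for r in (rank - 1, rank + 1) if 0 <= r < size]

def update(agents):
    """Purely local update (placeholder for consumption, production, etc.)."""
    for a in agents:
        a["wealth"] *= 1.001

# 1) Post non-blocking sends/receives of the small, pickled boundary states.
send_reqs = [comm.isend(boundary_agents, dest=r, tag=0) for r in neighbours]
recv_reqs = [comm.irecv(source=r, tag=0) for r in neighbours]

# 2) Overlap: advance the interior agents while the messages are in flight.
update(interior_agents)

# 3) Complete the exchange, then resolve cross-partition interactions.
remote_agents = [a for req in recv_reqs for a in req.wait()]
for req in send_reqs:
    req.wait()
update(boundary_agents + remote_agents)
```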
Procedia PDF Downloads 130