Search results for: multi pass weld
2363 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival
Authors: Ichiro Takahashi
Abstract:
One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below its productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found that the virtual economy was highly unstable, or more precisely, collapsing, when these parameters were fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment will result in insufficient productive capacity. However, for an economy characterized by a roundabout production method, a gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide such buoyancy for stimulating investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation period means that once a firm starts a new investment, it continues to invest over several subsequent periods. During these gestation periods, the excess demand created by the investing firm spills over and ignites new investment by other firms that supply investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom if the gestation period is short. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction in real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between the marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but its actual labor demand tends to be determined by the derived labor demand; thus, the second, positive effect would not work effectively. In contrast, for an economy with a large number of firms, the Walrasian firms will increase employment. This interaction among heterogeneous firms is key to stability. A single firm cannot expect the benefit of such an increase in aggregate demand from other firms.
Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability
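To make the spillover mechanism in (a) concrete, the following toy simulation (a hypothetical sketch, not the authors' Wicksell-Keynes model; all parameters are invented) shows how a longer gestation period lets one firm's investment orders keep igniting projects at supplier firms, while a one-period gestation lets the excess demand fade before a boom can form.

```python
# Toy illustration (not the authors' model): a multi-period gestation period
# lets one firm's investment orders spill over and trigger investment by
# supplier firms. All parameters below are hypothetical.
import random

def simulate(gestation, n_firms=50, periods=100, spark=0.02, threshold=2):
    random.seed(0)
    backlog = [0] * n_firms   # remaining gestation periods of each firm's project
    orders = [0] * n_firms    # investment-goods orders received this period
    series = []
    for _ in range(periods):
        new_orders = [0] * n_firms
        for i in range(n_firms):
            # a firm starts a project if hit by enough demand or by a random spark
            if backlog[i] == 0 and (orders[i] >= threshold or random.random() < spark):
                backlog[i] = gestation
            if backlog[i] > 0:
                backlog[i] -= 1
                # while investing, the firm places orders with two random suppliers
                for j in random.sample(range(n_firms), 2):
                    new_orders[j] += 1
        orders = new_orders
        series.append(sum(1 for b in backlog if b > 0))
    return sum(series) / periods   # average number of firms investing per period

print("avg investing firms, gestation=1:", simulate(1))
print("avg investing firms, gestation=4:", simulate(4))
```

With a gestation of one period the sparks rarely chain together, whereas with a longer gestation the same sparks sustain a larger pool of simultaneously investing firms, which is the interaction field described above.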
Procedia PDF Downloads 162
2362 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States
Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala
Abstract:
Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that for organizations and businesses, resilience is a set of processes that are constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that, together, contribute to the resilience of disaster-struck organizations, businesses, and their communities. Using field research about organizations and businesses impacted by the four hurricanes, we code data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower Keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute to effectively sustain organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations, and the people who lead them, worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.
Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work
Procedia PDF Downloads 171
2361 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector
Authors: Dewan Ahsan
Abstract:
Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector is playing the pivotal role in achieving this target. Though the offshore wind sector is growing rapidly in Denmark, there is still a dearth of synchronization in OHS (occupational health and safety) regulations and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. This study has identified several key challenges in the OHS management system: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, the absence of a harmonized OHS standard, and a blame culture. Furthermore, this research has identified eleven key stakeholders who are actively involved with the offshore wind business in Denmark. As noticed, the relationships among these stakeholders are very complex, especially between operators and sub-contractors. The respondent technicians are concerned with compliance with various third-party OHS standards (e.g. ISO 31000, ISO 29400, good practice guidelines by G+) which are applied by various offshore companies. On top of these standards, operators also impose their own OHS standards. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector. So, it is a big challenge for the technicians and sub-contractors to comply with different company-specific standards, which also raises the price of the services they offer to the operators. For instance, when a sub-contractor competes for a bid, it must fulfill a number of OHS requirements (which demand much extra documentation) set by the individual operator and/or the turbine supplier. From the sub-contractors' point of view, this extra work consumes too much time in preparing the bidding documents, and they also need to train their employees to pass specific OHS certification courses to meet the demands of individual clients and individual projects. The sub-contractors argued that in many cases these extra documents and OHS certificates are inessential to ensuring quality service. So, a standardized OHS management procedure (which could be applicable for all clients) can easily solve this problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable to all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially among operators and between operators and sub-contractors) are the most vital strategies to overcome the existing challenges and to achieve the goal of 'zero accident/harm' in the offshore wind industry.
Keywords: green energy, offshore, safety, Denmark
Procedia PDF Downloads 214
2360 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover details of the scene from a foggy image. For enhancement, the proposed method focuses on two main categories: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) image edge strengthening using a gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase accuracy, an interpolation method is used in the output reconstruction. Promising retrieval performance is achieved, especially in particular examples.
Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
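As a concrete illustration of the per-pixel, weight-map-driven fusion idea (a minimal sketch under assumed choices of inputs and weights, not the authors' exact pipeline; the file name and weighting terms are placeholders):

```python
# Minimal per-pixel fusion sketch: fuse a CLAHE-equalized input and an
# edge-strengthened input using normalized weight maps. Illustrative only.
import cv2
import numpy as np

img = cv2.imread("hazy.jpg").astype(np.float32) / 255.0   # hypothetical input image

# Input 1: adaptive contrast histogram equalization on the luminance channel
lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab[:, :, 0] = clahe.apply(lab[:, :, 0])
input1 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0

# Input 2: edge-strengthened image using a gradient (unsharp-mask style) model
blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
input2 = np.clip(img + 1.0 * (img - blur), 0, 1)

def weight_map(x):
    # per-pixel weight from local contrast (Laplacian magnitude) plus saturation
    gray = cv2.cvtColor((x * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    contrast = np.abs(cv2.Laplacian(gray.astype(np.float32) / 255.0, cv2.CV_32F))
    saturation = x.std(axis=2)
    return contrast + saturation + 1e-6

w1, w2 = weight_map(input1), weight_map(input2)
w1n = (w1 / (w1 + w2))[..., None]           # normalized per-pixel weights
fused = w1n * input1 + (1 - w1n) * input2   # fused haze-reduced output
cv2.imwrite("dehazed.jpg", (fused * 255).astype(np.uint8))
```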
Procedia PDF Downloads 464
2359 How to Perform Proper Indexing?
Authors: Watheq Mansour, Waleed Bin Owais, Mohammad Basheer Kotit, Khaled Khan
Abstract:
Efficient query processing is one of the utmost requisites in any business environment to satisfy consumer needs. This paper investigates the various types of indexing models, viz. primary, secondary, and multi-level. The investigation considers the various types of queries for which each indexing model performs with efficacy. This study also discusses the inherent advantages and disadvantages of each indexing model and how an indexing model can be chosen based on a particular environment. This paper also draws parallels between the various indexing models and provides recommendations that would help a database administrator zero in on a particular indexing model suited to the needs and requirements of the production environment. In addition, to satisfy industry and consumer needs arising from the colossal data generation of today, this study proposes two novel indexing techniques that can be used to index highly unstructured and structured Big Data with efficacy. The study also briefly discusses some best practices that the industry should follow in order to choose an indexing model that is apposite to its prerequisites and requirements.
Keywords: indexing, hashing, latent semantic indexing, B-tree
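To illustrate the basic trade-off between the index families discussed above (a hypothetical sketch; the table and keys are invented), a hash index answers equality lookups in constant expected time, while a sorted, B-tree-like index also supports range queries:

```python
# Hash index vs. sorted (B-tree-like) index: point lookups vs. range queries.
import bisect

rows = [(101, "alice"), (205, "bob"), (307, "carol"), (410, "dave")]

# Hash index on the key column: constant expected time for point queries
hash_index = {key: (key, name) for key, name in rows}
print(hash_index[307])                  # equality lookup

# Sorted index (a stand-in for a B-tree): supports point and range queries
sorted_keys = sorted(hash_index)

def range_query(lo, hi):
    i = bisect.bisect_left(sorted_keys, lo)
    j = bisect.bisect_right(sorted_keys, hi)
    return [hash_index[k] for k in sorted_keys[i:j]]

print(range_query(200, 400))            # rows with 200 <= key <= 400
```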
Procedia PDF Downloads 156
2358 Sum Capacity with Regularized Channel Inversion in Multi-Antenna Downlink Systems under Equal Power Constraint
Authors: Attaullah Khawaja, Amna Shabbir
Abstract:
Channel inversion is one of the simplest techniques for multiuser downlink systems with single-antenna users. In this paper, regularized channel inversion under an equal power constraint in multiuser multiple-input multiple-output (MU-MIMO) broadcast channels is considered. Sum capacity with plain channel inversion, also known as zero-forcing beamforming (ZFBF), and the optimum sum capacity obtained with dirty paper coding (DPC) have also been investigated. Analysis and simulations show that regularization enhances system performance, enables linear growth in sum capacity, and works especially well in the low signal-to-noise ratio (SNR) regime.
Keywords: broadcast channel, channel inversion, multiple antenna multiple-user wireless, multiple-input multiple-output (MIMO), regularization, dirty paper coding (DPC), sum capacity
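The core operation can be sketched in a few lines of numpy (an illustrative sketch using the common regularization choice alpha = K*sigma^2/P; the channel below is randomly generated, not data from the paper). Setting alpha to zero recovers plain channel inversion (ZFBF):

```python
# Regularized channel inversion: W = H^H (H H^H + alpha*I)^(-1), power-normalized.
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 4                        # single-antenna users, transmit antennas
P, sigma2 = 1.0, 0.1               # total transmit power and noise variance
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def precoder(H, alpha):
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W * np.sqrt(P / np.trace(W @ W.conj().T).real)   # equal total power

def sum_rate(H, W, sigma2):
    G = H @ W                                   # effective user-to-user channel
    rates = []
    for k in range(K):
        signal = np.abs(G[k, k]) ** 2
        interference = np.sum(np.abs(G[k, :]) ** 2) - signal
        rates.append(np.log2(1 + signal / (interference + sigma2)))
    return sum(rates)

print("ZFBF        :", sum_rate(H, precoder(H, 0.0), sigma2))
print("Regularized :", sum_rate(H, precoder(H, K * sigma2 / P), sigma2))
```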
Procedia PDF Downloads 527
2357 The Multi-Lingual Acquisition Patterns of Elementary, High School and College Students in Angeles City, Philippines
Authors: Dennis Infante, Leonora Yambao
Abstract:
The Philippines is a multilingual community. A Filipino learns at least three languages throughout his or her lifespan. Since languages are learned and picked up simultaneously from the environment, a student naturally develops a language system that combines features of at least three languages: the local language, English, and Filipino. This study investigates this particular phenomenon and proposes a theoretical framework of this unique language acquisition among elementary, high school, and college students in the three languages spoken and used in media, community, business, and school: Kapampangan, the local language; Filipino, the national language; and English. The study randomly selected five students from three participating schools in order to acquire language samples. The samples were analyzed at the subsentential, sentential, and suprasentential levels using grammatical theories. The data are classified to map out the pattern of substitution or shifting from one language to another.
Keywords: language acquisition, mother tongue, multiculturalism, multilingual education
Procedia PDF Downloads 380
2356 The Effects of Stoke's Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators
Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari
Abstract:
NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers at construction sites, where large amounts of dust, both electrostatically charged and uncharged, are produced by sawing, grinding, blasting, welding, etc. A significant portion of airborne particles at construction sites could be nanoparticles, created alongside coarse particles. The penetration of particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this current study, we found that nanoparticles of medium size ranges penetrate more frequently than nanoparticles of smaller and larger sizes. For example, penetration percentages of nanoparticles of 11.5-27.4 nm into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of nanoparticles of 36.5-86.6 nm ranged from 7.34 to 16.04%. The possible causes behind this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes behind this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, Stokes' drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only the particles that have enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes' drag in the mask can pass through the mask. To understand this process, the following assumptions were made: (1) the effect of Stokes' drag depends on the particles' velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particles, which in turn is proportional to the surface area of the particles; (3) the general dependence on electrostatic charge and thickness means that the stronger the electrostatic resistance in the mask and the thicker the mask's fiber layers, the lower the penetration of particles, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol, eliminating electrostatic interaction, the penetration in the mid-range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the interaction of the particle with its own drag force. If there is no electrostatic force, the fraction for larger particles grows; but if the electrostatic force is added, the fraction for larger particles goes down, so the diminished penetration of larger particles should be due to increased electrostatic repulsion, possibly caused by their larger surface area and therefore larger charge on average. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of particle penetration on temperature is weak in the range of temperatures in the measurements, 37-42°C, since the factor changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force
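The back-of-the-envelope sketch below (assumed particle density, air viscosity, and surface charge density; not the study's model) shows how the thermal kinetic energy, the Stokes' drag force, and a surface-area-proportional charge scale with particle diameter, which is the competition the assumptions above describe:

```python
# Scaling of thermal kinetic energy, Stokes' drag and surface charge with size.
import math

k_B = 1.380649e-23        # J/K
T = 310.0                 # K, roughly the measurement temperature range
mu = 1.8e-5               # Pa*s, dynamic viscosity of air
rho = 2000.0              # kg/m^3, assumed particle density
sigma_q = 1e-6            # C/m^2, assumed surface charge density

for d_nm in (20, 60, 300):
    d = d_nm * 1e-9
    m = rho * math.pi * d**3 / 6.0
    E_thermal = 1.5 * k_B * T                       # mean thermal kinetic energy
    v_thermal = math.sqrt(3.0 * k_B * T / m)        # thermal speed
    F_drag = 3.0 * math.pi * mu * d * v_thermal     # Stokes' drag at that speed
    q = sigma_q * math.pi * d**2                    # charge proportional to surface area
    print(f"d={d_nm:4d} nm  E={E_thermal:.2e} J  v={v_thermal:8.2f} m/s  "
          f"F_drag={F_drag:.2e} N  q={q:.2e} C")
```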
Procedia PDF Downloads 194
2355 Artificial Intelligence Methods in Estimating the Minimum Miscibility Pressure Required for Gas Flooding
Authors: Emad A. Mohammed
Abstract:
Utilizing the capabilities of data mining and artificial intelligence in predicting the minimum miscibility pressure (MMP) required for multi-contact miscible (MCM) displacement of reservoir petroleum by hydrocarbon gas flooding, using fuzzy logic models and artificial neural network models, helps greatly in producing accurate results. The factors affecting the MMP, as established in the literature and in the dataset, are as follows: XC2-6, the intermediate composition in the oil containing C2-6, CO2 and H2S (mole %); XC1, the amount of methane in the oil (%); T, the temperature (°C); MwC7+, the molecular weight of C7+ (g/mol); YC2+, the mole percent of C2+ in the injected gas (%); and MwC2+, the molecular weight of C2+ in the injected gas. Fuzzy logic and neural networks have been used widely in prediction and classification, with relatively high accuracy, in different fields of study. It is well known that a fuzzy inference system can handle uncertainty within the inputs, as in our case. The results of this work show that the proposed models perform better, with higher performance indices, than other empirical correlations.
Keywords: MMP, gas flooding, artificial intelligence, correlation
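A minimal sketch of the neural-network half of such a predictor (the feature columns follow the list above, but the training arrays are random placeholders, not the study's dataset; hyperparameters are illustrative):

```python
# ANN regression on the six MMP factors listed above (synthetic placeholder data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# feature columns: XC2-6, XC1, T, MwC7+, YC2+, MwC2+
X = rng.uniform([5, 10, 40, 150, 50, 30], [40, 60, 120, 300, 90, 60], size=(200, 6))
# fake MMP target (MPa) with a loose dependence on temperature and oil heaviness
y = 5 + 0.05 * X[:, 2] + 0.02 * X[:, 3] - 0.03 * X[:, 0] + rng.normal(0, 0.5, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
model.fit(X[:150], y[:150])
print("R^2 on held-out samples:", model.score(X[150:], y[150:]))
```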
Procedia PDF Downloads 144
2354 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction
Authors: Zhengrong Wu, Haibo Yang
Abstract:
In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which heavily rely on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Utilizing the P-Tuning method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as a knowledge base supporting disaster emergency response.
Keywords: large language model, knowledge graph, disaster, deep learning
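A hedged sketch of how such fine-tuning could be wired up (assuming the Hugging Face transformers and peft libraries; the abstract does not give the actual training code, and the question-answer example below is hypothetical):

```python
# P-Tuning-style fine-tuning of ChatGLM2-6B with a learned prompt encoder.
# Assumed wiring only; the study's real training setup is not described here.
from transformers import AutoModel, AutoTokenizer
from peft import PromptEncoderConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm2-6b"                     # public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

# P-Tuning: a small prompt encoder learns continuous prompt embeddings while
# the 6B backbone stays frozen.
peft_config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=32,
    encoder_hidden_size=256,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()    # only the prompt encoder is trainable

# A disaster question-answer template of the kind described above (hypothetical)
example = "Q: Which river basin was affected by the 2020 flood event? A: ..."
inputs = tokenizer(example, return_tensors="pt")
# ...feed `inputs` to a standard causal-LM training loop to fit the soft prompts
```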
Procedia PDF Downloads 56
2353 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717, 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
Keywords: biomarker, diagnostic, neurology, TBI
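The reported clinical metrics follow directly from the counts quoted above; a short worked check:

```python
# Sensitivity, specificity and NPV from the study's reported counts.
tp, fn = 116, 4          # CT-positive subjects: TBI test positive / negative
tn, fp = 713, 1066       # CT-negative subjects: TBI test negative / positive

sensitivity = tp / (tp + fn)            # 116 / 120
specificity = tn / (tn + fp)            # 713 / 1779
npv = tn / (tn + fn)                    # 713 / 717

print(f"Sensitivity: {sensitivity:.1%}")   # ~96.7%
print(f"Specificity: {specificity:.1%}")   # ~40.1%
print(f"NPV:         {npv:.1%}")           # ~99.4%
```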
Procedia PDF Downloads 66
2352 Core Number Optimization Based Scheduler to Order/Mapp Simulink Application
Authors: Asma Rebaya, Imen Amari, Kaouther Gasmi, Salem Hasnaoui
Abstract:
Over recent years, the number of cores in digital signal and general-purpose processors has increased spectacularly. Concurrently, significant research has been done to benefit from this high degree of parallelism. Indeed, this research is focused on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our results are evaluated and compared with the Preesm scheduler results, and we show that our scheduler allows better scheduling in terms of latency, computation time, and number of cores.
Keywords: computation time, hardware/software system, latency, optimization, multi-cores platform, scheduling
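A simplified sketch of the underlying idea (a hypothetical task graph and a plain greedy list scheduler, not the paper's algorithm): schedule the application on an increasing number of cores and stop once adding a core no longer reduces latency:

```python
# Greedy list scheduling of a small task graph; find the core count beyond
# which latency no longer improves. Task graph and durations are invented.
durations = {"A": 3, "B": 2, "C": 2, "D": 4, "E": 1, "F": 3}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C"], "F": ["D", "E"]}

def latency(n_cores):
    finish, cores = {}, [0.0] * n_cores
    remaining = dict(preds)
    while remaining:
        # pick any task whose predecessors are all finished
        task = next(t for t, p in remaining.items() if all(x in finish for x in p))
        ready = max([finish[x] for x in preds[task]], default=0.0)
        c = min(range(n_cores), key=lambda i: cores[i])   # earliest-free core
        start = max(cores[c], ready)
        finish[task] = start + durations[task]
        cores[c] = finish[task]
        del remaining[task]
    return max(finish.values())

prev = None
for n in range(1, len(durations) + 1):
    lat = latency(n)
    print(f"{n} core(s): latency = {lat}")
    if prev is not None and lat >= prev:
        print("-> no further gain; chosen core count:", n - 1)
        break
    prev = lat
```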
Procedia PDF Downloads 283
2351 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters, i.e., the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit the evaluation of a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes, i.e., to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, with 97%. The fitness function "Material Removal Rate" (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
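The evaluation idea can be sketched compactly (synthetic process data, not the project's spindle measurements; the MRR proxy and the Lasso hyperparameter are illustrative): compute each fitness function on the same parameter sets and correlate it with the measured power:

```python
# Compare an MRR-based fitness function and a Lasso-regression fitness function
# by their correlation with (here: synthetic) measured spindle power.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 80
depth = rng.uniform(0.1, 2.0, n)        # depth of cut (mm)
speed = rng.uniform(1000, 8000, n)      # spindle rotation speed (rpm)
feed = rng.uniform(0.01, 0.2, n)        # material feed rate (mm/rev)
X = np.column_stack([depth, speed, feed])
power = 50 + 8 * depth + 0.01 * speed + 300 * feed + rng.normal(0, 5, n)  # fake measurements

# Fitness 1: Material Removal Rate proxy (proportional to depth * feed * speed)
mrr = depth * feed * speed

# Fitness 2: Lasso regression learned from the (normalized) measured data
Xn = (X - X.mean(0)) / X.std(0)
lasso_pred = Lasso(alpha=0.1).fit(Xn, power).predict(Xn)

print("corr(MRR,   power):", np.corrcoef(mrr, power)[0, 1])
print("corr(Lasso, power):", np.corrcoef(lasso_pred, power)[0, 1])
```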
Procedia PDF Downloads 147
2350 The Fibonacci Network: A Simple Alternative for Positional Encoding
Authors: Yair Bleiberg, Michael Werman
Abstract:
Coordinate-based Multi-Layer Perceptrons (MLPs) are known to have difficulty reconstructing the high frequencies of the training data. A common solution to this problem is Positional Encoding (PE), which has become quite popular. However, PE has drawbacks: it produces high-frequency artifacts and adds another hyperparameter, just as batch normalization and dropout do. We believe that under certain circumstances, PE is not necessary, and a smarter construction of the network architecture together with a smart training method is sufficient to achieve similar results. In this paper, we show that very simple MLPs can quite easily output a frequency when given the half-frequency and quarter-frequency as input. Using this, we design a network architecture in blocks, where the input to each block is the output of the two previous blocks along with the original input. We call this a Fibonacci Network. By training each block on the corresponding frequencies of the signal, we show that Fibonacci Networks can reconstruct arbitrarily high frequencies.
Keywords: neural networks, positional encoding, high frequency interpolation, fully connected
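A small PyTorch sketch of that block wiring (depths, widths and the coordinate dimension are illustrative choices, not the paper's settings):

```python
# Fibonacci-style block wiring: each block sees the original coordinates plus
# the outputs of the two previous blocks.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, in_dim, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

class FibonacciNetwork(nn.Module):
    def __init__(self, coord_dim=1, n_blocks=5, out_dim=1):
        super().__init__()
        self.blocks = nn.ModuleList()
        for i in range(n_blocks):
            extra = out_dim * min(i, 2)            # outputs of up to two previous blocks
            self.blocks.append(Block(coord_dim + extra, out_dim=out_dim))
    def forward(self, coords):
        outputs = []
        for block in self.blocks:
            x = torch.cat([coords] + outputs[-2:], dim=-1)
            outputs.append(block(x))
        return outputs                             # one output per frequency band

coords = torch.linspace(0, 1, 256).unsqueeze(-1)
outs = FibonacciNetwork()(coords)
print([o.shape for o in outs])
```

Each block would then be trained against the band of the target signal whose frequency corresponds to its position in the sequence.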
Procedia PDF Downloads 98
2349 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks
Authors: Sunmyeng Kim
Abstract:
IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases the WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a relay node with a high rate than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not very feasible in multi-flow environments since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
Keywords: cooperative communications, MAC protocol, relay node, WLAN
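The intuition can be sketched with a toy scoring rule (hypothetical rates and contention levels; the paper's actual metric is not given in the abstract): the two-hop path is only worth using when its contention-discounted effective rate beats the direct low-rate link:

```python
# Toy relay selection: effective two-hop rate discounted by a contention factor.
def effective_rate(rate_src_relay, rate_relay_dst):
    # time to carry one bit over both hops -> effective end-to-end rate
    return 1.0 / (1.0 / rate_src_relay + 1.0 / rate_relay_dst)

def relay_score(rate_sr, rate_rd, contention):
    return effective_rate(rate_sr, rate_rd) * (1.0 - contention)

direct_rate = 2.0                                   # Mb/s, low-rate direct link
candidates = {                                      # (src->relay, relay->dst, contention)
    "R1": (11.0, 11.0, 0.2),
    "R2": (54.0, 11.0, 0.6),
    "R3": (11.0, 5.5, 0.1),
}

best = max(candidates, key=lambda r: relay_score(*candidates[r]))
print("best relay:", best, "score:", relay_score(*candidates[best]))
print("use relay instead of direct link?", relay_score(*candidates[best]) > direct_rate)
```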
Procedia PDF Downloads 333
2348 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems
Authors: Jason Digalakis, John Cotronis
Abstract:
Memetic algorithms (MAs) are useful for solving optimization problems, but it is quite difficult to search the solution space of optimization problems with large dimensions, and there is a challenge in using all the cores of the system. In this study, a sequential implementation of the memetic algorithm is converted into a concurrent version, which is executed on the cores of both the CPU and the GPU. For this reason, the OpenMP and CUDA libraries are applied to the parallel algorithm to enable concurrent execution on the CPU and GPU, respectively. The aim of this study is to compare the CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach leads to speedups of up to five thousand times compared to one CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks.
Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems
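For reference, a minimal sequential memetic algorithm on a benchmark sphere function (an illustrative sketch; population size, operators, and the benchmark are placeholders, not the paper's fourteen test functions). The per-individual evaluation and local search inside the loops is exactly the work that the CUDA and OpenMP versions distribute across GPU and CPU cores:

```python
# Sequential memetic algorithm: a genetic algorithm with hill-climbing local search.
import random

DIM, POP, GENS = 10, 40, 100

def fitness(x):                       # sphere benchmark: minimum 0 at the origin
    return sum(v * v for v in x)

def local_search(x, step=0.05, iters=20):
    best = list(x)
    for _ in range(iters):            # simple hill climbing (the "meme")
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]
        children.append(local_search(child))      # refine each child locally
    pop = parents + children

print("best fitness:", fitness(min(pop, key=fitness)))
```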
Procedia PDF Downloads 101
2347 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but are computationally very intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but they require fewer computational resources and less time. Asymptotic techniques can therefore be very valuable for the prediction of the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles, PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criterion. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
Procedia PDF Downloads 258
2346 Planning a Supply Chain with Risk and Environmental Objectives
Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali
Abstract:
The main objective of the current work is to introduce sustainability factors into optimizing the supply chain model for process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used in extending the single objective function to a multi-objective function proves to be quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors would slightly reduce the economic benefit while improving the environmental and risk-reduction performance of the process industries.
Keywords: environmental indicators, optimization, risk, supply chain
Procedia PDF Downloads 351
2345 Characterization of Himalayan Phyllite with Reference to Foliation Planes
Authors: Divyanshoo Singh, Hemant Kumar Singh, Kumar Nilankar
Abstract:
Major engineering constructions and foundations (e.g., dams, tunnels, bridges, underground caverns, etc.) in and around the Himalayan region of Uttarakhand are not only confined to hard and crystalline rocks but also extend into weak and anisotropic rocks. While constructing within such anisotropic rocks, engineers often encounter geotechnical complications such as structural instability, slope failure, and excessive deformation. These complexities arise mainly due to inherent anisotropy, such as layering/foliation, preferred mineral orientation, and geo-mechanical anisotropy, present within the rocks, which varies when measured in different directions. Of all the forms of inherent anisotropy present within rocks, major geotechnical complexities arise mainly from the unfavourable orientation of weak planes (bedding/foliation). The orientation of such weak planes thus highly affects the fracture patterns, failure mechanism, and strength of rocks. This has led to an improved understanding of the physico-mechanical behavior of anisotropic rocks with different orientations of weak planes. Therefore, in this study, block samples of phyllite belonging to the Chandpur Group of the Lesser Himalaya were collected from the Srinagar area of Uttarakhand, India, to investigate the effect of foliation angle on the physico-mechanical properties of the rock. The collected block samples were core-drilled to a diameter of 50 mm at different foliation angles β (the angle between the foliation plane and the drilling direction), i.e., 0°, 30°, 60°, and 90°, respectively. Before testing, the drilled core samples were oven-dried at 110°C to achieve uniformity. Physical and mechanical properties such as seismic wave velocity, density, uniaxial compressive strength (UCS), point load strength (PLS), and Brazilian tensile strength (BTS) were determined on the prepared core specimens. The results indicate that the seismic wave velocities (P-wave and S-wave) decrease with increasing β angle. As the β angle increases, the number of foliation planes that the wave needs to pass through increases, causing dissipation of wave energy with increasing β. Maximum strength for UCS, PLS, and BTS was found at a β angle of 90°. However, minimum strength for UCS and BTS was found at a β angle of 30°, which differs from PLS, where minimum strength was found at a 0° β angle. Furthermore, the failure modes also correspond to the strength of the rock, with along-foliation and non-central failure as characteristics of low strength values, and multiple fractures and central failure as characteristics of high strength values. Thus, this study will provide a better understanding of the anisotropic features of phyllite for the purpose of major engineering constructions and foundations within the Himalayan region.
Keywords: anisotropic rocks, foliation angle, physico-mechanical properties, phyllite, Himalayan region
Procedia PDF Downloads 59
2344 Production Plan and Technological Variants Optimization by Goal Programming Methods
Authors: Tunjo Perić, Franjo Bratić
Abstract:
In this paper, the goal programming methodology is applied to solving the multiple-objective problem of technological variant and production plan optimization. The optimization criteria are determined, and the multiple-objective linear programming model for solving the problem of technological variant and production plan optimization is formed and solved. The obtained results are then analysed. The results point to the possibility of efficient application of the goal programming methodology in solving the problem of technological variant and production plan optimization. The paper also points out the advantages of applying the goal programming methodology compared to the Surrogate Worth Trade-off (SWT) method in solving this problem.
Keywords: goal programming, multi objective programming, production plan, SWT method, technological variants
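A compact sketch of the goal programming idea (a hypothetical two-variable production plan with invented profit and labor goals, not the paper's model): deviation variables record how far each goal is missed, and only the undesirable deviations enter the objective:

```python
# Goal programming as a linear program with deviation variables (scipy).
from scipy.optimize import linprog

# decision variables: x1, x2 (production volumes of two technological variants)
# deviations:         d1m, d1p for the profit goal; d2m, d2p for the labor goal
# goal 1: 5*x1 + 4*x2 + d1m - d1p = 200   (profit target; shortfall d1m is undesirable)
# goal 2: 2*x1 + 3*x2 + d2m - d2p = 120   (labor capacity; excess d2p is undesirable)
c = [0, 0, 1, 0, 0, 1]                    # minimize d1m + d2p
A_eq = [[5, 4, 1, -1, 0, 0],
        [2, 3, 0, 0, 1, -1]]
b_eq = [200, 120]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"plan: x1={x1:.1f}, x2={x2:.1f}  profit shortfall={d1m:.1f}  labor excess={d2p:.1f}")
```

Weighting or lexicographically ordering the deviation terms is what lets the method trade the goals off against each other.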
Procedia PDF Downloads 379
2343 Utilizing Grid Computing to Enhance Power Systems Performance
Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima
Abstract:
Power load is one of the most important controlling factors that determine power demand and illustrate power usage, thereby shaping the power market. Hence, power load forecasting is the parameter which facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with the minimum error. The aim of this paper is a developed algorithm that achieves the load forecasting application with a faster technique. The algorithm is used to enable the MATLAB power application to be implemented by multiple machines in the Grid computing system, and to accomplish it in much less time, at lower cost, and with high accuracy and quality. Grid computing, a modern distributed computing technology, has been used to enhance the performance of power applications by utilizing idle and desired Grid contributor(s) and by sharing computational power resources.
Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting
Procedia PDF Downloads 475
2342 Dual-Polarized Multi-Antenna System for Massive MIMO Cellular Communications
Authors: Naser Ojaroudi Parchin, Haleh Jahanbakhsh Basherlou, Raed A. Abd-Alhameed, Peter S. Excell
Abstract:
In this paper, a multiple-input/multiple-output (MIMO) antenna design with polarization and radiation pattern diversity is presented for future smartphones. The configuration of the design consists of four double-fed circular-ring antenna elements located at different edges of the printed circuit board (PCB), which has an FR-4 substrate and overall dimensions of 75×150 mm². The antenna elements are fed by 50-Ohm microstrip lines and provide a polarization and radiation pattern diversity function due to the orthogonal placement of their feed lines. A good impedance bandwidth (S11 ≤ -10 dB) of 3.4-3.8 GHz has been obtained for the smartphone antenna array; for S11 ≤ -6 dB, this range is 3.25-3.95 GHz. More than 3 dB realized gain and 80% total efficiency are achieved for the single-element radiator. The presented design not only provides the required radiation coverage but also generates the polarization diversity characteristic.
Keywords: cellular communications, multiple-input/multiple-output systems, mobile-phone antenna, polarization diversity
Procedia PDF Downloads 142
2341 Particle Dust Layer Density and the Optical Wavelength Absorption Relationship in Photovoltaic Module
Authors: M. Mesrouk, A. Hadj Arab
Abstract:
This work highlights the effect of dust on the absorption of the optical spectrum in a photovoltaic module; the effect of the presence of dust particles on photovoltaic modules has been studied at a microscopic scale with COMSOL Multiphysics software simulation. In this paper, we have modelled the dust layer as a diffraction-grating-like repetitive optical structure characterized by the spacing between particles, represented by 'd', and the simulated structure (air-dust particle-glass). In this study, we observe the relationship between the wavelength and the particle spacing; the simulation shows that the maximum wavelength transmission value, λ0 = 400 nm, corresponds to the spacing value between the dust particles, d = 400 nm. In fact, we observe that as the dust layer density increases, the transmitted wavelength value decreases; there is a relationship between the density and the wavelength values that can be absorbed in a dusty photovoltaic panel.
Keywords: dust effect, photovoltaic module, spectral absorption, wavelength transmission
Procedia PDF Downloads 463
2340 Experimental and Theoretical Study of the Electric and Magnetic Fields Behavior in the Vicinity of High-Voltage Power Lines
Authors: Tourab Wafa, Nemamcha Mohamed, Babouri Abdessalem
Abstract:
This paper presents an experimental and analytical characterization of the electromagnetic environment in the medium surrounding a circuit of two 220 kV power lines running in parallel. The analysis presented in this paper is divided into two main parts. The first part concerns the experimental study of the behavior of the electric field and magnetic field generated by the selected double circuit at ground level (0 m), while the second part simulates and calculates the field profiles generated by both lines at different levels above the ground, from 0 m up to the level close to the line conductors (20 m above the ground), using the electrostatic and magnetostatic modules of the COMSOL Multiphysics software. The implications of the results are discussed and compared with the ICNIRP reference levels for occupational and non-occupational exposures.
Keywords: HV power lines, low frequency electromagnetic fields, electromagnetic compatibility, inductive and capacitive coupling, standards
Procedia PDF Downloads 474
2339 A Fluorescent Polymeric Boron Sensor
Authors: Soner Cubuk, Mirgul Kosif, M. Vezir Kahraman, Ece Kok Yetimoglu
Abstract:
Boron is an essential trace element for the completion of the life cycle of organisms. Suitable methods for the determination of boron have been proposed, including acid-base titrimetry, inductively coupled plasma emission spectroscopy, flame atomic absorption, and spectrophotometry. However, the above methods have some disadvantages, such as long analysis times, the requirement for corrosive media such as concentrated sulphuric acid, multi-step sample preparation, and time-consuming procedures. In this study, a selective and reusable fluorescent sensor for boron based on glycosyloxyethyl methacrylate was prepared by photopolymerization. The response characteristics, such as response time, pH, linear range, and limit of detection, were systematically investigated. The excitation/emission maxima of the membrane were at 378/423 nm, respectively. The approximate response time was measured as 50 sec. In addition, the sensor had a very low limit of detection of 0.3 ppb. The sensor was successfully used for the determination of boron in water samples with satisfactory results.
Keywords: boron, fluorescence, photopolymerization, polymeric sensor
Procedia PDF Downloads 283
2338 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province
Authors: Athini Ntloko
Abstract:
Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). The MTBC has members that may infect an extensive range of hosts, including wildlife. Diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. Deliberation on bTB risk factors has consequently been a major subject of study worldwide, and some studies have focused on particular groups of risk factors such as wildlife and herd management. The significance of this study was to determine the incidence of Mycobacterium tuberculosis complex associated with soil, hayfeed and water. Questionnaires were administered to thirty (30) smallholding farm owners in the two villages (kwaMasele and Qungqwala) and three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm and Seven Star dairy farm). Detection of M. tuberculosis complex was achieved by polymerase chain reaction using primers for IS6110, whereas genotypic drug resistance mutations were detected using Genotype MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows in their herd. The relationship between farm size and vaccination for TB varied from the highest value of forty-one percent (41%) to the lowest of five percent (5%). The highest number of respondents who knew about the relationship between TB cases and cattle location was ninety-one percent (91%). Approximately fifty-one percent (51%) of respondents had knowledge about wildlife access to the farms. The relationship between import of cattle and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size varied from the highest value of forty-three percent (43%) to the lowest of three percent (3%), while thirty-three percent (33%) of respondents had knowledge about health management. Respondents with knowledge about the occurrence of TB infections on farms amounted to forty-eight percent (48%). The frequency of DNA isolation from samples ranged from the highest, forty-five percent (45%) from water, to the lowest, twenty-two percent (22%) from soil. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water samples, whereas Middledrift dairy farm had the lowest positives from water, seventeen percent (17%). Twelve (22%) out of 55 isolates showed resistance to both INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. The mutations in the rpoB gene varied from the highest (58%) to the lowest (23%). Fifty-seven percent (57%) of samples showed an S315T1 mutation, while only 14% possessed an S531L mutation in the katG gene. The highest number of inhA mutations was detected in T8A (80%) and the lowest was observed in A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers are a serious issue in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug resistance determinants in MTBC from the environment.
Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water
Procedia PDF Downloads 337
2337 The Investigation of Relationship between Accounting Information and the Value of Companies
Authors: Golamhassan Ghahramani Aghdam, Pedram Bavili Tabrizi
Abstract:
The aim of this research is to investigate the relationship between accounting information and the value of companies listed on the Tehran Exchange Market. The dependent variable in this research is the value of a company, measured by price coefficients, and the independent variables are balance sheet information, profit and loss information, cash flow statement information, and profit quality characteristics. The profit quality characteristic indices used are relevance and timeliness. This is applied research, and the research population includes all companies active in the Tehran Exchange Market. A sample of 194 companies was selected by the systematic method for the period 2018-2019. A multi-variable linear regression model was used to test the hypotheses. The results show that there is no relationship between accounting information and companies' value (stock value), which may be due to the lack of efficiency of the investment market and the inability of investment market participants to use the accounting information.
Keywords: accounting information, company value, profit quality characteristics, price coefficient
Procedia PDF Downloads 139
2336 Multi-Scale Urban Spatial Evolution Analysis Based on Space Syntax: A Case Study in Modern Yangzhou, China
Authors: Dai Zhimei, Hua Chen
Abstract:
The exploration of urban spatial evolution is an important part of urban development research. Therefore, the evolution of the modern Yangzhou urban spatial texture was taken as the research object, and space syntax was used as the main research tool; this paper explores the laws of Yangzhou's spatial evolution and its driving factors at the urban street network scale, the district scale and the street scale. The study concludes that, at the urban scale, Yangzhou's urban spatial evolution is the result of a variety of causes, including physical and geographical conditions, policy and planning factors, and traffic conditions, and that the evolution of space in turn has an impact on social, economic, environmental and cultural factors. At the district and street scales, changes in space have a profound influence on the history of the city and the activities of people. At the end of the article, the matters needing attention during the evolution of urban space are summarized.
Keywords: block, space syntax and methodology, street, urban space, Yangzhou
Procedia PDF Downloads 181
2335 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources; according to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is rather limited in the literature. There are many reasons for this, starting from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the interpretation of the measured optical data, to the many analytical and methodological difficulties regarding the in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most topical issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements made on filter-accumulated aerosol, or the absorption is deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling. In the former, accuracy, and in the latter, sensitivity limits the applicability of the approach. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the investigation of the inherent spectral features of laser-generated and chemically characterized residential coal aerosols is demonstrated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
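For reference, the Ångström exponents quantifying that wavelength dependency are obtained from a power-law fit of the coefficients versus wavelength; a short sketch (the wavelengths and coefficient values below are placeholders, not measured data from this study):

```python
# Absorption and scattering Angstrom exponents (AAE, SAE) from a power-law fit.
import numpy as np

wavelengths = np.array([355.0, 405.0, 532.0, 1064.0])   # nm, placeholder values
b_abs = np.array([42.0, 33.0, 21.0, 8.0])                # absorption coefficients (Mm^-1)
b_sca = np.array([120.0, 100.0, 70.0, 30.0])             # scattering coefficients (Mm^-1)

def angstrom_exponent(wl, coeff):
    # power-law model coeff ~ wl^(-alpha): alpha is minus the log-log slope
    slope, _ = np.polyfit(np.log(wl), np.log(coeff), 1)
    return -slope

print("AAE:", angstrom_exponent(wavelengths, b_abs))
print("SAE:", angstrom_exponent(wavelengths, b_sca))
```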
Procedia PDF Downloads 361
2334 Quantitative Comparisons of Different Approaches for Rotor Identification
Authors: Elizabeth M. Annoni, Elena G. Tolkacheva
Abstract:
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point and that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches, Shannon entropy (SE), kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE), to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which is not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used, showing 3-sec episodes of a single stationary rotor and of figure-8 reentry with one stationary and one meandering rotor. Movies were captured at a rate of 600 frames per second for 3 sec with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of the SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different recording durations, different spatial resolutions, and the presence of meandering rotors. To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the "true" rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affect the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
Keywords: Atrial Fibrillation, Optical Mapping, Signal Processing, Rotors
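A conceptual sketch of two of these per-pixel measures (synthetic data in place of the optical-mapping movies; the bin count and the argmax rule are illustrative choices): compute the Shannon entropy and kurtosis of every pixel's time series and take the extremum as a candidate pivot location:

```python
# Per-pixel Shannon entropy and kurtosis maps over a (frames x H x W) movie.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
movie = rng.random((1800, 64, 64))          # 3 s at 600 fps, 64x64 (placeholder data)

def shannon_entropy(ts, bins=32):
    hist, _ = np.histogram(ts, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

entropy_map = np.apply_along_axis(shannon_entropy, 0, movie)    # 64 x 64
kurtosis_map = kurtosis(movie, axis=0)                          # 64 x 64

pivot_se = np.unravel_index(np.argmax(entropy_map), entropy_map.shape)
pivot_kt = np.unravel_index(np.argmax(kurtosis_map), kurtosis_map.shape)
print("SE candidate pivot:", pivot_se, " Kt candidate pivot:", pivot_kt)
```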
Procedia PDF Downloads 324