Search results for: multi and inter-disciplinary
2396 Applying Art Integration on Teaching Quality Assurance for Early Childhood Art Education
Authors: Shih Meng-Chi, Nai-Chia Chao
Abstract:
The study constructed an arts-integrative curriculum that introduces early childhood educators and kindergarten teachers to the possibilities of the art integration method. The curriculum combines and integrates various elements of music, observation, sound, art, instruments, and creation. The program consists of college courses that combine the use of technology with children’s literature, multimedia, music, dance, and drama presentation. This educational program is used in kindergartens during pre-service kindergarten teacher training. The study found that the arts-integrated curriculum was beneficial for connecting across domains, multi-sensory experiences, teaching skills, and implementation and creation in children's art education. Art-integrated instruction helped to provide students with an understanding of the whole framework and improved the teaching quality.
Keywords: art integration, teaching quality assurance, early childhood education, arts integrated curriculum
Procedia PDF Downloads 595
2395 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival
Authors: Ichiro Takahashi
Abstract:
One conundrum that macroeconomic theory faces is to explain how an economy can revive from depression, in which the aggregate demand has fallen substantially below its productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely, collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment results in insufficient productive capacity. However, for an economy characterized by a roundabout production method, a gradually declining productive capacity may not be able to fall below the aggregate demand, which is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide such buoyancy for stimulating investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation period means that once a firm starts a new investment, it continues to invest over some subsequent periods. During these gestation periods, the excess demand created by the investing firm spills over and ignites new investment by other firms that supply investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, if the gestation period of investment is short, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on the aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but the actual labor demand tends to be determined by the derived labor demand, so the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is a key to stability. A single firm cannot expect the benefit of such an increased aggregate demand from other firms.
Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability
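A toy sketch of mechanism (a), assuming invented parameter values (firm count, gestation length, depreciation rate and investment-trigger rule); it is not the authors' Wicksell-Keynes model, only an illustration of how gestation-period spending can spill over into other firms' investment decisions:

```python
import random

# Toy spillover sketch: a firm that starts a project keeps spending for
# GESTATION periods, and that spending shows up as demand that can trigger
# projects at other firms. All numbers are invented for illustration.
N_FIRMS, GESTATION, T = 50, 4, 200
capital = [10.0] * N_FIRMS
pipelines = [[] for _ in range(N_FIRMS)]  # remaining periods of active projects

for t in range(T):
    spending = sum(len(p) for p in pipelines)       # gestation-period outlays
    demand_per_firm = spending / N_FIRMS + 1.0      # plus autonomous demand
    for i in range(N_FIRMS):
        capital[i] *= 0.97                          # short-lived capital stock
        pipelines[i] = [r - 1 for r in pipelines[i] if r > 1]
        # Heterogeneous trigger: invest when demand outruns own capacity.
        if demand_per_firm > capital[i] * random.uniform(0.08, 0.12):
            pipelines[i].append(GESTATION)          # start a multi-period project
            capital[i] += 1.0                       # capacity added (simplified:
                                                    # delivered immediately)
    if t % 50 == 0:
        print(t, round(sum(capital) / N_FIRMS, 2), spending)
```

With GESTATION set to 1, the spillover term dies out after a single period, mirroring the abstract's observation that short gestation prevents a full-fledged boom.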
Procedia PDF Downloads 162
2394 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States
Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala
Abstract:
Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as a pillar of their communities and how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that for organizations and businesses, resilience is a set of processes that are constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that together contribute to the resilience of disaster-struck organizations, businesses, and their communities. Using field research about organizations and businesses impacted by the four hurricanes, we coded data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute to effectively sustain organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations and the people who lead them worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.
Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work
Procedia PDF Downloads 171
2393 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover details of the scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) an image edge-strengthening gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The haze-free output image is then reconstructed using the fusion methodology. In order to increase the accuracy, an interpolation method is used in the output reconstruction. Promising retrieval performance is achieved, especially in particular examples.
Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
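A minimal sketch of the per-pixel weight-map fusion idea described above, assuming two derived inputs (a contrast-enhanced image and an unsharp-masked, edge-strengthened image) blended with local-contrast weights; the filename, weight choice and parameters are placeholders, not the authors' exact pipeline:

```python
import cv2
import numpy as np

# Per-pixel weight-map fusion of two derived inputs of a foggy image.
img = cv2.imread("foggy.jpg").astype(np.float32) / 255.0  # placeholder file

# Input 1: adaptive histogram equalization on the luminance channel.
lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab[..., 0] = clahe.apply(lab[..., 0])
inp1 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0

# Input 2: edge-strengthened version via unsharp masking.
blur = cv2.GaussianBlur(img, (0, 0), 3)
inp2 = np.clip(img + 1.5 * (img - blur), 0, 1)

def weight(x):
    # Simple per-pixel weight map: smoothed local contrast (Laplacian magnitude).
    gray = cv2.cvtColor((x * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    w = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)) + 1e-3
    return cv2.GaussianBlur(w, (0, 0), 2)[..., None]

w1, w2 = weight(inp1), weight(inp2)
fused = (w1 * inp1 + w2 * inp2) / (w1 + w2)       # normalized per-pixel blend
cv2.imwrite("dehazed.jpg", (fused * 255).astype(np.uint8))
```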
Procedia PDF Downloads 464
2392 Examining Kokugaku as a Pattern of Defining Identity in Global Comparison
Authors: Mária Ildikó Farkas
Abstract:
Kokugaku of the Edo period can be seen as a key factor in defining cultural (and national) identity in the 18th and early 19th centuries on the basis of Japanese cultural heritage. Kokugaku focused on Japanese classics, on exploring, studying and reviving (or even inventing) ancient Japanese language, literature, myths, history and also political ideology. 'Japanese culture' as such was distinguished from Chinese (and all other) cultures, and 'Japanese identity' was thus defined. Meiji scholars used kokugaku conceptions of Japan to construct a modern national identity based on premodern and culturalist conceptions of community. The Japanese cultural movement of the 18th-19th centuries (kokugaku) of defining cultural and national identity before modernization can be compared not with the development of Western Europe (where national identity was strongly attached to modern nation states) or other parts of Asia (where these identities emerged after Western colonization), but rather with the 'national awakening' movements of the peoples of East Central Europe, a comparison which has not yet been dealt with in the secondary literature. The role of a common language, culture, history and myths in the process of defining cultural identity (following mainly Miroslav Hroch's comparative and interdisciplinary theory of national development) can be examined in comparison with the identity-defining movements of the peoples of East Central Europe (18th-19th c.). In the shadow of a cultural and/or political 'monolith' (China for Japan, Germany for Central Europe), before modernity, ethnic groups or communities started to evolve their own identities through cultural movements focusing on their own language and culture, thus creating their cultural identity and, in the end, a new sense of community, the nation. Comparing the actual texts ('narratives') of the kokugaku scholars and of Central European writers of the nation-building period (18th and early 19th centuries) can reveal the similarities of the discourses of these deliberate searches for identity. Similar motives of argument can be identified in these narratives: 'language' as the primary bearer of collective identity, the role of language in culture, 'culture' as the main common attribute of the community, and similar aspirations to explore, search for and develop the native language, 'genuine' culture and 'original' traditions. This comparative research, offering 'development patterns' for interpretation, can help us understand processes that may be ambiguously considered 'backward' or even 'deleterious' (e.g. cultural nationalism), or just 'unique'. 'Cultural identity' played a very important role in the formation of national identity during modernization, especially in the case of non-Western communities, which had to face the danger of losing their identities in the course of the 'Westernization' accompanying modernization.
Keywords: cultural identity, Japanese modernization, kokugaku, national awakening
Procedia PDF Downloads 271
2391 How to Perform Proper Indexing?
Authors: Watheq Mansour, Waleed Bin Owais, Mohammad Basheer Kotit, Khaled Khan
Abstract:
Efficient query processing is one of the utmost requisites in any business environment to satisfy consumer needs. This paper investigates the various types of indexing models, viz. primary, secondary, and multi-level. The investigation is done under the ambit of the various types of queries for which each indexing model performs with efficacy. This study also discusses the inherent advantages and disadvantages of each indexing model and how an indexing model can be chosen based on a particular environment. The paper also draws parallels between the various indexing models and provides recommendations that would help a database administrator to zero in on a particular indexing model suited to the needs and requirements of the production environment. In addition, to satisfy the industry and consumer needs arising from today's colossal data generation, this study proposes two novel indexing techniques that can be used to index highly unstructured and structured Big Data with efficacy. The study also briefly discusses some best practices that the industry should follow in order to choose an indexing model that is apposite to their prerequisites and requirements.
Keywords: indexing, hashing, latent semantic indexing, B-tree
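A minimal sketch of a two-level (multi-level) sparse index over a sorted file, with invented record keys and block size, showing why inner index levels keep lookups shallow; it is not one of the paper's proposed Big Data techniques:

```python
import bisect

# Sorted "file" of (key, row) records; keys and block size are invented.
records = sorted((k, f"row-{k}") for k in range(0, 10_000, 7))
BLOCK = 100

# Level 1: first key of every block of records (sparse primary index).
level1 = [records[i][0] for i in range(0, len(records), BLOCK)]
# Level 2: first key of every block of level-1 entries (outer index level).
level2 = [level1[i] for i in range(0, len(level1), BLOCK)]

def lookup(key):
    i2 = bisect.bisect_right(level2, key) - 1           # outer index probe
    i1 = bisect.bisect_right(level1, key, lo=max(i2, 0) * BLOCK) - 1
    lo = max(i1, 0) * BLOCK
    for k, row in records[lo:lo + BLOCK]:               # scan one data block
        if k == key:
            return row
    return None

print(lookup(7007))   # -> 'row-7007' after two index probes and one block scan
```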
Procedia PDF Downloads 156
2390 Sum Capacity with Regularized Channel Inversion in Multi-Antenna Downlink Systems under Equal Power Constraint
Authors: Attaullah Khawaja, Amna Shabbir
Abstract:
Channel inversion is one of the simplest techniques for multiuser downlink systems with single-antenna users. In this paper, regularized channel inversion under an equal power constraint in multiuser multiple-input multiple-output (MU-MIMO) broadcast channels is considered. Sum capacity with plain channel inversion, also known as zero-forcing beamforming (ZFBF), and the optimum sum capacity using dirty paper coding (DPC) are also investigated. Analysis and simulations show that regularization enhances system performance, enables linear growth in sum capacity, and works especially well in the low signal-to-noise ratio (SNR) regime.
Keywords: broadcast channel, channel inversion, multiple antenna multiple-user wireless, multiple-input multiple-output (MIMO), regularization, dirty paper coding (DPC), sum capacity
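A minimal numpy sketch of regularized channel inversion for a K-user broadcast channel, using the classic precoder W = H^H (H H^H + alpha I)^-1 with alpha = K/SNR; the dimensions and SNR are illustrative, and the normalization is a simple total-power choice rather than the paper's exact constraint:

```python
import numpy as np

K, M, snr = 4, 4, 10.0              # users, transmit antennas, linear SNR
H = (np.random.randn(K, M) + 1j * np.random.randn(K, M)) / np.sqrt(2)

def rci_precoder(H, alpha):
    # Regularized channel inversion: W = H^H (H H^H + alpha*I)^-1.
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

for name, alpha in [("plain inversion (ZFBF)", 0.0), ("regularized", K / snr)]:
    W = rci_precoder(H, alpha + 1e-12)   # tiny load keeps the ZF case invertible
    W /= np.linalg.norm(W)               # total transmit-power normalization
    G = H @ W                            # effective user-by-user channel
    sig = np.abs(np.diag(G)) ** 2
    intf = np.sum(np.abs(G) ** 2, axis=1) - sig
    sinr = snr * sig / (1.0 + snr * intf)
    print(name, "sum rate =", np.sum(np.log2(1 + sinr)).round(2))
```

Setting alpha = 0 recovers plain channel inversion (ZFBF); the alpha = K/SNR loading trades a small amount of residual interference for a much better-conditioned inverse at low SNR.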
Procedia PDF Downloads 527
2389 The Multi-Lingual Acquisition Patterns of Elementary, High School and College Students in Angeles City, Philippines
Authors: Dennis Infante, Leonora Yambao
Abstract:
The Philippines is a multilingual community. A Filipino learns at least three languages throughout his or her lifespan. Since languages are learned and picked up simultaneously in the environment, a student naturally develops a language system that combines features of at least three languages: the local language, English, and Filipino. This study investigates this particular phenomenon and aspires to propose a theoretical framework of this unique language acquisition among elementary, high school, and college students in the three languages spoken and used in media, community, business, and school: Kapampangan, the local language; Filipino, the national language; and English. The study randomly selected five students from three participating schools in order to acquire language samples. The samples were analyzed at the subsentential, sentential, and suprasentential levels using grammatical theories. The data are classified to map out the pattern of substitution or shifting from one language to another.
Keywords: language acquisition, mother tongue, multiculturalism, multilingual education
Procedia PDF Downloads 380
2388 Artificial Intelligence Methods in Estimating the Minimum Miscibility Pressure Required for Gas Flooding
Authors: Emad A. Mohammed
Abstract:
Utilizing the capabilities of data mining and artificial intelligence to predict the minimum miscibility pressure (MMP) required for multi-contact miscible (MCM) displacement of reservoir petroleum by hydrocarbon gas flooding, using fuzzy logic models and artificial neural network models, helps greatly in producing accurate results. The factors affecting the MMP, as established in the literature and in the dataset, are as follows: XC2-6, the intermediate composition of the oil (C2-6, CO2 and H2S), in mole %; XC1, the amount of methane in the oil (%); T, temperature (°C); MwC7+, molecular weight of C7+ (g/mol); YC2+, mole percent of the C2+ composition in the injected gas (%); and MwC2+, molecular weight of C2+ in the injected gas. Fuzzy logic and neural networks have been used widely in prediction and classification, with relatively high accuracy, in different fields of study. It is well known that a fuzzy inference system can handle uncertainty within the inputs, such as in our case. The results of this work showed that our proposed models perform better, with higher performance indices, than other empirical correlations.
Keywords: MMP, gas flooding, artificial intelligence, correlation
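A minimal sketch of the neural network half of such an approach, training a small MLP regressor on the six inputs listed above; the dataset and the target relation are randomly generated stand-ins, not the study's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data over plausible ranges for the six listed inputs:
# XC2-6, XC1, T, MwC7+, YC2+, MwC2+ (columns in that order).
rng = np.random.default_rng(0)
X = rng.uniform(low=[10, 5, 40, 150, 0, 30],
                high=[50, 40, 120, 300, 40, 60],
                size=(200, 6))
# Invented MMP-like target with noise, purely for demonstration.
y = 5 + 0.2 * X[:, 2] + 0.05 * X[:, 3] - 0.1 * X[:, 0] + rng.normal(0, 1, 200)

model = make_pipeline(
    StandardScaler(),   # scale inputs so the MLP trains stably
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("predicted MMP (arbitrary units):", model.predict(X[:3]).round(2))
```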
Procedia PDF Downloads 144
2387 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction
Authors: Zhengrong Wu, Haibo Yang
Abstract:
In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Using the P-Tune method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as knowledge-base support for disaster emergency response.
Keywords: large language model, knowledge graph, disaster, deep learning
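A hedged sketch of the downstream graph-construction step: (head, relation, tail) triples, in the study produced by the fine-tuned ChatGLM2-6B answering question templates, are loaded into a graph and queried. Here `extract_triples` is a hypothetical placeholder returning hard-coded triples rather than a real model call:

```python
import networkx as nx

def extract_triples(report_text):
    # Hypothetical stand-in; a real implementation would prompt the
    # fine-tuned model and parse its answers into triples.
    return [
        ("Typhoon Lekima", "made_landfall_in", "Zhejiang"),
        ("Typhoon Lekima", "caused", "flooding"),
        ("flooding", "damaged", "transport infrastructure"),
    ]

kg = nx.MultiDiGraph()
for head, rel, tail in extract_triples("...disaster report text..."):
    kg.add_edge(head, tail, relation=rel)   # one edge per extracted triple

# Simple emergency-response query: everything a hazard directly caused.
print([t for _, t, d in kg.out_edges("Typhoon Lekima", data=True)
       if d["relation"] == "caused"])
```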
Procedia PDF Downloads 56
2386 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8, 42.4). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility of the test in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
Keywords: biomarker, diagnostic, neurology, TBI
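The reported clinical performance can be re-derived directly from the confusion counts stated in the abstract; a short check:

```python
# Counts taken directly from the abstract above.
tp, fn = 116, 4        # CT-positive subjects: TBI-test positive / negative
tn = 713               # CT-negative subjects with a negative TBI result
fp = 1779 - tn         # remaining CT-negative subjects (1066)

sensitivity = tp / (tp + fn)     # 116/120
specificity = tn / (tn + fp)     # 713/1779
npv = tn / (tn + fn)             # 713/717

print(f"sensitivity = {sensitivity:.1%}")   # 96.7%
print(f"specificity = {specificity:.1%}")   # 40.1%
print(f"NPV         = {npv:.1%}")           # 99.4%
```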
Procedia PDF Downloads 66
2385 Core Number Optimization Based Scheduler to Order/Mapp Simulink Application
Authors: Asma Rebaya, Imen Amari, Kaouther Gasmi, Salem Hasnaoui
Abstract:
Over the last years, the number of cores in digital signal processors and general-purpose processors has increased spectacularly. Concurrently, significant research has been done to benefit from this high degree of parallelism; indeed, this research focuses on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our proposal's results are evaluated and compared with the Preesm scheduler's results, and we show that ours allows better scheduling in terms of latency, computation time, and number of cores.
Keywords: computation time, hardware/software system, latency, optimization, multi-cores platform, scheduling
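A minimal sketch of the general idea (not the paper's algorithm): greedy list scheduling over an invented task graph, with the core count chosen as the smallest one that minimizes latency:

```python
tasks = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 1}        # task -> duration
deps = {"C": ["A"], "D": ["A", "B"], "E": ["C", "D"]}   # task -> predecessors

def schedule(n_cores):
    finish, core_free = {}, [0.0] * n_cores
    while len(finish) < len(tasks):
        ready = [t for t in tasks if t not in finish
                 and all(p in finish for p in deps.get(t, []))]
        t = max(ready, key=lambda t: tasks[t])           # longest-task-first
        est = max((finish[p] for p in deps.get(t, [])), default=0.0)
        c = min(range(n_cores), key=lambda c: max(core_free[c], est))
        core_free[c] = finish[t] = max(core_free[c], est) + tasks[t]
    return max(finish.values())                          # schedule latency

# Smallest core count achieving the minimum latency (ties go to fewer cores).
best = min(range(1, len(tasks) + 1), key=schedule)
print("cores:", best, "latency:", schedule(best))        # cores: 2 latency: 8.0
```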
Procedia PDF Downloads 283
2384 Reconceptualising the Voice of Children in Child Protection
Authors: Sharon Jackson, Lynn Kelly
Abstract:
This paper proposes a conceptual review of the interdisciplinary literature that has theorised the concept of 'children's voices'. The primary aim is to identify and consider the theoretical relevance of conceptual thought on 'children's voices' for research and practice in child protection contexts. Attending to the 'voice of the child' has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy and practice frameworks of child protection practices within the UK and internationally. Voice is positioned within a 'child-centred' moral imperative to 'hear the voices' of children and take their preferences and perspectives into account. This practice is now considered central to working in a child-centred way. The genesis of this call to voice is revealed through sociological analysis of twentieth-century child welfare reform as rooted, inter alia, in intersecting political, social and cultural discourses which have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the convention. From a policy and practice perspective, the professional 'capturing' of children's voices has come to saturate child protection practice. This has incited a stream of directives, resources, advisory publications and 'how-to' guides which attempt to articulate practice methods to 'listen', 'hear' and, above all, 'capture' the 'voice of the child'. The idiom 'capturing the voice of the child' is frequently invoked within the literature to express the requirements of the child-centred practice task to be accomplished. Despite the centrality of voice, and an obsession with 'capturing' voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of 'the voice of the child'. Notable research studies have highlighted the relative absence of the child's voice in social work assessment practices, a troubling lack of meaningful engagement with children, and the need to examine communicative practices in child protection contexts more thoroughly. As a consequence, the project of capturing 'the voice of the child' has intensified, and there has been an increasing focus on developing methods and professional skills to attend to voice. This has been guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with 'capturing' and [re]representing 'voice' in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of 'voice' in the 'voice of the child'. For the most part, 'the voice of the child' incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position 'the voice of the child' within an agentic paradigm to emphasise the active agency of the child.
Keywords: child-centered, child protection, views of the child, voice of the child
Procedia PDF Downloads 136
2383 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine tools is becoming critical for machine-tool builders and end-users for economic, ecological and legislative reasons. Many machine-tool builders are seeking solutions that reduce the energy consumption of machine tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters: the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, i.e. objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, and the other uses Lasso regression to determine the same relation. The goal, then, is to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes: determining the optimal machining process parameters that lead to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe was used to carry out the experiments. A mechanical part including various Swiss-type machining operations was selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand, each CNC program considering a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction of each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficients, at 97%. The 'Material Removal Rate' (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
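A sketch of the two deterministic fitness functions and the correlation-based evaluation described above; the Kienzle constants, parameter ranges and "measured" power below are invented for illustration and are not the project's calibration:

```python
import numpy as np

def kienzle_power(ap_mm, f_mm_rev, vc_m_min, kc11=1500.0, mc=0.25):
    # Kienzle model: Fc = kc1.1 * b * h^(1-mc), simplified with b = ap, h = f.
    fc = kc11 * ap_mm * f_mm_rev ** (1.0 - mc)       # cutting force [N]
    return fc * vc_m_min / 60_000.0                  # cutting power [kW]

def mrr(ap_mm, f_mm_rev, vc_m_min):
    # Material removal rate for turning [cm^3/min].
    return ap_mm * f_mm_rev * vc_m_min

# Synthetic "experiments": random parameter sets and noisy spindle power.
rng = np.random.default_rng(1)
ap = rng.uniform(0.5, 3.0, 50)
f = rng.uniform(0.05, 0.3, 50)
vc = rng.uniform(80, 200, 50)
measured = kienzle_power(ap, f, vc) * rng.normal(1.0, 0.1, 50)

# Correlation between normalized prediction and normalized measurement.
for name, pred in [("Kienzle", kienzle_power(ap, f, vc)), ("MRR", mrr(ap, f, vc))]:
    pred_n = (pred - pred.mean()) / pred.std()
    meas_n = (measured - measured.mean()) / measured.std()
    print(name, "correlation:", np.corrcoef(pred_n, meas_n)[0, 1].round(3))
```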
Procedia PDF Downloads 147
2382 The Fibonacci Network: A Simple Alternative for Positional Encoding
Authors: Yair Bleiberg, Michael Werman
Abstract:
Coordinate-based multi-layer perceptrons (MLPs) are known to have difficulty reconstructing the high frequencies of the training data. A common solution to this problem is positional encoding (PE), which has become quite popular. However, PE has drawbacks: it has high-frequency artifacts and adds another hyperparameter, just like batch normalization and dropout do. We believe that under certain circumstances, PE is not necessary, and a smarter construction of the network architecture together with a smart training method is sufficient to achieve similar results. In this paper, we show that very simple MLPs can quite easily output a frequency when given inputs at half and a quarter of that frequency. Using this, we design a network architecture in blocks, where the input to each block is the output of the two previous blocks along with the original input. We call this a Fibonacci network. By training each block on the corresponding frequencies of the signal, we show that Fibonacci networks can reconstruct arbitrarily high frequencies.
Keywords: neural networks, positional encoding, high-frequency interpolation, fully connected
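A minimal PyTorch sketch of the wiring described above (each block sees the coordinate input plus the outputs of the two preceding blocks); the widths and depth are invented, and the per-block frequency-wise training scheme is omitted:

```python
import torch
import torch.nn as nn

class FibBlock(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

class FibonacciNetwork(nn.Module):
    """Block i receives the raw coordinates concatenated with the outputs
    of blocks i-1 and i-2, echoing the Fibonacci-style recurrence."""
    def __init__(self, coord_dim=1, feat=32, hidden=64, n_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList(
            FibBlock(coord_dim + feat * min(i, 2), hidden, feat)
            for i in range(n_blocks))
        self.head = nn.Linear(feat, 1)
    def forward(self, x):
        outs = []
        for blk in self.blocks:
            inp = torch.cat([x] + outs[-2:], dim=-1)  # up to two prev outputs
            outs.append(blk(inp))
        return self.head(outs[-1])

net = FibonacciNetwork()
y = net(torch.linspace(-1, 1, 128).unsqueeze(-1))     # 128 1-D coordinates
```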
Procedia PDF Downloads 98
2381 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks
Authors: Sunmyeng Kim
Abstract:
The IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases the WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a high-rate relay node than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. These protocols are therefore not very suitable in multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on both the transmission rate and the channel contention level. A performance evaluation conducted using simulation shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
Keywords: cooperative communications, MAC protocol, relay node, WLAN
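A sketch of the ingredients of such a selection rule: the effective two-hop rate through a relay, discounted by the relay's contention level. The linear discount and the candidate values are invented for illustration and may differ from the paper's exact metric:

```python
def effective_rate(r_sr, r_rd):
    # Time to move one bit via the relay is 1/r_sr + 1/r_rd,
    # so the effective two-hop rate is the harmonic combination.
    return 1.0 / (1.0 / r_sr + 1.0 / r_rd)

def relay_score(r_sr, r_rd, contention):
    # contention in [0, 1): fraction of time the relay's channel is busy.
    return effective_rate(r_sr, r_rd) * (1.0 - contention)

candidates = {                 # relay -> (rate S->R, rate R->D, contention)
    "R1": (54.0, 54.0, 0.6),   # fast links but heavily contended channel
    "R2": (36.0, 48.0, 0.1),   # slower links, nearly idle channel
}
direct_rate = 11.0             # low-rate direct S->D link (Mb/s)

best = max(candidates, key=lambda r: relay_score(*candidates[r]))
use_relay = relay_score(*candidates[best]) > direct_rate
print(best, use_relay)         # -> R2 True
```

Note that a rate-only rule would pick R1 (effective rate 27 vs. 20.6 Mb/s), whereas the contention-aware score prefers the lightly loaded R2, which is the kind of multi-flow effect the proposed algorithm accounts for.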
Procedia PDF Downloads 332
2380 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems
Authors: Jason Digalakis, John Cotronis
Abstract:
Memetic algorithms (MAs) are useful for solving optimization problems, but searching a high-dimensional search space is quite difficult, and using all the cores of a system is a challenge. In this study, a sequential implementation of the memetic algorithm is converted into a concurrent version, which is executed on the cores of both the CPU and the GPU. For this purpose, the OpenMP and CUDA libraries are used to run the parallel algorithm concurrently on the CPU and the GPU, respectively. The aim of this study is to compare CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach achieves speedups of up to five thousand times compared to a single CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks.
Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems
Procedia PDF Downloads 101
2379 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but are computationally very intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but they require fewer computational resources and less time. Asymptotic techniques can thus be very valuable for the prediction of the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models, instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate the simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles, PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on the incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
Procedia PDF Downloads 258
2378 Planning a Supply Chain with Risk and Environmental Objectives
Authors: Ghanima Al-Sharrah, Haitham M. Lababidi, Yusuf I. Ali
Abstract:
The main objective of the current work is to introduce sustainability factors into the optimization of a supply chain model for the process industries. Supply chain models are normally based on purely economic considerations related to costs and profits. To account for sustainability, two additional factors have been introduced: environment and risk. A supply chain for an entire petroleum organization has been considered for implementing and testing the proposed optimization models. The environmental and risk factors were introduced as indicators reflecting the anticipated impact of the optimal production scenarios on sustainability. The aggregation method used in extending the single-objective function to a multi-objective function proves quite effective in balancing the contribution of each objective term. The results indicate that introducing the sustainability factors slightly reduces the economic benefit while improving the environmental and risk-reduction performance of the process industries.
Keywords: environmental indicators, optimization, risk, supply chain
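A minimal sketch of the aggregation idea: each objective is normalized to [0, 1] over the candidate production scenarios and then combined with weights, so no single term dominates. The scenarios, values and weights below are invented placeholders:

```python
scenarios = {            # scenario -> (profit $M, emissions kt, risk index)
    "base":  (120.0, 45.0, 0.30),
    "clean": (105.0, 28.0, 0.22),
    "lean":  (112.0, 38.0, 0.18),
}
w_profit, w_env, w_risk = 0.5, 0.3, 0.2

def normalize(values, maximize):
    # Min-max normalization; flip direction for objectives to minimize.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if maximize else (hi - v) / (hi - lo)
            for v in values]

names = list(scenarios)
p, e, r = zip(*scenarios.values())
scores = [w_profit * a + w_env * b + w_risk * c
          for a, b, c in zip(normalize(p, True),
                             normalize(e, False),
                             normalize(r, False))]
print(max(zip(scores, names)))   # best-balanced scenario
```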
Procedia PDF Downloads 351
2377 Production Plan and Technological Variants Optimization by Goal Programming Methods
Authors: Tunjo Perić, Franjo Bratić
Abstract:
In this paper, the goal programming methodology is applied to the multiple-objective problem of optimizing technological variants and the production plan. The optimization criteria are determined, the multiple-objective linear programming model for the problem is formulated and solved, and the obtained results are analysed. The results point to the possibility of efficiently applying the goal programming methodology to the optimization of technological variants and production plans. The paper also points out the advantages of the goal programming methodology compared to the Surrogate Worth Trade-off (SWT) method for this problem.
Keywords: goal programming, multi objective programming, production plan, SWT method, technological variants
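A minimal goal programming sketch with scipy, assuming an invented two-product example (not the paper's model): each goal gets deviation variables, and the weighted under-achievement is minimized subject to a hard resource constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Decision vars: x1, x2, then deviations (d1m, d1p, d2m, d2p).
# Goal 1 (profit):  3*x1 + 5*x2 + d1m - d1p = 40
# Goal 2 (output):  x1 + x2 + d2m - d2p = 10
# Hard constraint (machine hours): x1 + 2*x2 <= 12
# Objective: minimize weighted under-achievement of the two goals.
c = np.array([0, 0, 5, 0, 1, 0])          # profit shortfall weighted 5x
A_eq = np.array([[3, 5, 1, -1, 0, 0],
                 [1, 1, 0, 0, 1, -1]])
b_eq = np.array([40, 10])
A_ub = np.array([[1, 2, 0, 0, 0, 0]])
b_ub = np.array([12])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
x1, x2 = res.x[:2]
print(f"plan: x1={x1:.2f}, x2={x2:.2f}, deviations={res.x[2:].round(2)}")
```

Because both goals cannot be met within the 12 machine hours, the solver trades off the deviations according to the weights; raising the output weight instead would shift the plan toward the production goal.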
Procedia PDF Downloads 379
2376 Utilizing Grid Computing to Enhance Power Systems Performance
Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima
Abstract:
Power load is one of the most important controlling variables: it determines power demand and illustrates power usage, shaping the power market. Hence, power load forecasting is the process that facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load, to find an accurate simulated solution with minimum error. The aim of this paper is a developed algorithm that achieves the load forecasting application with a faster technique. The algorithm enables the MATLAB power application to be executed by multiple machines in a Grid computing system and thus to be accomplished in much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern distributed computing technology, has been used to enhance the performance of power applications by utilizing idle and desired Grid contributor(s) that share computational power resources.
Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting
Procedia PDF Downloads 475
2375 Dual-Polarized Multi-Antenna System for Massive MIMO Cellular Communications
Authors: Naser Ojaroudi Parchin, Haleh Jahanbakhsh Basherlou, Raed A. Abd-Alhameed, Peter S. Excell
Abstract:
In this paper, a multiple-input/multiple-output (MIMO) antenna design with polarization and radiation pattern diversity is presented for future smartphones. The configuration of the design consists of four double-fed circular-ring antenna elements located at different edges of the printed circuit board (PCB), with an FR-4 substrate and overall dimensions of 75×150 mm². The antenna elements are fed by 50-Ohm microstrip lines and provide a polarization and radiation pattern diversity function due to the orthogonal placement of their feed lines. A good impedance bandwidth (S11 ≤ -10 dB) of 3.4-3.8 GHz has been obtained for the smartphone antenna array; for S11 ≤ -6 dB, this value is 3.25-3.95 GHz. More than 3 dB realized gain and 80% total efficiency are achieved for the single-element radiator. The presented design not only provides the required radiation coverage but also generates the polarization diversity characteristic.
Keywords: cellular communications, multiple-input/multiple-output systems, mobile-phone antenna, polarization diversity
Procedia PDF Downloads 142
2374 'Naming, Blaming, Shaming': Sexual Assault Survivors' Perceptions of the Practice of Shaming
Authors: Anat Peleg, Hadar Dancig-Rosenberg
Abstract:
This interdisciplinary study, to our knowledge the first in this field, is located at the intersection of the victimology, law-and-society, and media literatures, and it corresponds both with feminist writing and with the cyber literature which explores the techno-social sphere. It depicts the multifaceted dimensions of shaming in the eyes of survivors through the following research questions: What are the motivations of sexual-assault survivors to publicize the assailant's identity or to refrain from this practice? Is shaming on Facebook perceived by sexual-assault victims as a substitute for the criminal justice system (CJS) or as a new form of social activism? What positive and negative consequences do survivors experience as a result of shaming their assailants online? The study draws on in-depth semi-structured interviews which we conducted between 2016 and 2018 with 20 sexual-assault survivors who exposed themselves on Facebook. They were sexually attacked in various forms: six participants reported that they had been raped when they were minors; eight women reported that they had been raped as adults; three reported that they had been victims of an indecent act; and three reported that they had been harassed either in their workplace or in the public sphere. Most of our interviewees (12) reported to the police and were involved in criminal procedures. More than half of the survivors (11) disclosed the identity of their attackers online. The vocabularies of motives that emerged from the thematic analysis of the interviews consist of both social and personal motivations for using the practice of shaming online. Some survivors maintain that the use of shaming derives from the decline in public trust in the criminal justice system: it reflects a demand for accountability and justice, and it also serves as a practice of warning other potential victims about the assailants. Other survivors assert that shaming people in a position of privilege is meant to fulfill the public's right to know who these privileged men really are. However, these moral and practical justifications of the practice of shaming are often mitigated by fear of the attackers' physical or legal actions in response to the allegations. Some interviewees who are feminist activists argue that the practice of shaming perpetuates the ancient social tendency to define women by labels linking them to the men who attacked them, instead of by their own life complexities. The variety of motivations to adopt or resent the practice of shaming presented in our study appears to refute the prevailing intuitive stereotype that shaming is an irrational act of revenge, and denotes its rationality. The role of social media as an arena for seeking informal justice raises questions about the new power relations created between victims, assailants, the community and the State, outside the formal criminal justice system. At the same time, the survivors' narratives also uncover the risks and pitfalls embedded within the online sphere for sexual-assault survivors.
Keywords: criminal justice, gender, Facebook, sexual-assaults
Procedia PDF Downloads 112
2373 Particle Dust Layer Density and the Optical Wavelength Absorption Relationship in Photovoltaic Module
Authors: M. Mesrouk, A. Hadj Arab
Abstract:
This work highlights the effect of dust on the absorption of the optical spectrum in a photovoltaic module; the effect of the presence of dust particles on photovoltaic modules has been studied at a microscopic scale with COMSOL Multiphysics simulations. In this paper, we have modeled the dust layer as a repetitive optical structure, a diffraction grating, characterized by the particle spacing d, and simulated the structure (air - dust particle - glass). The simulation shows a relationship between the wavelength and the particle spacing: the maximum transmitted wavelength, λ0 = 400 nm, corresponds to the spacing between the dust particles, d = 400 nm. Indeed, we observe that as the dust layer density increases, the transmitted wavelength decreases; there is thus a relationship between the density and the wavelength that can be absorbed in a dusty photovoltaic panel.
Keywords: dust effect, photovoltaic module, spectral absorption, wavelength transmission
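For reference (the abstract itself does not state the formula), the standard transmission-grating relation is consistent with the reported λ0 = d: at normal incidence, the m-th diffracted order propagates only while mλ ≤ d, so the first-order cutoff wavelength equals the particle spacing.

```latex
% Grating relation at normal incidence: the m-th transmitted order
% propagates only if |m|\lambda \le d, hence \lambda_{\max} = d for m = 1.
d \sin\theta_m = m \lambda , \qquad m = 0, \pm 1, \pm 2, \ldots
```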
Procedia PDF Downloads 463
2372 Experimental and Theoretical Study of the Electric and Magnetic Fields Behavior in the Vicinity of High-Voltage Power Lines
Authors: Tourab Wafa, Nemamcha Mohamed, Babouri Abdessalem
Abstract:
This paper presents an experimental and analytical characterization of the electromagnetic environment in the medium surrounding a circuit of two 220 kV power lines running in parallel. The analysis presented in this paper is divided into two main parts. The first part concerns the experimental study of the behavior of the electric and magnetic fields generated by the selected double circuit at ground level (0 m). The second part simulates and calculates the field profiles generated by both lines at different heights above the ground, from 0 m up to the level close to the line conductors (20 m above the ground), using the electrostatic and magnetostatic modules of the COMSOL Multiphysics software. The implications of the results are discussed and compared with the ICNIRP reference levels for occupational and non-occupational exposures.
Keywords: HV power lines, low frequency electromagnetic fields, electromagnetic compatibility, inductive and capacitive coupling, standards
Procedia PDF Downloads 474
2371 A Fluorescent Polymeric Boron Sensor
Authors: Soner Cubuk, Mirgul Kosif, M. Vezir Kahraman, Ece Kok Yetimoglu
Abstract:
Boron is an essential trace element for the completion of the life cycle of organisms. Suitable methods for the determination of boron have been proposed, including acid-base titrimetry, inductively coupled plasma emission spectroscopy, flame atomic absorption, and spectrophotometry. However, these methods have disadvantages such as long analysis times, the requirement of corrosive media such as concentrated sulphuric acid, multi-step sample preparation, and time-consuming procedures. In this study, a selective and reusable fluorescent sensor for boron based on glycosyloxyethyl methacrylate was prepared by photopolymerization. The response characteristics, such as response time, pH, linear range, and limit of detection, were systematically investigated. The excitation/emission maxima of the membrane were at 378/423 nm, respectively, and the approximate response time was measured as 50 s. In addition, the sensor had a very low limit of detection of 0.3 ppb. The sensor was successfully used for the determination of boron in water samples with satisfactory results.
Keywords: boron, fluorescence, photopolymerization, polymeric sensor
Procedia PDF Downloads 283
2370 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province
Authors: Athini Ntloko
Abstract:
Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). The MTBC has members that can infect an extensive range of hosts, including wildlife, and diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. bTB risk factors have consequently been a major subject of study worldwide, with some studies focusing on particular sets of risk factors such as wildlife and herd management. The significance of this study was to determine the incidence of the Mycobacterium tuberculosis complex associated with soil, hayfeed and water. Questionnaires were administered to thirty (30) smallholding farm owners in two villages (kwaMasele and Qungqwala) and three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm and Seven Star dairy farm). Detection of the M. tuberculosis complex was achieved by polymerase chain reaction (PCR) using primers for IS6110, whereas genotypic drug resistance mutations were detected using GenoType MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows in their herd. The relationship between farm size and vaccination for TB ranged from a high of forty-one percent (41%) to a low of five percent (5%). The highest number of respondents who knew about the relationship between TB cases and cattle location was ninety-one percent (91%). Approximately fifty-one percent (51%) of respondents had knowledge about wildlife access to the farms. The relationship between the import of cattle and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size ranged from a high of forty-three percent (43%) to a low of three percent (3%), while thirty-three percent (33%) of respondents had knowledge about health management. Respondents with knowledge about the occurrence of TB infections on farms amounted to forty-eight percent (48%). The frequency of DNA isolation from the samples ranged from a high of forty-five percent (45%), from water, to a low of twenty-two percent (22%), from soil. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water, whereas Middledrift dairy farm had the lowest number of positives from water, seventeen percent (17%). Twelve (22%) out of 55 isolates showed resistance to both INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. Mutations in the rpoB gene ranged from a high of 58% to a low of 23%. Fifty-seven percent (57%) of samples showed an S315T1 mutation, while only 14% possessed an S531L mutation in the katG gene. The most frequent inhA mutation was detected at T8A (80%), and the least frequent at A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers are a serious issue in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug-resistance determinants in MTBC from the environment.
Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water
Procedia PDF Downloads 337
2369 The Investigation of Relationship between Accounting Information and the Value of Companies
Authors: Golamhassan Ghahramani Aghdam, Pedram Bavili Tabrizi
Abstract:
The aim of this research is to investigate the relationship between accounting information and the value of the companies listed on the Tehran Exchange Market. The dependent variable in this research is company value, measured by price coefficients, and the independent variables are balance sheet information, profit and loss information, cash flow statement information, and profit quality characteristics. The profit quality characteristic indices are relevance and timeliness. This is applied research, and the research population includes all companies active on the Tehran Exchange Market. A sample of 194 companies was selected by systematic sampling for the 2018-2019 period. A multivariable linear regression model was used to test the hypotheses. The results show that there is no relationship between accounting information and company (stock) value, which may be due to the inefficiency of the investment market and the inability of investment market participants to use the accounting information.
Keywords: accounting information, company value, profit quality characteristics, price coefficient
Procedia PDF Downloads 139
2368 Multi-Scale Urban Spatial Evolution Analysis Based on Space Syntax: A Case Study in Modern Yangzhou, China
Authors: Dai Zhimei, Hua Chen
Abstract:
The exploration of urban spatial evolution is an important part of urban development research. The evolving urban spatial texture of modern Yangzhou was therefore taken as the research object, and space syntax was used as the main research tool; this paper explores the laws of Yangzhou's spatial evolution and their driving factors at the urban street network scale, the district scale, and the street scale. The study concludes that, at the urban scale, Yangzhou's urban spatial evolution is the result of a variety of causes, including physical and geographical conditions, policy and planning factors, and traffic conditions, and that the evolution of space in turn has an impact on social, economic, environmental and cultural factors. At the district and street scales, changes in space have a profound influence on the history of the city and the activities of its people. The paper closes by summarizing the matters needing attention during the evolution of urban space.
Keywords: block, space syntax and methodology, street, urban space, Yangzhou
Procedia PDF Downloads 181
2367 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC constituents; according to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is really limited in the literature. There are many reasons for this, starting with the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the interpretation of the measured optical data, and ending with many analytical and methodological difficulties regarding the in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most topical issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements made on filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling; in the former, accuracy, and in the latter, sensitivity limits the applicability of the approach. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the investigation of the inherent spectral features of laser-generated and chemically characterized residential coal aerosols is demonstrated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
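A short sketch of the Ångström-exponent calculation referred to above, fitting the log-log slope of the coefficients over the instrument's wavelengths; the wavelength set and coefficient values below are illustrative stand-ins, not measured data:

```python
import numpy as np

# Hypothetical four instrument wavelengths (nm) and made-up absorption /
# scattering coefficients in Mm^-1, for illustration only.
wl = np.array([266.0, 355.0, 532.0, 1064.0])
b_abs = np.array([48.0, 31.0, 18.0, 8.0])
b_sca = np.array([120.0, 95.0, 70.0, 40.0])

def angstrom_exponent(wavelengths, coeffs):
    # b(lambda) ~ lambda^(-alpha), so alpha is minus the log-log slope.
    slope, _ = np.polyfit(np.log(wavelengths), np.log(coeffs), 1)
    return -slope

print(f"AAE = {angstrom_exponent(wl, b_abs):.2f}")
print(f"SAE = {angstrom_exponent(wl, b_sca):.2f}")
```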
Procedia PDF Downloads 361