Search results for: SURF (Speeded-Up Robust Features)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5019


1749 Circular Labour Migration and Its Consequences in Georgia

Authors: Manana Lobzhanidze

Abstract:

Introduction: The paper argues that labor migration is the most important problem Georgia faces today. The structure of labor migration in Georgia by age and gender is analyzed, and the main driving factors of circular labor migration during the last ten years are identified. While studying migration, it is necessary to discuss the interconnection of economic, social, and demographic features, also taking into consideration state regulation policy in terms of education and professional training. Methodology: Different research methods are applied in the presented paper: statistical methods, such as selection, grouping, observation, and trend analysis, and qualitative research methods, namely analysis, synthesis, induction, deduction, and comparison. Main Findings: Labour migrants fill the labor market as low-salary workers. The main positive effect of migration from developing countries is poverty eradication, but this process is accompanied by problems such as 'brain drain': the country loses an important part of its intellectual potential, in which households or the state itself have invested. Conclusions: Labor migration is characterized as temporary, but the socio-economic problems of the country often push labor migration in the direction of long-term and illegal migration. Countries with developed economies try to tighten migration policy and fight illegal migration with different methods; circular migration helps solve this problem. Conclusions and recommendations are included about the consequences of circular labor migration in Georgia and its influence on the reduction of the unemployment level.

Keywords: migration, circular labor migration, labor migration employment, unemployment

Procedia PDF Downloads 157
1748 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
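As an illustration of the supply chain optimization component, the sketch below poses a toy transportation problem as a linear program in Python with SciPy; the farms, markets, capacities, and costs are invented for the example and are not taken from the study.

# Minimal transportation-problem sketch: ship produce from farms to markets
# at minimum cost while meeting demand. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

supply = np.array([120, 80])          # tonnes available at two farms
demand = np.array([60, 70, 70])       # tonnes required at three markets
cost = np.array([[4.0, 6.0, 9.0],     # transport cost per tonne, farm x market
                 [5.0, 3.0, 7.0]])

n_farms, n_markets = cost.shape
c = cost.ravel()                      # decision variables x[i, j], flattened row-wise

# Each farm ships no more than its supply: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((n_farms, n_farms * n_markets))
for i in range(n_farms):
    A_ub[i, i * n_markets:(i + 1) * n_markets] = 1.0

# Each market receives exactly its demand: sum_i x[i, j] = demand[j]
A_eq = np.zeros((n_markets, n_farms * n_markets))
for j in range(n_markets):
    A_eq[j, j::n_markets] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
print(res.x.reshape(n_farms, n_markets), res.fun)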

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 45
1747 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of excitation pulses may lead to computation errors in the control-loop controller, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solutions for such simulations, but they pose limitations such as the lack of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence obtain the added advantages of a GPOS with real-time features. Techniques are discussed for running the time-critical application at the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
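As a minimal illustration of one of the techniques discussed, running the time-critical application at the highest priority on a stock Linux kernel, the Python sketch below switches the process to the SCHED_FIFO class and pins it to a core; the priority value, the core number, and the loop period are placeholders, and memory locking, CPU isolation, and the actual control law are assumed to be handled elsewhere.

# Sketch: pin a periodic control task to the SCHED_FIFO real-time class on Linux.
# Requires root or CAP_SYS_NICE; all values below are illustrative only.
import os
import time

PERIOD_S = 0.001   # 1 kHz control-loop period (placeholder)

def make_realtime(priority=80):
    """Switch the calling process to SCHED_FIFO with the given static priority."""
    param = os.sched_param(priority)
    os.sched_setscheduler(0, os.SCHED_FIFO, param)   # 0 = current process
    os.sched_setaffinity(0, {1})                     # pin to one (ideally isolated) CPU core

def control_loop(iterations=10_000):
    next_deadline = time.monotonic()
    for _ in range(iterations):
        # read sensors, compute the control law, excite actuators (omitted in this sketch)
        next_deadline += PERIOD_S
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)    # a real system would use clock_nanosleep with TIMER_ABSTIME
        # else: deadline miss -- log it and recover

if __name__ == "__main__":
    make_realtime()
    control_loop()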

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 133
1746 Transition from Linear to Circular Business Models with Service Design Methodology

Authors: Minna-Maari Harmaala, Hanna Harilainen

Abstract:

Estimates of the economic value of transitioning to circular economy models vary, but the transition has been estimated to represent $1 trillion worth of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet, most companies are not capturing these opportunities, and thus even abundant circular opportunities remain uncaptured even though they would seem inherently profitable. Service design in broad terms relates to developing an existing or a new service or service concept with emphasis and focus on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement in the process of designing better services with better resonance to customer needs. A business model is a depiction of how the company creates, delivers, and captures value, i.e., how it organizes its business. The process of business model development and adjustment or modification is also called business model innovation. Innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers will be to adopt the new circular business models. In our research, we use robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio, but also to demonstrate that a willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.

Keywords: business model innovation, circular economy, circular economy business models, service design

Procedia PDF Downloads 116
1745 An Improved Discrete Version of Teaching–Learning-Based Optimization for Supply Chain Network Design

Authors: Ehsan Yadegari

Abstract:

While there are several metaheuristics and exact approaches to solving the Supply Chain Network Design (SCND) problem, there still remains an unfilled gap in using the Teaching-Learning-Based Optimization (TLBO) algorithm. The algorithm has demonstrated desirable results on complicated combinatorial optimization problems. The present study introduces a Discrete Self-Study TLBO (DSS-TLBO) with priority-based solution representation that can solve a supply chain network configuration model to lower the total expenses of establishing facilities and the flow of materials. The network features four layers, namely suppliers, plants, distribution centers (DCs), and customer zones. It is designed to meet customer demand by transporting material between the layers of the network and locating facilities in the most economically advantageous locations. To improve solution quality and increase the speed of TLBO, a distinct operator was introduced that ensures self-adaptation (self-study) in the algorithm based on four types of local search. In addition, since TLBO normally uses a continuous solution representation while the priority-based representation is discrete, a few modifications were added to the algorithm to remove infeasible solutions. As shown by the results of the experiments, the superiority of DSS-TLBO compared to pure TLBO, the genetic algorithm (GA) and the Firefly Algorithm (FA) was established.
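Since the abstract does not reproduce the discrete priority-based encoding or the four self-study local searches, the Python sketch below only illustrates the underlying TLBO mechanics (teacher phase and learner phase) on a generic continuous test function; all parameter values are illustrative.

# Minimal continuous TLBO sketch (teacher phase + learner phase) on a toy cost
# function; the paper's discrete priority-based encoding and self-study local
# searches are not reproduced here.
import numpy as np

def tlbo(cost, dim=10, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.apply_along_axis(cost, 1, X)
    for _ in range(iters):
        # Teacher phase: move the class towards the best learner, away from the mean.
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                      # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
        f_new = np.apply_along_axis(cost, 1, X_new)
        better = f_new < f
        X[better], f[better] = X_new[better], f_new[better]
        # Learner phase: each learner interacts with a randomly chosen peer.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = cost(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
    return X[f.argmin()], f.min()

best_x, best_f = tlbo(lambda x: np.sum(x ** 2))      # sphere function as a stand-in objective
print(best_f)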

Keywords: supply chain network design, teaching–learning-based optimization, improved metaheuristics, discrete solution representation

Procedia PDF Downloads 34
1744 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for answering these doubts and are becoming more and more important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for the support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered or only studied superficially. Anyhow, the effect of parameter uncertainties cannot be neglected. Based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow a first rough view of the results but do not take effects such as error propagation into account. Thereby, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. Hereby, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results used for resilient and robust decisions.
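A minimal sketch of the Monte Carlo part of such an analysis is given below in Python: parameter uncertainties are sampled and propagated through a deliberately simplified lifecycle model of an insulated exterior wall. All distributions and coefficients are invented for illustration and are not the study's data.

# Sketch: Monte Carlo propagation of parameter uncertainties through a toy
# lifecycle model of an insulated exterior wall. All distributions are made up.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Uncertain inputs (illustrative): insulation thickness, conductivity, heating demand
thickness = rng.normal(0.20, 0.02, N)                 # m
conductivity = rng.normal(0.035, 0.004, N)            # W/(m K)
heating_kwh_per_uvalue = rng.normal(900.0, 90.0, N)   # kWh per (W/m^2 K) per m^2 over the service life
gwp_per_kwh = rng.triangular(0.2, 0.4, 0.6, N)        # kg CO2e per kWh of heat
gwp_insulation = rng.normal(80.0, 15.0, N)            # kg CO2e per m^3 of insulation

u_value = conductivity / thickness                    # simplified, ignoring other wall layers
use_phase = u_value * heating_kwh_per_uvalue * gwp_per_kwh
production = thickness * gwp_insulation
total_gwp = use_phase + production                    # kg CO2e per m^2 of wall

print("mean   :", total_gwp.mean())
print("5-95 % :", np.percentile(total_gwp, [5, 95]))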

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 274
1743 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication of practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
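The PSGPD/SGPD construction itself is not reproduced here, but the following Python sketch illustrates the basic coded-computation idea the abstract builds on: a single-parity (MDS-style) code over row blocks of X lets the master recover W = XY from any two of three workers, tolerating one straggler.

# Minimal sketch of coded distributed matrix multiplication with one straggler:
# X is split into two row blocks plus a parity block; any 2 of the 3 workers'
# results recover W = XY. This illustrates the coding idea only, not the
# privacy-preserving PSGPD construction described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Y = rng.standard_normal((4, 5))

X1, X2 = X[:3], X[3:]
tasks = {0: X1, 1: X2, 2: X1 + X2}            # worker 2 holds the parity block

def worker(block):                            # each worker multiplies its block by Y
    return block @ Y

finished = {0: worker(tasks[0]), 2: worker(tasks[2])}   # worker 1 is a straggler

# Decode from any two results: here recover X2 @ Y = (X1 + X2) @ Y - X1 @ Y
W_top = finished[0]
W_bottom = finished[2] - finished[0]
W = np.vstack([W_top, W_bottom])
print(np.allclose(W, X @ Y))                  # True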

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 106
1742 Tunable Control of Therapeutics Release from the Nanochannel Delivery System (nDS)

Authors: Thomas Geninatti, Bruno Giacomo, Alessandro Grattoni

Abstract:

Nanofluidic devices have been investigated for over a decade as promising platforms for the controlled release of therapeutics. The nanochannel drug delivery system (nDS), a membrane fabricated with high-precision silicon techniques and capable of zero-order release of drugs by exploiting diffusion transport at the nanoscale originating from the interactions between molecules and nanochannel surfaces, has shown the flexibility of sustained release in vitro and in vivo over periods of time ranging from weeks to months. To improve this implantable bio-nanotechnology and create a system that possesses the key features for achieving suitable release of therapeutics, the next generation of the nDS has been created. Platinum electrodes are integrated by e-beam deposition onto both surfaces of the membrane, allowing low-voltage (<2 V), active temporal control of drug release through modulation of electrostatic potentials at the inlet and outlet of the membrane’s fluidic channels. Hence, tunable administration of drugs is ensured by the nanochannel drug delivery system. The membrane will be incorporated into a PEEK implantable capsule, which will include a drug reservoir, control hardware and an RF system to allow suitable therapeutic regimens in real time. Therefore, this new nanotechnology offers tremendous potential solutions to manage chronic diseases such as cancer, heart disease, circadian dysfunction, pain and stress.

Keywords: nanochannel membrane, drug delivery, tunable release, personalized administration, nanoscale transport, biomems

Procedia PDF Downloads 299
1741 Endometrial Thickness Cut-Off for Evacuation of Retained Product of Conception

Authors: Nambiar Ritu, Ali Ban, Munawar Farida, Israell Imelda, T. Farouk Eman Rasheeda, Jangalgi Renuka, S. Boma Nellie

Abstract:

Aim: To define the ultrasonographic endometrial thickness (USG ET) cutoff for evacuation of retained products of conception (ERPC). Background: Studies of conservative management of first trimester miscarriage have questioned the need for post-miscarriage curettage. Therapeutic decision-making based on transvaginal-scan post-miscarriage endometrial thickness in patients clinically thought to have incomplete miscarriage is often not clear. Method: A retrospective analysis of all first trimester ERPC at Al Rahba Hospital from June 2012 to July 2013 was done. A total of 164 patients underwent ERPC. All cases were reviewed for pre-operative USG ET and post-ERPC histopathological examination. TVS was done to evaluate the maximum ET of the uterine cavity along the long axis of the uterus, and features of retained products were noted. All cases without pre-operative USG ET measurement were excluded from the study; therefore, only 62 out of 164 cases were included in the study. The patients were divided into three groups: Group A, with retained products within the endometrial cavity; Group B, with endometrial thickness equal to or more than 20 mm; and Group C, with endometrial thickness equal to or less than 19.9 mm. The post-ERPC product was sent for HPE, and the results were compared. Transvaginal sonographic findings can be used as a deciding factor in the management of patients with first trimester miscarriage who need ERPC. Our proposed cutoff in clinically stable patients requiring ERPC is more than 20 mm.

Keywords: ERPC, histopathological examination, long axis of the uterus, USG ET

Procedia PDF Downloads 201
1740 Morphological Characterization and Gas Permeation of Commercially Available Alumina Membrane

Authors: Ifeyinwa Orakwe, Ngozi Nwogu, Edward Gobina

Abstract:

This work presents experimental results relating to the structural characterization of a commercially available alumina membrane. A γ-alumina mesoporous tubular membrane has been used. Nitrogen adsorption-desorption, scanning electron microscopy and gas permeability tests have been carried out on the alumina membrane to characterize its structural features. Scanning electron microscopy (SEM) was used to determine the pore size distribution of the membrane. Pore size, specific surface area and pore size distribution were also determined with the nitrogen adsorption-desorption instrument. Gas permeation tests were carried out on the membrane using a variety of single and mixed gases. Permeabilities at pressures between 0.05 and 1 bar and temperatures in the range of 25-200 °C were measured for the single and mixed gases: nitrogen (N₂), helium (He), oxygen (O₂), carbon dioxide (CO₂), 14%CO₂/N₂, 60%CO₂/N₂, 30%CO₂/CH₄ and 21%O₂/N₂. Plots of flow rate versus pressure were obtained. The results show the effect of temperature on the permeation rate of the various gases. At 0.5 bar, for example, the flow rate for N₂ was relatively constant before decreasing with an increase in temperature, while for O₂ it continuously decreased with an increase in temperature. In the case of 30%CO₂/CH₄ and 14%CO₂/N₂, the flow rate showed an increase and then a decrease with increasing temperature. The effect of temperature on the membrane performance for the various gases is presented, and the influence of the transmembrane pressure drop is discussed in this paper.

Keywords: alumina membrane, Nitrogen adsorption-desorption, scanning electron microscopy, gas permeation, temperature

Procedia PDF Downloads 310
1739 Epigenetic Drugs for Major Depressive Disorder: A Critical Appraisal of Available Studies

Authors: Aniket Kumar, Jacob Peedicayil

Abstract:

Major depressive disorder (MDD) is a common and important psychiatric disorder. Several clinical features of MDD suggest an epigenetic basis for its pathogenesis. Since epigenetics (heritable changes in gene expression not involving changes in DNA sequence) may underlie the pathogenesis of MDD, epigenetic drugs such as DNA methyltransferase inhibitors (DNMTi) and histone deacetylase inhibitors (HDACi) may be useful for treating MDD. The available literature indexed in PubMed on preclinical drug trials of epigenetic drugs for the treatment of MDD was investigated. The search terms we used were ‘depression’ or ‘depressive’ and ‘HDACi’ or ‘DNMTi’. Among epigenetic drugs, it was found that there were 3 preclinical trials using HDACi and 3 using DNMTi for the treatment of MDD. All the trials were conducted on rodents (mice or rats). The animal models of depression that were used were the learned helplessness-induced animal model, the forced swim test, the open field test, and the tail suspension test. One study used a genetic rat model of depression (the Flinders Sensitive Line). The HDACi that were tested were sodium butyrate, compound 60 (Cpd-60), and valproic acid. The DNMTi that were tested were 5-azacytidine and decitabine. All three preclinical trials using HDACi showed an antidepressant effect in animal models of depression, and all three preclinical trials using DNMTi likewise showed an antidepressant effect. Thus, epigenetic drugs, namely HDACi and DNMTi, may prove to be useful in the treatment of MDD and merit further investigation for the treatment of this disorder.

Keywords: DNA methylation, drug discovery, epigenetics, major depressive disorder

Procedia PDF Downloads 173
1738 Traffic Prediction with Raw Data Utilization and Context Building

Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain has chiefly had two major focuses: (1) the accurate forecast of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses both issues in a two-phase framework. First, we utilize the raw trajectories to a greater extent by building a VLA table and applying data compression. We obtain the intra-trajectory features with graph-based encoding and the inter-trajectory ones with a grid-based model and the technique of back projection, which restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information that is around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution that combines the strength of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.
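The VLA table, graph-based encoding, and back projection of the paper are not specified in the abstract, so the Python sketch below only illustrates, with invented data and grid settings, the general idea of using raw trajectory points directly (via spatio-temporal grid aggregation) and of building prediction context from the same time-of-day in the training history.

# Sketch: direct use of raw trajectory points via a spatio-temporal grid, plus a
# simple "context" predictor that averages the same time slot over previous days.
# Synthetic (timestamp, lon, lat) records stand in for real GPS trajectories.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
records = np.column_stack([
    rng.integers(0, 7 * 24 * 3600, n),                 # one week of timestamps [s]
    rng.uniform(108.90, 109.00, n),                    # longitude
    rng.uniform(34.20, 34.30, n),                      # latitude
])

def to_grid(recs, cell=0.01, slot=900):
    """Aggregate raw points into counts per (15 min slot, 10 x 10 grid cell)."""
    t = (recs[:, 0] // slot).astype(int)
    gx = ((recs[:, 1] - 108.90) // cell).astype(int)
    gy = ((recs[:, 2] - 34.20) // cell).astype(int)
    grid = np.zeros((t.max() + 1, 10, 10))
    np.add.at(grid, (t, gx, gy), 1)
    return grid

grid = to_grid(records)
slots_per_day = 24 * 3600 // 900

def predict(slot_idx):
    """Context-based baseline: mean of the same slot on all previous days."""
    history = grid[slot_idx % slots_per_day::slots_per_day]
    history = history[: slot_idx // slots_per_day]     # keep only past days
    return history.mean(axis=0) if len(history) else np.zeros((10, 10))

print(predict(slots_per_day * 3 + 10).round(1))        # forecast for day 3, slot 10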

Keywords: traffic prediction, raw data utilization, context building, data reduction

Procedia PDF Downloads 111
1737 Neotectonic Features of the Fethiye-Burdur Fault Zone between Kozluca and Burdur, SW Anatolia, Turkey

Authors: Berkant Coşkuner, Rahmi Aksoy

Abstract:

The aim of this study is to present some preliminary stratigraphic and structural evidence for the Fethiye-Burdur fault zone between Kozluca and Burdur. The Fethiye-Burdur fault zone, the easternmost extension of the west Anatolian extensional province, extends from the Gulf of Fethiye northeastward through Burdur, a distance of about 300 km. The research area is located in the Burdur segment of the fault zone. Here, the fault zone includes several parallel to subparallel fault branches and en-echelon faults that lie within a linear belt as much as 20 km in width. The direction of movement in the fault zone has been oblique-slip in the left-lateral sense. The basement of the study area consists of the Triassic-Eocene Lycian Nappes, Eocene-Oligocene molasse sediments and lower Miocene marine rocks. The Burdur basin contains two basin infills. The older, deformed basin fill is composed of lacustrine sediments of upper Miocene-lower Pliocene age. The younger, undeformed basin fill comprises Plio-Quaternary alluvial fan and recent basin-floor deposits and unconformably overlies the older basin infill. The Burdur basin is bounded by NE-SW trending, left-lateral oblique-slip normal faults: the Karakent fault on the northwest and the Burdur fault on the southeast. These faults played a key role in the development of the Burdur basin as a pull-apart basin.

Keywords: Burdur basin, Fethiye-Burdur fault zone, left lateral oblique-slip fault, Western Anatolia

Procedia PDF Downloads 394
1736 Double Negative Differential Resistance Features in Series AlN/GaN Double-Barrier Resonant Tunneling Diodes Vertically Integrated by Plasma-Assisted Molecular Beam Epitaxy

Authors: Jiajia Yao, Guanlin Wu, Fang Liu, Junshuai Xue, Yue Hao

Abstract:

This study reports on the epitaxial growth of a GaN-based resonant tunneling diode (RTD) structure with stable and repeatable double negative differential resistance (NDR) characteristics at room temperature on a c-plane GaN-on-sapphire template using plasma-assisted molecular beam epitaxy (PA-MBE) technology. In this structure, two independent AlN/GaN RTDs are epitaxially connected in series in the vertical growth direction through a silicon-doped GaN layer. As the collector electrode bias voltage increases, the two RTDs each align the ground state energy level in the quantum well with the 2DEG energy level in the emitter accumulation well to achieve quantum resonant tunneling and then reach the negative differential resistance (NDR) region. The two NDR regions exhibit similar peak current densities and peak-to-valley current ratios, which are 230 kA/cm² and 249 kA/cm², and 1.33 and 1.38, respectively, for a device with a collector electrode mesa diameter of 1 µm. The consistency of the NDR is much higher than that obtained by interconnecting discrete on-chip RTD devices, owing to the smaller chip area, fewer interconnect parasitics, and lower process complexity. The methods and results presented in this paper show the promising prospects of GaN RTDs in the development of multi-valued logic digital circuits.

Keywords: MBE, AlN/GaN, RTDs, double NDR

Procedia PDF Downloads 46
1735 Flight School Perceptions of Electric Planes for Training

Authors: Chelsea-Anne Edwards, Paul Parker

Abstract:

Flight school members are facing a major disruption in the technologies available for them to fly as electric planes enter the aviation industry. The year 2020 marked a new era in aviation with the first type certification of an electric plane. The Pipistrel Velis Electro is a two-seat electric aircraft (e-plane) designed for flight training. Electric flight training has the potential to greatly reduce the emissions, noise, and cost of pilot training. Though these are all attractive features, an understanding must be developed of the perceptions of the essential actor in the technology: the pilot. This study asks student pilots, flight instructors, flight center managers, and other members of flight schools about their perceptions of e-planes. The questions were divided into three categories: safety and trust of the technology, expected costs in comparison to conventional planes, and interest in the technology, including their desire to fly electric planes. Participants were recruited from flight schools using a protocol approved by the Office of Research Ethics. None of these flight schools has an e-plane in its fleet, so these views are based on perceptions rather than direct experience. The results revealed perceptions that were strongly positive, with many qualitative comments indicating great excitement about the potential of the new electric aviation technology. Some concerns were raised regarding battery endurance limits. Overall, the flight school community is clearly in favor of introducing electric propulsion technology and reducing the environmental impacts of their industry.

Keywords: electric planes, flight training, green aircraft, student pilots, sustainable aviation

Procedia PDF Downloads 151
1734 Design-Analysis and Optimization of 10 MW Permanent Magnet Surface Mounted Off-Shore Wind Generator

Authors: Mamidi Ramakrishna Rao, Jagdish Mamidi

Abstract:

With advancing technology, the market environment for wind power generation systems has become highly competitive. The industry has been moving towards higher wind generator power ratings, in particular, off-shore generator ratings. Current off-shore wind turbine generators are in the power range of 10 to 12 MW. Unlike traditional induction motors, slow-speed permanent magnet surface mounted (PMSM) high-power generators are relatively challenging and designed differently. In this paper, PMSM generator design features have been discussed and analysed. The focus is on the armature windings, harmonics, and permanent magnets. For the power ratings under consideration, the generator air-gap diameters are in the range of 8 to 10 meters, and the active materials weigh ~60 tons and above. Therefore, material weight becomes one of the critical parameters. The Particle Swarm Optimization (PSO) technique is used for weight reduction and performance improvement. Four independent variables have been considered: air-gap diameter, stack length, magnet thickness, and winding current density. To account for core and tooth saturation, prevent demagnetization effects due to short-circuit armature currents, and maintain minimum efficiency, suitable penalty functions have been applied. To check for performance satisfaction, a detailed analysis and 2D flux plotting are done for the optimized design.
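A minimal PSO sketch over the four design variables named above is given below in Python; the mass and torque surrogates, bounds, constraint limits, and penalty weights are placeholders standing in for the paper's electromagnetic model.

# Sketch: particle swarm optimization over the four design variables named in the
# abstract, with a placeholder mass objective and penalty terms. The real
# electromagnetic/thermal generator models are not reproduced here.
import numpy as np

# variable bounds (illustrative): air-gap diameter [m], stack length [m],
# magnet thickness [mm], winding current density [A/mm^2]
lo = np.array([8.0, 1.0, 20.0, 2.0])
hi = np.array([10.0, 2.5, 60.0, 5.0])

def objective(x):
    d, l, h_m, j = x
    mass = 25.0 * d * l + 0.8 * h_m * d          # fake active-mass surrogate [t]
    torque = 4.0 * d**2 * l * j * (h_m / 40.0)   # fake performance surrogate
    penalty = max(0.0, 1200.0 - torque) ** 2     # penalize insufficient torque
    penalty += max(0.0, j - 4.5) ** 2 * 1e3      # penalize demagnetization-risk proxy
    return mass + penalty

rng = np.random.default_rng(3)
n, iters, w, c1, c2 = 40, 300, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n, 4))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, 4)), rng.random((n, 4))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print(gbest, pbest_f.min())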

Keywords: offshore wind generator, PMSM, PSO optimization, design optimization

Procedia PDF Downloads 135
1733 A Method for Reconfigurable Manufacturing Systems Customization Measurement

Authors: Jesus Kombaya, Nadia Hamani, Lyes Kermad

Abstract:

The preservation of a company’s place on the market in such aggressive competition is becoming a survival challenge for manufacturers. In this context, the survivors are only those who succeed in satisfying their customers’ needs as quickly as possible. The production system should be endowed with a certain level of flexibility to eliminate or reduce its rigidity, in order to facilitate the conversion and/or change of the system’s features to produce different products. Therefore, it is essential to guarantee quality, speed and flexibility to survive in this competition. According to the literature, this adaptability is referred to as the notion of "change". Indeed, companies are trying to establish more flexible and agile manufacturing systems through several reconfiguration actions. Reconfiguration contributes to the extension of the manufacturing system life cycle by modifying its physical, organizational and computer characteristics according to changing market conditions. Reconfigurability is characterized by six key elements: modularity, integrability, diagnosability, convertibility, scalability and customization. To control production systems, it is essential for manufacturers to make good use of this capability so as to be sure that the system has an optimal and adapted level of reconfigurability, allowing it to produce in accordance with the set requirements. This document develops a measure of customization of reconfigurable production systems. These measures impact not only the production system but also the product design and the process design, and can therefore serve as a guide for the customization of manufactured products. A case study is presented to show the use of the proposed approach.

Keywords: reconfigurable manufacturing systems, customization, measure, flexibility

Procedia PDF Downloads 109
1732 In Life: Space as Doppelganger in “The House of Usher”

Authors: Tuğçe Arslan

Abstract:

In the dark and gloomy times of the Middle Ages, high, majestic, and frightening structures emerged in the architectural field. Thus, gothic architecture began to find a place for itself in different areas and spread its influence. Gothic has found its place in almost every literary genre and manages to show itself as the dominant genre in the works it enters. It has drawn on many concepts, as if from a chest full of dark feelings, and creates a gloomy, scary, frightening, and pessimistic mood in the story with these concepts. One of the essential concepts it uses while creating these feelings is the concept of the “Doppelganger.” With this concept, authors make sense of the uncanny; at this point, they allow spaces to act like characters, just like the uncanny feeling Edgar Allan Poe creates in his story “The Fall of the House of Usher.” In this story by Edgar Allan Poe, attention should be paid to the symbolic link between the two, as “House of Usher” refers to both the family and the building. And indeed, it is possible to see this minor rift as representative of a breakdown in family unity, specifically between Madeline and Roderick. Although the house is not alive, it has some supernatural features that make it look like a living, breathing being. Therefore, the remainder of this paper will argue that, apart from the apparent twins, the house should also qualify as a Doppelganger in the story. This study will first explore the physical and mental disorders of the twins and their journey to complement each other; next, in an attempt to demonstrate how the house, as a non-living entity, needs to be considered a Doppelganger of the twins, a close reading of the house depictions will be carried out.

Keywords: Edgar Allan Poe, doppelganger, uncanny, gothic, space, home

Procedia PDF Downloads 102
1731 A Palmprint Identification System Based on Multi-Layer Perceptron

Authors: David P. Tantua, Abdulkader Helwan

Abstract:

Biometrics has recently been used for human identification systems using biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such human identification applications. However, these types of systems are so far based on some image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and removal of unwanted components of the image. The second phase is to feed those processed images into a neural network classifier which will adaptively learn and create a class for each different image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images. Therefore, the same 100 images are used for testing it, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications.
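A compact sketch of the described pipeline, median filtering, skeletonization, Canny edge detection, and a multi-layer perceptron classifier, is given below in Python using scikit-image and scikit-learn; random arrays stand in for the CASIA palm images, and the layer sizes and training settings are illustrative rather than those of the paper.

# Sketch of the described pipeline: median filter -> skeletonize -> Canny edges
# -> multi-layer perceptron. Random arrays stand in for the CASIA palm images;
# layer sizes and training settings are illustrative only.
import numpy as np
from skimage.filters import median
from skimage.feature import canny
from skimage.morphology import skeletonize
from sklearn.neural_network import MLPClassifier

def extract_features(img):
    """img: 2-D grayscale array in [0, 1]."""
    filtered = median(img)                       # remove salt-and-pepper noise
    binary = filtered > filtered.mean()          # simple adjustment/thresholding
    skeleton = skeletonize(binary)               # thin the palm-line pattern
    edges = canny(filtered)                      # edge map as additional features
    return np.concatenate([skeleton.ravel(), edges.ravel()]).astype(float)

rng = np.random.default_rng(0)
n_subjects, size = 10, 32
images = rng.random((n_subjects, size, size))    # placeholder palmprints
X = np.array([extract_features(im) for im in images])
y = np.arange(n_subjects)                        # one identity class per image

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X, y)                                    # identification: test set equals training set
print((clf.predict(X) == y).mean())              # expected to approach 1.0 on training data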

Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator

Procedia PDF Downloads 358
1730 Location3: A Location Scouting Platform for the Support of Film and Multimedia Industries

Authors: Dimitrios Tzilopoulos, Panagiotis Symeonidis, Michael Loufakis, Dimosthenis Ioannidis, Dimitrios Tzovaras

Abstract:

The domestic film industry in Greece has traditionally relied heavily on state support. While film productions are crucial for the country's economy, it has not fully capitalized on attracting and promoting foreign productions. The lack of motivation, of organized state support for attraction and licensing, and of location scouting has hindered its potential. Although recent legislative changes have addressed the first two of these issues, the development of a comprehensive location database and a search engine that would effectively support location scouting at the pre-production stage is still in its early stages. In addition to the expected benefits for the film, television, marketing, and multimedia industries, a location-scouting service platform has the potential to yield significant financial gains locally and nationally. By promoting featured places like cultural and archaeological sites, natural monuments, and attraction points for visitors, it plays a vital role in both cultural promotion and tourism development. This study introduces LOCATION3, an internet platform revolutionizing film production location management. It interconnects location providers, film crews, and multimedia stakeholders, offering a comprehensive environment for seamless collaboration. The platform's central geodatabase (PostgreSQL) stores each location’s attributes, while web technologies like HTML, JavaScript, CSS, React.js, and Redux power the user-friendly interface. Advanced functionalities utilizing deep learning models, developed in Python, are integrated via Node.js. Visual data presentation is achieved using the Leaflet JS library, delivering an interactive map experience. LOCATION3 sets a new standard, offering a range of essential features to enhance the management of film production locations. Firstly, it empowers users to effortlessly upload audiovisual material enriched with geospatial and temporal data, such as location coordinates, photographs, videos, 360-degree panoramas, and 3D location models. With the help of cutting-edge deep learning algorithms, the application automatically tags these materials, while users can also tag them manually. Moreover, the application allows users to record locations directly through its user-friendly mobile application. Users can then embark on seamless location searches, employing spatial or descriptive criteria. This intelligent search functionality considers a combination of relevant tags, dominant colors, architectural characteristics, emotional associations, and unique location traits. One of the application's standout features is the ability to explore locations by their visual similarity to other materials, facilitated by a reverse image search. Also, the interactive map serves as both a dynamic display for locations and a versatile filter, adapting to the user's preferences and effortlessly enhancing location searches. To further streamline the process, the application facilitates the creation of location lightboxes, enabling users to efficiently organize and share their content via email. Going above and beyond location management, the platform also provides invaluable liaison, matchmaking, and online marketplace services. This powerful functionality bridges the gap between visual and three-dimensional geospatial material providers, local agencies, film companies, production companies, etc., so that those interested in a specific location can access additional material beyond what is stored on the platform, as well as production services supporting the functioning and completion of productions in a location (equipment provision, transportation, catering, accommodation, etc.).
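As a rough illustration of the reverse-image-search feature, the Python sketch below ranks stored locations by cosine similarity between embeddings; the embed() function and the location names are stand-ins, since the platform's actual deep learning models and Node.js integration are not detailed in the abstract.

# Sketch of a reverse-image-search step: locations are ranked by cosine
# similarity between a query image embedding and stored location embeddings.
# embed() is a placeholder for the platform's deep learning feature extractor.
import numpy as np

def embed(image):
    """Placeholder embedding: a real system would use a CNN feature extractor."""
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.standard_normal(512)

locations = ["quarry_olympus.jpg", "old_harbour.jpg", "mountain_monastery.jpg"]  # hypothetical
index = {name: embed(name) for name in locations}          # computed once at upload time

def search(query_image, top_k=2):
    q = embed(query_image)
    scores = {name: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for name, v in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(search("old_harbour.jpg"))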

Keywords: deep learning models, film industry, geospatial data management, location scouting

Procedia PDF Downloads 56
1729 The Interoperability between CNC Machine Tools and Robot Handling Systems Based on an Object-Oriented Framework

Authors: Pouyan Jahanbin, Mahmoud Houshmand, Omid Fatahi Valilai

Abstract:

A flexible manufacturing system (FMS) is a manufacturing system capable of handling variations in product features that result from ever-changing customer demands. The flexibility of such manufacturing systems helps to utilize resources more effectively. However, the control of such systems is complicated and challenging. An FMS needs CNC machines, robots and other resources for establishing flexibility and enhancing the efficiency of the whole system. It also needs to integrate these resources to reach the required efficiency and flexibility. In order to reach this goal, an integrator framework is proposed in which the machining data of CNC machine tools is received through a STEP-NC file. The interoperability of the system is achieved by the information system. This paper proposes an information system whose data model is designed based on an object-oriented approach and implemented through a knowledge-based system. The framework is connected to a database which is filled with the robot’s control commands. The framework programs the robots using rules embedded in its knowledge-based system. It also controls the interactions of the CNC machine tools for loading and unloading actions by the robot. As a result, the proposed framework improves the integration of manufacturing resources in flexible manufacturing systems.
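The following Python sketch illustrates, with invented class and attribute names, the kind of object-oriented integration the abstract describes: machining data (as would be parsed from a STEP-NC file) drives rule-based robot load and unload commands around the CNC machining program.

# Illustrative object-oriented sketch of the integrator idea: machining data
# (as would be read from a STEP-NC file) drives rule-based robot load/unload
# commands. Class and attribute names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Workingstep:
    name: str
    tool: str
    duration_s: float

@dataclass
class CncMachine:
    machine_id: str
    queue: list = field(default_factory=list)    # Workingsteps parsed from STEP-NC

@dataclass
class Robot:
    robot_id: str
    def load(self, machine: CncMachine, part: str):
        print(f"{self.robot_id}: load {part} onto {machine.machine_id}")
    def unload(self, machine: CncMachine, part: str):
        print(f"{self.robot_id}: unload {part} from {machine.machine_id}")

class Integrator:
    """Knowledge-based coordinator: applies simple rules to sequence handling."""
    def __init__(self, machine: CncMachine, robot: Robot):
        self.machine, self.robot = machine, robot
    def run(self, part: str):
        self.robot.load(self.machine, part)
        for ws in self.machine.queue:            # machining program from STEP-NC
            print(f"{self.machine.machine_id}: execute {ws.name} with {ws.tool}")
        self.robot.unload(self.machine, part)

mill = CncMachine("CNC-01", [Workingstep("rough_pocket", "endmill_10mm", 42.0),
                             Workingstep("finish_contour", "endmill_6mm", 18.5)])
Integrator(mill, Robot("ROB-01")).run("housing_A17")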

Keywords: CNC machine tools, industrial robots, knowledge-based systems, manufacturing resources integration, flexible manufacturing system (FMS), object-oriented data model

Procedia PDF Downloads 440
1728 Exploring Legal Liabilities of Mining Companies for Human Rights Abuses: Case Study of Mongolian Mine

Authors: Azzaya Enkhjargal

Abstract:

Context: The mining industry has a long history of human rights abuses, including forced labor, environmental pollution, and displacement of communities. In recent years, there has been growing international pressure to hold mining companies accountable for these abuses. Research Aim: This study explores the legal liabilities of mining companies for human rights abuses. The study specifically examines the case of Erdenet Mining Corporation (EMC), a large mining company in Mongolia that has been accused of human rights abuses. Methodology: The study used a mixed-methods approach, which included a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Findings: The study found that mining companies can be held liable for human rights abuses under a variety of regulatory frameworks, including soft law and self-regulatory instruments in the mining industry, international law, national law, and corporate law. The study also found that there are a number of challenges to holding mining companies accountable for human rights abuses, including the lack of effective enforcement mechanisms and the difficulty of proving causation. Theoretical Importance: The study contributes to the growing body of literature on the legal liabilities of mining companies for human rights abuses. The study also provides insights into the challenges of holding mining companies accountable for human rights abuses. Data Collection: The data for the study was collected through a variety of methods, including a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Analysis Procedures: The data was analyzed using a variety of methods, including content analysis, thematic analysis, and case study analysis. Conclusion: The study concludes that mining companies can be held liable for human rights abuses under a variety of legal and regulatory frameworks. There are positive developments in ensuring greater accountability and protection of affected communities and the environment in countries with a strong economy. Regrettably, access to avenues of redress is reasonably low in less developed countries, where the governments have not implemented a robust mechanism to enforce liability requirements in the mining industry. The study recommends that governments and mining companies take more ambitious steps to enhance corporate accountability.

Keywords: human rights, human rights abuses, ESG, litigation, Erdenet Mining Corporation, corporate social responsibility, soft law, self-regulation, mining industry, parent company liability, sustainability, environment, UN

Procedia PDF Downloads 61
1727 Engaging Students with Special Education Needs through Technology-Enhanced Interactive Activities in Class

Authors: Pauli P.Y. Lai

Abstract:

Students with Special Education Needs (SEN) face many challenges in learning. These challenges include difficulty in handwriting, slow understanding and assimilation, difficulty in paying attention during class, and lack of communication skills. To engage students with Special Education Needs in class alongside general students, Blackboard Collaborate is used as a teaching and learning tool to deliver lectures with interactive activities. Blackboard Collaborate provides a good platform to create and enhance an active, collaborative and interactive learning experience whereby the SEN students can easily interact with their general peers and the instructor by using features such as drawing on a virtual whiteboard, file sharing, classroom chat, breakout rooms, hand raising, polling, etc. By integrating a blended learning approach with Blackboard Collaborate, students with Special Education Needs can engage in interactive activities with ease in class. Our research aims at exploring and discovering the use of Blackboard Collaborate for inclusive education based on a qualitative design with in-depth interviews. Being served in a general education environment, three university students with different kinds of learning disabilities participated in our study. All participants agreed that the functions provided by Blackboard Collaborate have enhanced their learning experiences and helped them learn better. Their academic performance also showed that SEN students can perform well with the help of technology. This research studies different aspects of using Blackboard Collaborate to create an inclusive learning environment for SEN students.

Keywords: blackboard collaborate, enhanced learning experience, inclusive education, special education needs

Procedia PDF Downloads 118
1726 Geometrical Analysis of Tiling Patterns in Azari Style: The Case of Tabriz Kaboud Mosque

Authors: Seyyedeh Faezeh Miralami, Sahar Sayyadchapari, Mona Laleh, Zahra Poursafar

Abstract:

Tiling patterns are a magnificent display of decoration in the Islamic period. They transform dusty and dreary facades into splendid and ornate ones. Due to ideological factors and the elements of Azari style decoration, geometrical patterns and vegetative designs became prevalent and pervasive in religious sites like mosques. Objectives: The objective of this research is the study of tiling patterns in the Tabriz Kaboud Mosque, a splendid work of architecture in the Azari style. In this study, the geometrical designs and tiling patterns employed in the mosque decorations are examined and analyzed. Method: The research is based on a descriptive analysis method. Data and information are collected from documents, library sources and field study. The collected material was then refined, and the study resulted in an illustrative conclusion. Findings: In religious sites such as mosques, geometry represents ‘divination’ in Christian theology and ‘Unity with God’ or ‘Tawhid’ in Islamic terminology. In other words, science, literature, architecture, and all forms of human expression and representation are pointed towards one cause: unity or divination. The tiling patterns of the Kaboud Mosque, mostly hexagonal, circular, square and triangular, form outstanding architectonic features which recount a story, a narration of divination or unification with the One.

Keywords: tiling, Azari style, Tabriz Kaboud Mosque, Islamic architecture

Procedia PDF Downloads 303
1725 Comparison of Regime Transition between Ellipsoidal and Spherical Particle Assemblies in a Model Shear Cell

Authors: M. Hossain, H. P. Zhu, A. B. Yu

Abstract:

This paper presents a numerical investigation of the regime transition of flows of ellipsoidal particles and a comparison with that of a spherical particle assembly. Particle assemblies consisting of spherical particles and of ellipsoidal particles with an aspect ratio of 2.5:1 are examined at separate instances under similar flow conditions in a shear cell model that is numerically developed based on the discrete element method. Correlations among elastically scaled stress, kinetically scaled stress, coordination number and volume fraction are investigated and show important similarities and differences between the spherical and ellipsoidal particle assemblies. In particular, volume fractions at the points of regime transition are identified for both types of particles. It is found that, compared with the spherical particle assembly, the ellipsoidal particle assembly has a higher volume fraction at the quasistatic-to-intermediate regime transition and a lower volume fraction at the intermediate-to-inertial regime transition. Finally, the relationship between coordination number and volume fraction shows strikingly distinct features for the two cases, suggesting that, unlike for spherical particles, the effect of the shear rate on the coordination number is not significant for ellipsoidal particles. This work provides a glimpse of ongoing work in one of the most attractive areas of research in this field and has wide prospects for understanding the rheology of more complex-shaped particles in light of the strong basis of simpler spherical particle rheology.

Keywords: DEM, granular rheology, non-spherical particles, regime transition

Procedia PDF Downloads 254
1724 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, which leads to estimation errors in the covariance matrix that cause beam distortion, so that the output pattern cannot form a dot-shaped beam. It also suffers from main-lobe deviation and high sidelobe levels in the case of limited snapshots. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, improve the divergence of the small eigenvalues in the noise subspace, and improve the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
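The sketch below illustrates the three described steps in Python for a plain uniform linear array rather than the multi-carrier FDA steering model: MVDR/LCMV weights are formed from a limited-snapshot sample covariance, the covariance is eigendecomposed, and the small noise-subspace eigenvalues are exponentially corrected (here, their spread is compressed, one plausible reading of the correction) before re-forming the weights; the correction exponent and scenario parameters are illustrative choices, not the paper's values.

# Sketch: MVDR/LCMV weights from a limited-snapshot sample covariance,
# eigendecomposition, and exponential correction of the small noise-subspace
# eigenvalues. A plain uniform linear array replaces the multi-carrier FDA model.
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 16, 20                        # deliberately few snapshots

def steering(theta_deg):
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

a_sig, a_int = steering(0.0), steering(35.0)
X = (np.sqrt(10) * a_int[:, None] * (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots))
     + (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2))

R = X @ X.conj().T / snapshots               # poorly estimated sample covariance

# Eigendecompose and exponentially correct the small (noise-subspace) eigenvalues
w_eig, V = np.linalg.eigh(R)                 # ascending eigenvalues
n_interf = 1
noise_vals = w_eig[:-n_interf]
alpha = 0.5                                  # illustrative correction index
corrected = noise_vals.mean() * (noise_vals / noise_vals.mean()) ** alpha
R_corr = (V * np.concatenate([corrected, w_eig[-n_interf:]])) @ V.conj().T

def mvdr(Rm, a):                             # single-constraint LCMV (MVDR) weights
    w = np.linalg.solve(Rm, a)
    return w / (a.conj() @ w)

for name, Rm in [("plain", R), ("corrected", R_corr)]:
    w = mvdr(Rm, a_sig)
    print(name, "gain towards interferer [dB]:",
          (20 * np.log10(abs(w.conj() @ a_int) + 1e-12)).round(1))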

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 116
1723 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

City buildings collapse after an earthquake, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with the features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish the signal propagation model with the minimum test error in each case. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. The results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between the known nodes and the target node in the five scenes is about 3.06 m.
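A minimal Python sketch of the described chain, filtering the raw RSSI, converting it to distance with a log-distance path-loss model, and locating the target with a distance-weighted centroid of the known nodes, is given below; the model constants, anchor layout, and RSSI samples are invented for illustration.

# Sketch: estimate distances from filtered RSSI with a log-distance path-loss
# model, then locate the target with a distance-weighted centroid of the known
# anchor nodes. Model constants, anchors, and samples are illustrative.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])   # known nodes [m]
P0, n_exp, d0 = -45.0, 2.8, 1.0   # RSSI at d0 = 1 m and path-loss exponent (fitted per scene)

def filter_rssi(samples):
    """Stand-in for the mixed filter (average of mean, median and Gaussian filtering)."""
    return float(np.mean(samples))

def rssi_to_distance(rssi):
    return d0 * 10 ** ((P0 - rssi) / (10 * n_exp))

def weighted_centroid(anchor_xy, distances):
    w = 1.0 / np.asarray(distances)              # nearer anchors weigh more
    return (w[:, None] * anchor_xy).sum(axis=0) / w.sum()

# Example: raw RSSI samples (dBm) received from each anchor for one buried target
raw = [[-63, -61, -64], [-71, -70, -72], [-77, -78, -76], [-69, -70, -68]]
rssi = [filter_rssi(s) for s in raw]
dist = [rssi_to_distance(r) for r in rssi]
print("estimated position:", weighted_centroid(anchors, dist).round(2))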

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 284
1722 Energy Efficient Plant Design Approaches: Case Study of the Sample Building of the Energy Efficiency Training Facilities

Authors: Idil Kanter Otcu

Abstract:

Nowadays, due to the growing problems of energy supply and the drastic reduction of natural non-renewable resources, the development of new applications in the energy sector and steps towards greater efficiency in energy consumption are required. Since buildings account for a large share of energy consumption, increasing the structural density of buildings causes an increase in energy consumption. This increase in energy consumption means that energy efficiency approaches to building design and the integration of new systems using emerging technologies become necessary in order to curb this consumption. As new systems for the productive use of generated energy are developed, buildings that require less energy to operate, with rational use of resources, also need to be developed. One solution for reducing the energy requirements of buildings is landscape planning, design and application. Requirements such as heating, cooling and lighting can be met with lower energy consumption through planting design, which can help to achieve more efficient and rational use of resources. Within this context, rather than a planting design which considers only the ecological and aesthetic features of plants, these considerations should also extend to spatial organization, whereby the relationship between the site and open spaces in the context of climatic elements and planting design is taken into account. In this way, the planting design can serve an additional purpose. In this study, a landscape design which takes into consideration location, local climate morphology and solar angle is illustrated on a sample building project.

Keywords: energy efficiency, landscape design, plant design, xeriscape landscape

Procedia PDF Downloads 247
1721 Cell Line Screens Identify Biomarkers of Drug Sensitivity in GLIOMA Cancer

Authors: Noora Al Muftah, Reda Rawi, Richard Thompson, Halima Bensmail

Abstract:

Clinical responses to anticancer therapies are often restricted to a subset of patients. In some cases, mutated cancer genes are potent biomarkers of response to targeted agents. There is an urgent need to identify biomarkers that predict which patients are most likely to respond to treatment. Systematic efforts to correlate tumor mutational data with biologic dependencies may facilitate the translation of somatic mutation catalogs into meaningful biomarkers for patient stratification. To identify genomic features associated with drug sensitivity and uncover new biomarkers of sensitivity and resistance to cancer therapeutics, we have screened and integrated a panel of several hundred cancer cell lines from different databases, combining mutation, DNA copy number, and gene expression data for hundreds of cell lines with their responses to targeted and cytotoxic therapies using drugs under clinical and preclinical investigation. We found that mutated cancer genes were associated with cellular response to most currently available glioma drugs and that some frequently mutated genes were associated with sensitivity to a broad range of therapeutic agents. By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies.
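In line with the Lasso/penalized-regression keywords, the Python sketch below shows how a penalized regression could link binary mutation features to a drug-response value across cell lines; the data are synthetic and the analysis pipeline of the study is certainly richer than this.

# Sketch: Lasso (penalized regression) linking binary gene-mutation features to
# a drug-response value (e.g. log IC50) across cell lines. Data are synthetic;
# non-zero coefficients would be read as candidate sensitivity biomarkers.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n_lines, n_genes = 300, 200
X = rng.binomial(1, 0.1, (n_lines, n_genes)).astype(float)   # mutation matrix
true_effect = np.zeros(n_genes)
true_effect[[5, 42, 120]] = [-1.5, 2.0, -0.8]                # three "driver" genes
y = X @ true_effect + rng.normal(0, 0.5, n_lines)            # simulated drug response

model = LassoCV(cv=5, random_state=0).fit(X, y)
hits = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("candidate biomarker gene indices:", hits)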

Keywords: cancer, gene network, Lasso, penalized regression, P-values, unbiased estimator

Procedia PDF Downloads 393
1720 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions

Authors: T. Elrazaz

Abstract:

This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets’ earnings management and, in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach in estimating accrual earnings management, proxied by estimated abnormal accruals. Additionally, real earnings management is proxied for by employing widely used models from the accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value relevant. The findings of this study also indicate that takeover targets manipulate earnings upwards by cutting discretionary expenses in the year prior to the acquisition, while they do not do so by manipulating sales or production costs. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, these results are again restricted only to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as to controlling for the method of payment in the deal.
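The abstract proxies accrual earnings management by abnormal accruals from an aggregate accruals model; a typical cross-sectional estimation of that kind (a modified Jones-type regression, which may differ from the paper's exact specification) is sketched below in Python on synthetic data.

# Sketch of a typical aggregate-accruals estimation (modified Jones-type):
# regress scaled total accruals on 1/assets, scaled (dRev - dRec) and scaled PPE,
# then take residuals as abnormal (discretionary) accruals. Data are synthetic
# and the paper's exact specification may differ.
import numpy as np

rng = np.random.default_rng(11)
n = 400
assets_lag = rng.uniform(50, 5000, n)                 # lagged total assets
d_rev = rng.normal(0.05, 0.10, n) * assets_lag        # change in revenues
d_rec = rng.normal(0.01, 0.05, n) * assets_lag        # change in receivables
ppe = rng.uniform(0.2, 0.8, n) * assets_lag           # gross PPE
total_accruals = (-0.03 * assets_lag + 0.1 * (d_rev - d_rec) - 0.05 * ppe
                  + rng.normal(0, 0.02, n) * assets_lag)

# All variables scaled by lagged total assets
y = total_accruals / assets_lag
Xmat = np.column_stack([1.0 / assets_lag, (d_rev - d_rec) / assets_lag, ppe / assets_lag])

coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
abnormal_accruals = y - Xmat @ coef                   # discretionary component (residuals)
print("mean |abnormal accruals|:", np.abs(abnormal_accruals).mean().round(4))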

Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets

Procedia PDF Downloads 99