Search results for: evolutionary mismatch
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 492

192 Single Pole-To-Earth Fault Detection and Location on the Tehran Railway System Using ICA and PSO Trained Neural Network

Authors: Masoud Safarishaal

Abstract:

Detecting the location of pole-to-earth faults is essential for the safe operation of the electrical system of a railroad. This paper uses a combination of evolutionary algorithms and neural networks to increase the accuracy of single pole-to-earth fault detection and location on the Tehran railroad power supply system. The Imperialist Competitive Algorithm (ICA) and Particle Swarm Optimization (PSO) are used to train the neural network, improving the accuracy and convergence of the learning process. Due to the system's nonlinearity, fault detection is an ideal application for the proposed method; the 600 Hz harmonic ripple method is used in this paper for fault detection. The substations were simulated considering various feeding-circuit configurations, the transformer, and typical Tehran metro parameters, including the silicon rectifier. The data required for the network learning process were gathered from the simulation results. The value of the 600 Hz component changes with the location of a single pole-to-earth fault; therefore, the 600 Hz components are used as inputs of the neural network, while the fault location is its output. The simulation results show that the proposed methods can accurately predict the fault location.
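
As a rough illustration of the training scheme described above, the sketch below uses PSO to fit the weights of a small feed-forward network that maps a 600 Hz harmonic feature to a fault-location estimate. The network size, the synthetic data, and the PSO hyperparameters are illustrative assumptions, not the configuration used in the paper (which also employs ICA).

```python
# Minimal sketch: training a small regression MLP with particle swarm optimization
# (PSO) instead of backpropagation. All sizes, data, and hyperparameters are
# illustrative assumptions, not the configuration used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 600 Hz harmonic feature vs. fault distance (km).
X = rng.uniform(0.0, 1.0, (200, 1))              # normalized 600 Hz component
y = 25.0 * X[:, 0] + 2.0 * np.sin(6 * X[:, 0])   # hypothetical fault location

N_IN, N_HID, N_OUT = 1, 8, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total weight count

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel()

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Standard global-best PSO over the flattened weight vector.
n_particles, iters, w_in, c1, c2 = 40, 300, 0.72, 1.5, 1.5
pos = rng.normal(0, 1, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"final training MSE: {mse(gbest):.4f}")
```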

Keywords: single pole-to-earth fault, Tehran railway, ICA, PSO, artificial neural network

Procedia PDF Downloads 105
191 Urban Sustainable Development Based on Habitat Quality Evolution: A Case Study in Chongqing, China

Authors: Jing Ren, Kun Wu

Abstract:

Over the last decade or so, China's urbanization has shown a rapid development trend. At the same time, it has had a great negative impact on habitat quality. It is therefore of great significance for sustainable urban development to study the impact of land use change on habitat quality in mountain cities. This paper analyzed the spatial and temporal land use changes in Chongqing from 2010 to 2020 using ArcGIS 10.6, as well as the evolutionary trend of habitat quality during this period based on InVEST 3.13.0, to obtain the impact of land use changes on habitat quality. The results showed that habitat quality in the western part of Chongqing decreased significantly between 2010 and 2020, while the northeastern and southeastern parts remained stable. The main reason is the continuous expansion of urban construction land in the western area, which leads to serious habitat fragmentation and a continuous decline in habitat quality. In the northeast and southeast areas, by contrast, greater emphasis on ecological priority and urban-rural coordination in the development process meant that land use change was characterized by a benign transfer, sustaining the urbanization process while maintaining the coordinated development of habitat quality. This study can provide theoretical support for the sustainable development of mountain cities.

Keywords: mountain cities, ecological environment, habitat quality, sustainable development

Procedia PDF Downloads 66
190 A Bibliometric Analysis: An Integrative Systematic Review through the Paths of Vitiviniculture

Authors: Patricia Helena Dos Santos Martins, Mateus Atique, Lucas Oliveira Gomes Ferreira

Abstract:

There is a growing body of literature that recognizes the importance of bibliometric analysis for tracing the evolutionary nuances of a specific field while shedding light on its emerging areas. Surprisingly, its application in manufacturing research on vitiviniculture is relatively new and, in many instances, underdeveloped. The aim of this study is to present an overview of the bibliometric methodology, with a particular focus on the Meta-Analytical Approach Theory model (TEMAC), while offering step-by-step results on the available techniques and procedures for carrying out studies about the elements associated with vitiviniculture. TEMAC is a method that uses metadata to generate heat maps, graphs of keyword relationships, and other outputs, with the aim of revealing relationships between authors and articles and, above all, understanding how a topic has evolved over the study period: which subthemes were worked on and which techniques and applications were most used, thereby helping researchers understand the topic under study and guiding them in generating new research. From the studies carried out using TEMAC, it is possible to identify which statistical process control techniques are most used within the wine industry and thus assist professionals in the area in applying the best techniques. It is expected that this paper will be a useful resource for gaining insights into the available techniques and procedures for carrying out studies about vitiviniculture, the cultivation of vineyards, the production of wine, and all the ethnography connected with it.

Keywords: TEMAC, vitiviniculture, statistical control of processes, quality

Procedia PDF Downloads 101
189 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English

Authors: Duong Thuy Nguyen, Giulia Bencini

Abstract:

The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on the computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disrupting condition (word-by-word). However, the two groups demonstrated differences in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers. The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.

Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing

Procedia PDF Downloads 140
188 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET

Authors: Tyler T. Procko, Steve Collins

Abstract:

New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access now, by comparison, involve unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: simplicity, speed and security. Simplicity is engendered by cutting out the “middleman” steps, effectively making API data access a white box, whereas traditional methods are a black box. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.

Keywords: API data access, database, JSON, .NET core, SQL server

Procedia PDF Downloads 56
187 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out based on a trial-and-error procedure. To overcome the difficulties of the conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach and a novel constraint handling strategy is presented in this study. Since conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find the near Pareto-optimal front with higher accuracy when compared to the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is resolved in this algorithm.
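
The abstract does not spell out the feasibility-based ranking rule, but a common penalty-free formulation ranks feasible solutions ahead of infeasible ones, compares feasible solutions by the weighted-sum objective, and compares infeasible ones by their total constraint violation. A minimal sketch of that kind of rule, with made-up candidate designs:

```python
# Minimal sketch of a feasibility-based ranking rule for a weighted-sum PSO.
# The concrete rule used in the paper is not given in the abstract; this is one
# common penalty-free formulation: feasible beats infeasible, feasible solutions
# are compared by weighted-sum objective, infeasible ones by total violation.
import numpy as np

def total_violation(g_values):
    """Sum of positive parts of inequality constraints g_i(x) <= 0."""
    return float(np.sum(np.maximum(0.0, g_values)))

def ranking_key(objectives, g_values, weights):
    """Sort key: smaller tuples rank better (use with sorted())."""
    v = total_violation(g_values)
    if v == 0.0:                                  # feasible solution
        return (0, float(np.dot(weights, objectives)))
    return (1, v)                                 # infeasible: rank by violation

# Toy example: two objectives, one constraint, three candidate designs.
candidates = [
    {"f": np.array([1.0, 4.0]), "g": np.array([-0.2])},   # feasible
    {"f": np.array([0.5, 0.5]), "g": np.array([ 0.3])},   # infeasible
    {"f": np.array([2.0, 1.0]), "g": np.array([-0.1])},   # feasible
]
w = np.array([0.5, 0.5])
ranked = sorted(candidates, key=lambda c: ranking_key(c["f"], c["g"], w))
for c in ranked:
    print(c["f"], "violation =", total_violation(c["g"]))
```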

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 545
186 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm

Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu

Abstract:

Forecasting models have a great impact on prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on different forecasting methods based on fuzzy time series to solve forecasting problems. A forecasting model's accuracy depends mainly on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be an effective and efficient way to improve forecasts, rather than an individual forecasting model. There are different hybrid forecasting models which combine fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model which deals with first-order as well as high-order fuzzy time series and particle swarm optimization to improve forecasting accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. Firstly, we consider an automatic clustering algorithm to calculate the appropriate intervals for the historical enrollments. Then, particle swarm optimization and fuzzy time series are combined, which shows better forecasting accuracy than other existing forecasting models.
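
For context, the sketch below shows the mechanics of a basic first-order fuzzy time series forecast: partition the universe of discourse into intervals, fuzzify the observations, build fuzzy logical relationship groups, and defuzzify with interval midpoints. The equal-width partition and the toy data are stand-ins for the paper's automatic clustering and PSO tuning.

```python
# Minimal first-order fuzzy time series sketch (Chen-style). The paper tunes the
# interval partition with an automatic clustering algorithm and PSO; here the
# partition is simply equal-width, so this only illustrates the FTS mechanics.
import numpy as np

series = [13055, 13563, 13867, 14696, 15460, 15311,
          15603, 15861, 16807, 16919, 16388, 15433]   # enrollment-like toy data

n_intervals = 7
lo, hi = min(series) - 200, max(series) + 200
edges = np.linspace(lo, hi, n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

def fuzzify(x):
    """Index of the interval (fuzzy set A_i) that x falls into."""
    return int(np.clip(np.searchsorted(edges, x, side="right") - 1,
                       0, n_intervals - 1))

states = [fuzzify(x) for x in series]

# Build fuzzy logical relationship groups: A_i -> {A_j observed to follow A_i}.
flrg = {}
for cur, nxt in zip(states[:-1], states[1:]):
    flrg.setdefault(cur, set()).add(nxt)

def forecast(x):
    """Forecast the next value as the mean midpoint of the consequent fuzzy sets."""
    group = flrg.get(fuzzify(x))
    if not group:                      # unseen state: fall back to own midpoint
        return mids[fuzzify(x)]
    return float(np.mean([mids[j] for j in group]))

for x in series[:-1]:
    print(f"{x} -> forecast {forecast(x):.0f}")
```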

Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model

Procedia PDF Downloads 238
185 Nurse Metamorphosis: Lived Experience in the RN HEALS Program

Authors: Dennis Glen G. Ramos, Angelica S. Mendoza, Juliene Marie A. Alvarez, Claudette A. Nagal, Kayzee C. Blanza, Jayson M. Narbonita, John Anthony D. Dayot, Rebecca M. Reduca, Jermaine Jem M. Flojo, Michael E. Resultan, Clyde C. Fomocod, Cindy A. Vinluan, Jeffrie Aleona Mari C. Maclang

Abstract:

RN HEALS, an acronym for Registered Nurses for Health Enhancement and Local Service, is expected to address the shortage of skilled and experienced nurses in 1,221 rural and unserved or underserved communities for one year. The study explores the lived experiences of the nurses deployed under this program. The study is a descriptive qualitative research design. Interviews were utilized as the data gathering tool. Six community nurses deployed under the RN HEALS program were included in the study. The Van Kaam method was used for data management. Data gathering was done from October to December 2013. Two themes emerged in the study: Value and Challenge. Value had three sub-themes: Job Satisfaction, Upholding Competency (including Personal Development and Professional Growth), and Employability, while Challenge had one sub-theme, Job Stress. The study concludes that nurses adopt strategies to pursue personal and professional competence in an evolutionary journey. The researchers recommend that health administrators improve the work environment of nurses to lessen the challenges they experience.

Keywords: lived experience, RN HEALS, health enhancement, local service

Procedia PDF Downloads 498
184 Resource Leveling Optimization in Construction Projects of High Voltage Substations Using Nature-Inspired Intelligent Evolutionary Algorithms

Authors: Dimitrios Ntardas, Alexandros Tzanetos, Georgios Dounias

Abstract:

High Voltage Substations (HVS) are the intermediate step between the production of power and successfully transmitting it to clients, making them one of the most important checkpoints in power grids. Nowadays, as renewable resources and consequently distributed generation are growing fast, the construction of HVS is of high importance both in terms of quality and completion time, so that new energy producers can quickly and safely integrate into power grids. The resources needed, such as machines and workers, should be carefully allocated so that the construction of an HVS is completed on time, at the lowest possible cost (e.g., without additional costs, not taken into consideration initially, caused by project delays), and with the highest quality. In addition, there are milestones and several checkpoints to be precisely achieved during construction to ensure cost and timeline control and to ensure that the percentage of governmental funding will be granted. The management of such a demanding project is an NP-hard problem that consists of prerequisite constraints and resource limits for each task of the project. In this work, a hybrid meta-heuristic method is implemented to solve this problem. Meta-heuristics have been proven to be quite useful when dealing with high-dimensional constrained optimization problems, and hybridizing them boosts their performance.
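
As a small illustration of the kind of objective such a meta-heuristic would act on, the sketch below computes a daily resource profile for a toy substation task network and the sum-of-squares leveling cost. The task data and the simple earliest-start schedule are illustrative assumptions, not the paper's actual project model.

```python
# Minimal sketch of the resource-leveling objective for a small task network:
# given start times that respect finish-to-start precedence, compute the daily
# resource profile and the sum-of-squares leveling metric that a meta-heuristic
# would minimize. Task data and the earliest-start schedule are illustrative.
from collections import defaultdict

# task: (duration in days, crew size, list of predecessor tasks)
tasks = {
    "excavation":   (4, 3, []),
    "foundations":  (6, 5, ["excavation"]),
    "steel_frame":  (5, 4, ["foundations"]),
    "transformer":  (3, 6, ["foundations"]),
    "busbars":      (4, 2, ["steel_frame", "transformer"]),
}

def earliest_starts(tasks):
    """Earliest start times under finish-to-start precedence (simple forward pass)."""
    start = {}
    for name in tasks:            # the dict above is already in topological order
        pred_finishes = [start[p] + tasks[p][0] for p in tasks[name][2]]
        start[name] = max(pred_finishes, default=0)
    return start

def leveling_cost(starts, tasks):
    """Sum of squared daily resource usage: lower means a flatter profile."""
    usage = defaultdict(int)
    for name, (dur, crew, _) in tasks.items():
        for day in range(starts[name], starts[name] + dur):
            usage[day] += crew
    return sum(u * u for u in usage.values()), dict(usage)

starts = earliest_starts(tasks)
cost, profile = leveling_cost(starts, tasks)
print("start times:", starts)
print("daily crews:", profile)
print("leveling cost (sum of squares):", cost)
```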

Keywords: hybrid meta-heuristic methods, substation construction, resource allocation, time-cost efficiency

Procedia PDF Downloads 141
183 Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification

Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou

Abstract:

The ability to detect specific genes with ultrahigh sensitivity and to discriminate single nucleotide polymorphisms is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP) assisted rapid target capture/separation in combination with signal amplification using poly-enzyme tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfect-match target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin-neutravidin interaction. This converts each captured DNA target into an HRP, which can convert millions of copies of a non-fluorescent substrate (amplex red) to a highly fluorescent product (resorufin), for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier greatly improves the signal amplification power, leading to greatly improved sensitivity. We show our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of 5 orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach has high discrimination between a perfectly matched gene and its cancer-related single-base mismatch targets (SNPs): it can positively detect the perfect-match DNA target even in the presence of a 100-fold excess of co-existing SNPs. This sensing approach also works robustly in clinically relevant media (e.g., 10% human serum) and gives almost the same SNP discrimination ratio as that in clean buffers. Therefore, this ultrasensitive SNP biosensor appears to be well-suited for potential diagnostic applications of genetic diseases.

Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms

Procedia PDF Downloads 241
182 Microsatellite-Based Genetic Variations and Relationships among Some Farmed Nile Tilapia Populations in Ghana: Implications for Nile Tilapia Culture

Authors: Acheampong Addo, Emmanuel Odartei Armah, Seth Koranteng Agyakwah, Ruby Asmah, Emmanuel Tetteh-Doku Mensah, Rhoda Lims Diyie, Sena Amewu, Catherine Ragasa, Edward Kofi Abban, Mike Yaw Osei-Atweneboana

Abstract:

The study investigated genetic variation and relationships among populations of Nile tilapia cultured in small-scale fish farms in selected regions of Ghana. A total of 700 samples were collected. All samples were screened with five microsatellite markers, and the results were analyzed using Genetic Analysis in Excel, Molecular and Evolutionary Genetic Analysis software, and Genepop on the Web for heterozygosity and Shannon diversity, analysis of molecular variance, and principal coordinate analysis. Fish from the 16 populations (made up of 14 farms and 2 selectively bred populations) clustered into three groups: 7 populations clustered with the GIFT-derived strain, 4 populations clustered with the Akosombo strain, and 3 populations formed a separate cluster. The clustering pattern indicated groups of different strains of Nile tilapia being cultured. A Mantel correlation test also showed low genetic variation among the 16 populations, hence the need to boost seed quality in order to accelerate aquaculture production in Ghana.

Keywords: microsatellites, small-scale, Nile tilapia, Akosombo strain, GIFT strain

Procedia PDF Downloads 146
181 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNN), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP as a gradient descent algorithm has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show the proposed GA algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as possible through standard filter learning through BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
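
A minimal sketch of the idea, under a toy fitness function: a GA evolves a single 3x3 filter whose mean response separates two classes of synthetic striped images. The paper evaluates filters inside a CNN classifier; here the fitness is only a class-separation score, so the sketch illustrates the evolutionary loop rather than the full method.

```python
# Minimal sketch of evolving a single 3x3 convolutional filter with a GA.
# Fitness here is a toy class-separation score on synthetic striped images,
# not the CNN classification accuracy used in the paper.
import numpy as np

rng = np.random.default_rng(1)

def make_images(n=40, size=8):
    vert = np.zeros((n, size, size)); vert[:, :, ::2] = 1.0   # vertical stripes
    horz = np.zeros((n, size, size)); horz[:, ::2, :] = 1.0   # horizontal stripes
    X = np.concatenate([vert, horz]) + rng.normal(0, 0.1, (2 * n, size, size))
    y = np.array([0] * n + [1] * n)
    return X, y

def conv_mean(img, k):
    """Mean of a valid 3x3 convolution (correlation) response."""
    h, w = img.shape
    out = sum(img[i:h - 2 + i, j:w - 2 + j] * k[i, j]
              for i in range(3) for j in range(3))
    return out.mean()

def fitness(k, X, y):
    r = np.array([conv_mean(img, k) for img in X])
    m0, m1 = r[y == 0].mean(), r[y == 1].mean()
    return abs(m0 - m1) / (r.std() + 1e-9)       # separation of class means

X, y = make_images()
pop = rng.normal(0, 1, (30, 3, 3))               # population of 3x3 filters

for gen in range(50):
    scores = np.array([fitness(k, X, y) for k in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                    # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random((3, 3)) < 0.5          # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.1, (3, 3))  # mutation
        children.append(child)
    pop = np.concatenate([parents, np.array(children)])

best = pop[np.argmax([fitness(k, X, y) for k in pop])]
print("best filter:\n", np.round(best, 2))
```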

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 175
180 Artificial Nesting in Birds at UVAS-Ravi Campus: Punjab-Pakistan

Authors: Fatima Chaudhary, Rehan Ul Haq

Abstract:

Spatial and anthropogenic factors influencing nest-site selection in birds need to be identified for effective conservation practices. Environmental attributes such as food availability, predator density, previous reproductive success, etc., provide information regarding a site's quality. An artificial nest box experiment was carried out to evaluate the effect of various factors on nest-site selection, as it is hard to assess natural cavities. The experiment was conducted whereby half of the boxes were filled with old nest material. Artificial nest boxes made of different materials and of different sizes and colors were installed at different heights. A total of 14 out of 60 nest boxes were occupied, and four of them faced predation. The birds explored a total of 32 out of 60 nests, whereas anthropogenic factors destroyed 25 out of 60 nests. Birds chose empty nest boxes at higher rates; however, there was no obvious avoidance of sites with a high ectoparasite load due to old nest material. It is also possible that the preference for artificial nest boxes may differ from year to year because of several climatic factors and the age of the old nest material affecting the parasites' survival. These variables may fluctuate from one season to another. Considering these factors, nest-site selection experiments concerning the effectiveness of artificial nest boxes should be carried out over several successive seasons. This topic may stimulate further studies, which could lead to a full understanding of the birds' evolutionary ecology. Precise information on the factors influencing nest-site selection can also be essential from an economic point of view.

Keywords: artificial nesting, nest box, old nest material, birds

Procedia PDF Downloads 76
179 Nasopharyngeal Cancer in Children and Adolescents: Experience of Emir Abdelkader Cancer Center of Oran Algeria

Authors: Taleb L., Benarbia M., Brahmi M., Belmiloud H., Boukerche A.

Abstract:

Introduction and purpose of the study: Cavum cancer in children and adolescents is rare and represents 8% of all nasopharyngeal cancers treated in our department. Our objective is to study its epidemiological, clinical, therapeutic, and evolutionary particularities. Material and methods: Retrospective study of 39 patients under 20 years old, treated for undifferentiated non-metastatic carcinoma of the nasopharynx at the Emir Abdelkader Cancer Center between 2014 and 2020. Results and statistical analysis: The median age was 14 years [7-19 years], with a sex ratio of 2.9. The median time to diagnosis was 5.6 months [1 to 14 months], and the circumstances of discovery were dominated by lymph node syndrome in 43.6% of cases (n=17), followed by a rhinological syndrome in 30.8% of cases (n=13). The tumor stage was T1 for two patients (5.1%), T2 for 8 (20.5%), T3 for 9 (23.1%), T4 for 20 (51.3%), N0 for 2 (5.1%), N1 for 4 (10.3%), N2 for 28 (71.8%), and N3 for 5 (12.8%). All patients received induction chemotherapy followed by concomitant radiotherapy with cisplatin. The dose of irradiation delivered to the cavum and adenopathies was 66 Gy, with fractionation of 2 Gy per session in 69.2% of cases (n=27) and 1.8 Gy in 30.8% of cases (n=12). With a median follow-up of 51 months (15 to 97 months), the locoregional, metastatic, specific, and overall relapse-free survival rates at five years were 91.1%, 73.5%, 66.1%, and 68.4%, respectively. Conclusion: Chemotherapy and radiotherapy treatment of cavum cancer in children and adolescents has allowed excellent locoregional control despite the advanced stage of the disease. However, the frequency of metastatic relapses could justify the possible use of systemic maintenance treatment.

Keywords: cancer, nasopharynx, radiotherapy, chemotherapy, survival

Procedia PDF Downloads 101
178 Empirical Analysis of the Effect of Cloud Movement in a Basic Off-Grid Photovoltaic System: Case Study Using Transient Response of DC-DC Converters

Authors: Asowata Osamede, Christo Pienaar, Johan Bekker

Abstract:

Mismatch in electrical energy (power) supply or outages from commercial providers generally do not promote development in the public and private sectors; they basically limit the development of industries. A well-structured photovoltaic (PV) system is therefore important for an efficient and cost-effective monitoring system. The major renewable energy potential on earth is provided by solar radiation, and solar photovoltaics (PV) are considered a promising technological solution to support the global transformation to a low-carbon economy and to reduce dependence on fossil fuels. Solar arrays, which consist of various PV modules, should be operated at the maximum power point in order to reduce the overall cost of the system, so power regulation and conditioning circuits should be incorporated in the set-up of a PV system. Power regulation circuits used in PV systems include maximum power point trackers (MPPT), DC-DC converters and solar chargers. An inappropriate choice of power conditioning device in a basic off-grid PV system can contribute to power loss; hence choosing the right power conditioning device to couple with the system is essential. This paper presents the design and implementation of power conditioning devices in order to improve the overall yield from the available solar energy and the system's total efficiency. The power conditioning devices taken into consideration in the project include Buck and Boost DC-DC converters as well as solar chargers with MPPT. A logging interface circuit (LIC) is designed and employed in the system. The LIC is designed on a printed circuit board. It basically has DC current signalling sensors, specifically the LTS 6-NP. The LIC is consequently required to program the voltages in the system (these include the PV voltage and the power conditioning device voltage). The voltage is structured in such a way that it can be accommodated by the data logger. Preliminary results, which include the availability of power as well as power loss in the system and efficiency, will be presented, and these will be used to draw the final conclusion.
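
As background for the MPPT solar chargers mentioned above, here is a minimal perturb-and-observe tracking loop on a made-up PV curve; the PV model, perturbation step, and starting point are illustrative assumptions rather than measurements from the paper.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch, included only as background for
# the MPPT solar chargers mentioned above. The simplified PV curve and the
# perturbation step are illustrative assumptions, not data from the paper.
import numpy as np

def pv_current(v, isc=5.0, voc=21.0):
    """Toy PV I-V curve: roughly constant current that collapses near Voc."""
    return np.clip(isc * (1 - np.exp((v - voc) / 1.5)), 0.0, None)

def p_and_o(v0=12.0, step=0.1, iters=200):
    v, p_prev = v0, v0 * pv_current(v0)
    direction = +1
    for _ in range(iters):
        v += direction * step            # perturb the operating voltage
        p = v * pv_current(v)
        if p < p_prev:                   # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = p_and_o()
print(f"P&O settles near V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```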

Keywords: tilt and orientation angles, solar chargers, PV panels, storage devices, direct solar radiation

Procedia PDF Downloads 123
177 Non-Dominated Sorting Genetic Algorithm (NSGA-II) for the Redistricting Problem in Mexico

Authors: Antonin Ponsich, Eric Alfredo Rincon Garcia, Roman Anselmo Mora Gutierrez, Miguel Angel Gutierrez Andrade, Sergio Gerardo De Los Cobos Silva, Pedro Lara Velzquez

Abstract:

The electoral zone design problem consists of redrawing the boundaries of legislative districts for electoral purposes in such a way that federal or state requirements are fulfilled. In Mexico, this process has historically been carried out by the National Electoral Institute (INE) by optimizing an integer nonlinear programming model, in which population equality and compactness of the designed districts are considered as two conflicting objective functions, while contiguity is included as a hard constraint. The solution technique used by the INE is a Simulated Annealing (SA) based algorithm, which handles the multi-objective nature of the problem through an aggregation function. The present work represents the first attempt to apply a classical Multi-Objective Evolutionary Algorithm (MOEA), the second version of the Non-dominated Sorting Genetic Algorithm (NSGA-II), to this hard combinatorial problem. First results show that, when compared with the SA algorithm, the NSGA-II obtains promising results. The MOEA manages to produce well-distributed solutions over a wide-spread front, even though convergence troubles on some instances constitute an issue which should be corrected in future adaptations of MOEAs to the redistricting problem.
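
The core of NSGA-II is fast non-dominated sorting. A minimal sketch on toy (population-deviation, compactness-penalty) objective pairs, both to be minimized, is shown below; the crowding-distance calculation and the genetic operators needed for the full redistricting algorithm are omitted.

```python
# Minimal sketch of the fast non-dominated sorting step at the core of NSGA-II,
# applied to toy (population-equality, compactness) objective pairs; both
# objectives are to be minimized.
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def fast_non_dominated_sort(F):
    n = len(F)
    S = [[] for _ in range(n)]        # solutions dominated by i
    n_dom = np.zeros(n, dtype=int)    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(F[i], F[j]):
                S[i].append(j)
            elif dominates(F[j], F[i]):
                n_dom[i] += 1
        if n_dom[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Toy district plans: (population deviation, compactness penalty), both minimized.
objs = np.array([[0.02, 0.8], [0.05, 0.4], [0.01, 0.9],
                 [0.04, 0.5], [0.03, 0.3], [0.06, 0.7]])
for rank, front in enumerate(fast_non_dominated_sort(objs)):
    print(f"front {rank}: {[tuple(objs[i]) for i in front]}")
```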

Keywords: multi-objective optimization, NSGA-II, redistricting, zone design problem

Procedia PDF Downloads 356
176 2106 kA/cm² Peak Tunneling Current Density in GaN-Based Resonant Tunneling Diode with an Intrinsic Oscillation Frequency of ~260GHz at Room Temperature

Authors: Fang Liu, JunShuai Xue, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun

Abstract:

Terahertz spectra have been in great demand for the last two decades for many photonic and electronic applications. The III-Nitride resonant tunneling diode is one of the promising candidates for portable and compact THz sources. A room temperature microwave oscillator based on a GaN/AlN resonant tunneling diode (RTD) is reported in this work. The devices, grown by plasma-assisted molecular-beam epitaxy on free-standing c-plane GaN substrates, exhibit highly repeatable and robust negative differential resistance (NDR) characteristics at room temperature. To improve the interface quality in the active region of the RTD, indium surfactant assisted growth is adopted to enhance the surface mobility of metal atoms on the growing film front. Thanks to the lowered valley current associated with the suppression of threading dislocation scattering on the low-dislocation GaN substrate, a positive peak current density of a record-high 2.1 MA/cm2 in conjunction with a peak-to-valley current ratio (PVCR) of 1.2 is obtained, which is the best result reported for nitride-based RTDs up to now when the peak current density and PVCR values are considered simultaneously. When biased within the NDR region, microwave oscillations are measured with a fundamental frequency of 0.31 GHz, yielding an output power of 5.37 µW. Impedance mismatch results in the limited output power and oscillation frequency described above. The actual measured intrinsic capacitance is only 30 fF. Using a small-signal equivalent circuit model, the maximum intrinsic frequency of oscillation for these diodes is estimated to be ~260 GHz. This work demonstrates a microwave oscillator based on the resonant tunneling effect, which can meet the demands of terahertz spectral devices and, more importantly, provides guidance for the fabrication of complex nitride terahertz and quantum effect devices.

Keywords: GaN resonant tunneling diode, peak current density, microwave oscillation, intrinsic capacitance

Procedia PDF Downloads 120
175 Climate Change Adaptation in the U.S. Coastal Zone: Data, Policy, and Moving Away from Moral Hazard

Authors: Thomas Ruppert, Shana Jones, J. Scott Pippin

Abstract:

State and federal government agencies within the United States have recently invested substantial resources into studies of future flood risk conditions associated with climate change and sea-level rise. A review of numerous case studies has uncovered several key themes that speak to an overall incoherence within current flood risk assessment procedures in the U.S. context. First, there are substantial local differences in the quality of available information about basic infrastructure, particularly with regard to local stormwater features and essential facilities that are fundamental components of effective flood hazard planning and mitigation. Second, there can be substantial mismatch between regulatory Flood Insurance Rate Maps (FIRMs) as produced by the National Flood Insurance Program (NFIP) and other 'current condition' flood assessment approaches. This is of particular concern in areas where FIRMs already seem to underestimate extant flood risk, which can only be expected to become a greater concern if future FIRMs do not appropriately account for changing climate conditions. Moreover, while there are incentives within the NFIP’s Community Rating System (CRS) to develop enhanced assessments that include future flood risk projections from climate change, the incentive structures seem to have counterintuitive implications that would tend to promote moral hazard. In particular, a technical finding of higher future risk seems to make it easier for a community to qualify for flood insurance savings, with much of these prospective savings applied to individual properties that have the most physical risk of flooding. However, there is at least some case study evidence to indicate that recognition of these issues is prompting broader discussion about the need to move beyond FIRMs as a standalone local flood planning standard. The paper concludes with approaches for developing climate adaptation and flood resilience strategies in the U.S. that move away from the social welfare model being applied through NFIP and toward more of an informed risk approach that transfers much of the investment responsibility over to individual private property owners.

Keywords: climate change adaptation, flood risk, moral hazard, sea-level rise

Procedia PDF Downloads 94
174 Eco-Innovation: Perspectives from a Theoretical Approach and Policy Analysis

Authors: Natasha Hazarika, Xiaoling Zhang

Abstract:

Eco-innovations, unlike regular innovations, are not self-enforcing and are associated with the double externality problem. Therefore, it is emphasized that eco-innovations need government intervention in the form of supportive policies as a priority. Of late, factors like consumer demand, technological advancement, as well as the competitiveness of firms have been considered equally important. However, the interaction among these driving forces has not been fully traced out. Also, the theory on eco-innovation is found to be at a nascent stage, which does not resonate with its dynamics, as it is traditionally studied under neo-classical economics theory. Therefore, to begin with, insights for this research have been derived from the merits of 'neo-classical economics', the 'evolutionary approach', and the 'resource based view', which revealed the issues pertaining to technological system lock-ins and firm-based capacities that usually remain undefined by the neo-classical approach. This would be followed by determining how policies (at the national level) and their instruments are designed in order to motivate firms to eco-innovate, by analyzing the innovation 'friendliness' of the policy style and the policy instruments as per the indicators provided in the innovation literature, by means of a document review (content analysis) of the relevant policies introduced by the Chinese government. The significance of the theoretical analysis lies in its ability to show why certain practices become dominant irrespective of gains or losses, and that of the policy analysis lies in its ability to demonstrate the credibility of the government's sticks, carrots and sermons for eco-innovation.

Keywords: firm competency, eco-innovation, policy, theory

Procedia PDF Downloads 171
173 A Semiotic Approach to Vulnerability in Conducting Gesture and Singing Posture

Authors: Johann Van Niekerk

Abstract:

The disciplines of conducting (instrumental or choral) and of singing presume a willingness toward an open posture and, in many cases, demand it for effective communication and technique. Yet, this very openness, with the "spread-eagle" gesture as an extreme, is oftentimes counterintuitive for musicians and within the trajectory of human evolution. Conversely, it is in this very gesture of "taking up space" that confidence-gaining techniques such as the popular "power pose" are based. This paper consists primarily of a literature review, exploring the topics of physical openness and vulnerability, considering the semiotics of the "spread-eagle" and its accompanying letter X. A major finding of this research is the discrepancy between evolutionary instinct towards physical self-protection and “folding in” and the demands of the discipline of physical and gestural openness, expansiveness and vulnerability. A secondary finding is ways in which encouragement of confidence-gaining techniques may be more effective in obtaining the required results than insistence on vulnerability, which is influenced by various cultural contexts and socialization. Choral conductors and music educators are constantly seeking ways to promote engagement and healthy singing. Much of the information and direction toward this goal is gleaned by students from conducting gestures and other pedagogies employed in the rehearsal. The findings of this research provide yet another avenue toward reaching the goals required for sufficient and effective teaching and artistry on the part of instructors and students alike.

Keywords: conducting, gesture, music, pedagogy, posture, vulnerability

Procedia PDF Downloads 67
172 Kirigami Designs for Enhancing the Electromechanical Performance of E-Textiles

Authors: Braden M. Li, Inhwan Kim, Jesse S. Jur

Abstract:

One of the fundamental challenges in the electronic textile (e-textile) industry is the mismatch in compliance between rigid electronic components and the soft textile platforms onto which they are integrated. To address these problems, various printing technologies using conductive inks have been explored in an effort to improve the electromechanical performance without sacrificing the innate properties of the printed textile. However, current printing methods deposit densely layered coatings onto textile surfaces with low through-plane wetting, resulting in poor electromechanical properties. This work presents an inkjet printing technique in conjunction with unique Kirigami cut designs to address these issues for printed smart textiles. By utilizing particle-free reactive silver inks, our inkjet process produces conformal, micron-thick silver coatings that surround the individual fibers of the printed smart textile. This results in a highly conductive (0.63 Ω sq⁻¹) printed e-textile while also maintaining the innate properties of the textile material, including stretchability, flexibility, breathability and fabric hand. Kirigami is the Japanese art of paper cutting. By utilizing periodic cut designs, Kirigami imparts enhanced flexibility and delocalization of stress concentrations. Kirigami cut design parameters (i.e., cut spacing and length) were correlated to both the mechanical and electromechanical properties of the printed textiles. We demonstrate that designs using a higher cut-out ratio exponentially soften the textile substrate. Thus, our designs achieve a 30x improvement in overall stretchability, a 1000x decrease in elastic modulus, and minimal resistance change over strain regimes of 100-200% when compared to uncut designs. We also show minimal resistance change of our Kirigami-inspired printed devices after being stretched to 100% for 1000 cycles. Lastly, we demonstrate a Kirigami-inspired electrocardiogram (ECG) monitoring system that improves stretchability without sacrificing signal acquisition performance. Overall, this study suggests fundamental parameters affecting the performance of e-textiles and their scalability in the wearable technology industry.

Keywords: kirigami, inkjet printing, flexible electronics, reactive silver ink

Procedia PDF Downloads 130
171 From Sound to Music: The Trajectory of Musical Semiotics in a Selected Soundscape Environment in South-Western Nigeria

Authors: Olatunbosun Samuel Adekogbe

Abstract:

This paper addresses the question of musical signification, revolving around nature and its natural divides; the paper tends to examine the roles of the dispositional apparatus of listeners to react to sounding environments through music as coordinated sound that focuses on the powerful strain between vibrational occurrences of sound and potentials of being structured. This paper sets out to examine music as a simple conventional design that does not allude to something beyond music and sound as a vehicle to communicate through production, perception, translation, and reaction with regard to melodic and semiotic functions of sounds. This paper adopts the application of questionnaire and evolutionary approach methods to probe musical adaptation, reproduction, and natural selection as the basis for explaining specific human behavioural responses to musical sense-making beyond the above-sketched dichotomies, with a major focus on the transition from acoustic-emotional sensibilities to musical meaning in the selected soundscapes. It was observed that music has emancipated itself from the level of mere acoustic processing of sounds to a functional description in terms of allowing music users to share experiences and interact with the soundscaping environment. The paper, therefore, concludes that the audience as music participants and listeners in the selected soundscapes have been conceived as adaptive devices in the paradigm shift, which can build up new semiotic linkages with the sounding environments in southwestern Nigeria.

Keywords: semiotics, sound, music, soundscape, environment

Procedia PDF Downloads 53
170 Optimization of a Hand-Fan Shaped Microstrip Patch Antenna by Means of Orthogonal Design Method of Design of Experiments for L-Band and S-Band Applications

Authors: Jaswinder Kaur, Nitika, Navneet Kaur, Rajesh Khanna

Abstract:

A hand-fan shaped microstrip patch antenna (MPA) for L-band and S-band applications is designed, and its characteristics have been investigated. The proposed microstrip patch antenna with a double U-slot defected ground structure (DGS) is fabricated on an FR4 substrate, which is a readily available and inexpensive material. The suggested antenna is optimized using the Orthogonal Design Method (ODM) of Design of Experiments (DOE) to cover the frequency range from 0.91-2.82 GHz for L-band and S-band applications. The L-band covers the frequency range of 1-2 GHz, which is allocated to telemetry, aeronautical, and military systems for passive satellite sensors, weather radars, radio astronomy, and mobile communication. The S-band covers the frequency range of 2-3 GHz, which is used by weather radars, surface ship radars and communication satellites and is also reserved for various wireless applications such as Worldwide Interoperability for Microwave Access (Wi-MAX), super high frequency radio frequency identification (SHF RFID), industrial, scientific and medical (ISM) bands, Bluetooth, wireless broadband (Wi-Bro) and wireless local area network (WLAN). The proposed optimization method is very time-efficient and accurate compared to conventional evolutionary algorithms due to its statistical strategy. Moreover, the antenna is tested, followed by a comparison of the simulated and measured results.
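
A minimal sketch of the orthogonal-design workflow, assuming a made-up response function in place of a full-wave electromagnetic simulation: evaluate an L9(3^4) orthogonal array of factor-level combinations and pick each factor's best level from its main-effect means. The factor names and bounds are hypothetical, not the paper's actual design variables.

```python
# Minimal sketch of the orthogonal design (Taguchi-style) workflow: evaluate an
# L9(3^4) orthogonal array of factor-level combinations, then pick each factor's
# best level from main-effect means. The "response" below is a made-up surrogate
# for bandwidth, not an electromagnetic simulation of the hand-fan patch.
import numpy as np

# L9 orthogonal array: 9 runs, 4 factors, 3 levels each (coded 0, 1, 2).
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])

# Hypothetical factor levels: patch radius, slot length, slot width, feed offset (mm).
levels = {0: [14, 16, 18], 1: [6, 8, 10], 2: [1.0, 1.5, 2.0], 3: [2, 4, 6]}

def response(run):
    """Made-up bandwidth surrogate (GHz) standing in for a full-wave solver."""
    r, ls, ws, fo = (levels[f][lvl] for f, lvl in enumerate(run))
    return 1.9 - 0.02 * (r - 16) ** 2 - 0.03 * (ls - 8) ** 2 \
               + 0.1 * ws - 0.01 * (fo - 4) ** 2

y = np.array([response(run) for run in L9])

# Main effect of each factor = mean response at each of its levels.
for f in range(4):
    means = [y[L9[:, f] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(means))
    print(f"factor {f}: level means {np.round(means, 3)} -> pick level {best}")
```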

Keywords: design of experiments, hand fan shaped MPA, L-Band, orthogonal design method, S-Band

Procedia PDF Downloads 121
169 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. For conventional adaptive array beamforming, we require prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used to appropriately find the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
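
The steered-beam, minimum-output-power formulation described above corresponds to the Capon/MVDR weights w = R^{-1} a / (a^H R^{-1} a). The sketch below computes these baseline weights for a uniform linear array with a small pointing error, which is exactly the sensitivity the paper sets out to mitigate; the paper's robust direction re-estimation itself is not reproduced, and the geometry, angles, and signal levels are illustrative assumptions.

```python
# Minimal sketch of the steered-beam, minimum-output-power (Capon/MVDR) weights
# that the abstract takes as its starting point: w = R^{-1} a / (a^H R^{-1} a).
# Array geometry, angles, and signal levels below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 8, 500                       # ULA sensors, data snapshots

def steering(theta_deg, m=M, spacing=0.5):
    """ULA steering vector for half-wavelength element spacing."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Simulated snapshots: desired signal at 0 deg, interferer at 40 deg, plus noise.
a_sig, a_int = steering(0.0), steering(40.0)
s = (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) / np.sqrt(2)
intf = 3 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(a_sig, s) + np.outer(a_int, intf) + noise

R = X @ X.conj().T / snapshots              # sample covariance matrix

# Presumed steering direction with a small pointing error (e.g., 2 deg).
a_presumed = steering(2.0)
Rinv_a = np.linalg.solve(R, a_presumed)
w = Rinv_a / (a_presumed.conj() @ Rinv_a)   # MVDR weights under the constraint

pattern = lambda th: abs(w.conj() @ steering(th)) ** 2
print("gain toward 0 deg :", round(10 * np.log10(pattern(0.0)), 1), "dB")
print("gain toward 40 deg:", round(10 * np.log10(pattern(40.0)), 1), "dB")
```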

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 114
168 Parental Diet Effects on Offspring Body Size and Pathogen Resistance in Bactrocera tryoni

Authors: Hue Dinh, Binh Nguyen, Vivian Mendez, Phillip W. Taylor, Fleur Ponton

Abstract:

A better understanding of how parental diet affects offspring traits is an important ecological and evolutionary question. In this study, we explored how maternal diet influences offspring physiology and resistance to infection using Bactrocera tryoni (Q-fly) as a model system. Female Q-flies were fed one of six single diets varying in their yeast-to-sugar ratio, yielding six protein-to-carbohydrate ratios. As controls, we used females that were given a choice between yeast and sugar. Males were reared on a choice diet and allowed to mate with females 14 days post-emergence. Results showed that while maternal diet does not influence offspring developmental time, it has a strong effect on larval body weight. Mothers fed either a high-protein or a high-sugar diet produced larger progeny. By challenging offspring with the bacterium Serratia marcescens, we found that female offspring from mothers fed a high-sugar diet survived the infection better than those from mothers fed a low-sugar diet. In contrast, male offspring produced by mothers fed a high-protein diet showed better resistance to the infection than those produced by mothers fed a low-protein diet. These results suggest sex-dependent transgenerational effects of maternal nutrition on offspring physiology and immunity.

Keywords: bacterial infection, Bactrocera tryoni, maternal diet, offspring, Serratia marcescens

Procedia PDF Downloads 129
167 Optimal Design of Linear Generator to Recharge the Smartphone Battery

Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha

Abstract:

Due to the development of the information industry and its technologies, cellular phones must not only provide communication but also functions such as Internet access, e-banking, entertainment, etc. Such phones are called smartphones. The performance of smartphones has improved, and because of their various functions, the capacity of the battery has been increased gradually. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and a change in the array of permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thickness of the poleshoe and permanent magnet (PM), the height of the poleshoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array according to the four design variables. We performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. Then, we built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm. The commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The result of the optimization shows that the generated induced voltage is improved.
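
A minimal sketch of the surrogate-based loop described above: sample designs, fit a Kriging (Gaussian process) model, and search the surrogate with an evolutionary algorithm. A made-up analytic "voltage" stands in for the ANSYS Maxwell finite element runs, random sampling stands in for the orthogonal array, and scikit-learn/SciPy stand in for the PIAnO toolchain; the variable bounds are illustrative assumptions.

```python
# Minimal sketch of the surrogate-based loop: sample designs, fit a Kriging model
# (Gaussian process), then maximize the predicted induced voltage with an
# evolutionary algorithm. The analytic "voltage" below stands in for the FEA runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Design variables (all in mm, bounds are illustrative):
# poleshoe thickness, PM thickness, poleshoe/PM height, coil thickness.
bounds = [(1.0, 4.0), (2.0, 6.0), (5.0, 15.0), (0.5, 3.0)]

def simulated_voltage(x):
    """Made-up smooth response standing in for an electromagnetic FEA run."""
    t_ps, t_pm, h, t_coil = x
    return (2.0 * t_pm - 0.3 * (t_pm - 4.5) ** 2 + 0.8 * np.sqrt(h)
            - 0.5 * (t_ps - 2.5) ** 2 + 1.5 * t_coil - 0.4 * t_coil ** 2)

# 25 training designs (the paper uses an orthogonal array; random sampling here).
X_train = np.array([[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(25)])
y_train = np.array([simulated_voltage(x) for x in X_train])

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[1.0] * 4),
    normalize_y=True)
kriging.fit(X_train, y_train)

# Evolutionary search on the surrogate (minimize the negative predicted voltage).
result = differential_evolution(
    lambda x: -kriging.predict(x.reshape(1, -1))[0], bounds, seed=0)

print("optimal design (mm):", np.round(result.x, 2))
print("predicted voltage  :", round(-result.fun, 3))
print("true toy voltage   :", round(simulated_voltage(result.x), 3))
```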

Keywords: smartphone, linear generator, design of experiment, approximate model, optimal design

Procedia PDF Downloads 337
166 Geometric Morphometric Analysis of Allometric Variation in the Hand Morphology of Adults

Authors: Aleksandr S. Ermolenko

Abstract:

Allometry is an important factor in morphological integration, contributing to the organization of the phenotype and its variability. Allometric change in the shape of the hand is particularly important in primate evolution, as the hand carries important taxonomic features. Some of these features are known to be related to hand shape, especially the ratio of the lengths of the index and ring fingers (2d:4d ratio). The hand is a fairly well-studied system in the context of the evolutionary development of complex morphological structures, since it consists of various parts (basipodium, metapodium, acropodium) that form a single structure, the autopodium. In the present study, we examined the allometric variability of the acropodium. We tested the null hypothesis that there would be no difference in allometric variation between the two components. Geometric morphometrics based on a Procrustes superimposition of 16 two-dimensional (2D) landmarks was analyzed using multivariate shape-by-size regressions in a sample of 100 people (50 men and 50 women). The results obtained show that men have significantly greater allometric variability for the ring finger (variability along the transverse axis prevails), while women have significantly greater allometric variability for the index finger (variability along the longitudinal axis prevails). The influence of the middle finger on the shape of the hand is typical for both men and women. No influence of the little finger on the shape of the hand was revealed, regardless of sex. The results of this study support the hypothesis that allometry contributes to the organization of variation in the human hand.
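
A minimal sketch of the allometry workflow, assuming random stand-in landmark data rather than the study's 16-landmark hand dataset: superimpose the 2D configurations (here each is aligned to a single reference, approximating a full generalized Procrustes analysis) and regress the aligned shape coordinates on log centroid size.

```python
# Minimal sketch of the allometry workflow: superimpose 2D landmark
# configurations (Procrustes), then regress the aligned shape coordinates on
# centroid size. Landmark data below are random stand-ins, not the study's data.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
n_specimens, n_landmarks = 100, 16

# Synthetic specimens: a template plus a size-dependent shape shift and noise.
template = rng.normal(0, 1, (n_landmarks, 2))
allometric_shift = rng.normal(0, 1, (n_landmarks, 2))   # direction of shape change
sizes = rng.uniform(15, 22, n_specimens)                # hand lengths (cm)
configs = [(template + 0.01 * (s - 18) * allometric_shift) * s
           + 0.2 * rng.normal(0, 1, (n_landmarks, 2))
           for s in sizes]

centroid_size = np.array([np.sqrt(((c - c.mean(0)) ** 2).sum()) for c in configs])

# Superimpose every configuration onto the first one (reference).
reference = configs[0]
aligned = np.array([procrustes(reference, c)[1].ravel() for c in configs])

# Multivariate regression of shape on log centroid size (allometry).
x = np.log(centroid_size)
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, aligned, rcond=None)
pred = X @ coef
ss_res = ((aligned - pred) ** 2).sum()
ss_tot = ((aligned - aligned.mean(0)) ** 2).sum()
print(f"shape variance explained by size: {100 * (1 - ss_res / ss_tot):.1f}%")
```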

Keywords: human hand, size and shape, 2d:4d ratio, geometric morphometry

Procedia PDF Downloads 144
165 A Postcolonial View Analysis on the Structural Rationalism Influence in Indonesian Modern Architecture

Authors: Ryadi Adityavarman

Abstract:

The study analyzes, through a postcolonial theoretical lens, the search for a distinctive architectural identity by the architect Maclaine Pont in Indonesia in the early twentieth century. Influenced by the progressive architectural thinking and enlightened humanism of the time, Pont applied the fundamental principles of Structural Rationalism through a creative combination of traditional Indonesian architectural typology and innovative structural application. His interpretive design strategy also celebrated the creative use of local building materials together with a sensible response to the tropical climate. Moreover, his holistic architectural scheme, including the incorporation of local building construction customs, represents the notion of the Gesamtkunstwerk. By using such a hybrid strategy, Maclaine Pont intended to preserve the essential cultural identity and vernacular architecture of the indigenous population. The study chronologically investigates the evolution of the Structural Rationalism philosophy of architecture from Viollet-le-Duc to Hendrik Berlage's influential design thinking in Dutch modern architecture, and subsequently to Maclaine Pont's innovative design in Indonesia. Consequently, a morphological analysis of his exemplary design works, the ITB campus (1923) and Pohsarang Church (1936), is carried out to understand the evolutionary influence of Structural Rationalism theory. The postcolonial analysis method highlights the validity of Pont's ideas in contemporary Indonesian architecture within the era of globalism.

Keywords: Indonesian modern architecture, postcolonial, structural rationalism, critical regionalism

Procedia PDF Downloads 319
164 Structural Analysis of Phase Transformation and Particle Formation in Metastable Metallic Thin Films Grown by Plasma-Enhanced Atomic Layer Deposition

Authors: Pouyan Motamedi, Ken Bosnick, Ken Cadien, James Hogan

Abstract:

Growth of conformal ultrathin metal films has attracted a considerable amount of attention recently. Plasma-enhanced atomic layer deposition (PEALD) is a method capable of growing conformal thin films at low temperatures, with an exemplary control over thickness. The authors have recently reported on growth of metastable epitaxial nickel thin films via PEALD, along with a comprehensive characterization of the films and a study on the relationship between the growth parameters and the film characteristics. The goal of the current study is to use the mentioned films as a case study to investigate the temperature-activated phase transformation and agglomeration in ultrathin metallic films. For this purpose, metastable hexagonal nickel thin films were annealed using a controlled heating/cooling apparatus. The transformations in the crystal structure were observed via in-situ synchrotron x-ray diffraction. The samples were annealed to various temperatures in the range of 400-1100° C. The onset and progression of particle formation were studied in-situ via laser measurements. In addition, a four-point probe measurement tool was used to record the changes in the resistivity of the films, which is affected by phase transformation, as well as roughening and agglomeration. Thin films annealed at various temperature steps were then studied via atomic force microscopy, scanning electron microscopy and high-resolution transmission electron microscopy, in order to get a better understanding of the correlated mechanisms, through which phase transformation and particle formation occur. The results indicate that the onset of hcp-to-bcc transformation is at 400°C, while particle formations commences at 590° C. If the annealed films are quenched after transformation, but prior to agglomeration, they show a noticeable drop in resistivity. This can be attributed to the fact that the hcp films are grown epitaxially, and are under severe tensile strain, and annealing leads to relaxation of the mismatch strain. In general, the results shed light on the nature of structural transformation in nickel thin films, as well as metallic thin films, in general.

Keywords: atomic layer deposition, metastable, nickel, phase transformation, thin film

Procedia PDF Downloads 319
163 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy

Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides

Abstract:

This paper reports an explorative research study into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and, as such, help address the skills mismatch troubling our industries. Technologies such as 3D printing, the Internet of Things, Virtual Reality, and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Also, the instruments in current educational research seem inappropriate for assessing the value of such technologies. This explorative research aims to foster an approach to better deal with this new complexity. The need to find out is urgent, because these technologies will be dominantly present in the near future in all aspects of life, including education. The methodology used in this research comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or about to be used, in technical universities. The inventory was based on both a literature review and a review of reports and web resources such as blogs, and included a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for the use of, in this case, Virtual Reality and the Internet of Things, to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide on the value of these technologies for education due to their dynamic state of change, and therefore unpredictability, and the lack of a coherent policy at the institutions. Most decisions are being made by teachers on an individual basis, who in their micro-environment are not equipped to select, test and ultimately decide on the use of these technologies. Most experience is being gained in industry, which knows that the skills to handle these technologies are in high demand. The industry, though, is worried about the inclination and the capability of education to help bridge the skills gap related to the emergence of new technologies. Due to the complexity, the diversity, the speed of development and the decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to fully profit from the opportunities these technologies offer, it is imperative to develop a pro-active strategy and a sustainable approach to frame the development of emerging technologies.

Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality

Procedia PDF Downloads 179