Search results for: current Omani architecture type
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16052

15032 Detecting Port Maritime Communities in Spain with Complex Network Analysis

Authors: Nicanor Garcia Alvarez, Belarmino Adenso-Diaz, Laura Calzada Infante

Abstract:

In recent years, researchers have shown an interest in modelling maritime traffic as a complex network. In this paper, we propose a bipartite weighted network to model maritime traffic and detect port maritime communities. The bipartite weighted network considers two different types of nodes. The first one represents Spanish ports, while the second one represents the countries with which there is major import/export activity. The flow between the two types of nodes is modeled by weighting the volume of product transported. To illustrate the model, the data are segmented by type of traffic. This allows fine-tuning and the creation of communities for each type of traffic, and therefore the identification of similar ports for a specific type of traffic, providing decision-makers with tools to search for alliances or identify their competitors. The traffic with the greatest impact on the Spanish gross domestic product is selected, and the evolution of the communities formed by the most important ports and their differences between 2009 and 2019 will be analyzed. Finally, the set of communities formed by the ports of the Spanish port system will be inspected to determine global similarities between them, analyzing the sum of the membership of the different ports in communities formed for each type of traffic in particular.
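
To make the construction concrete, the sketch below builds a bipartite weighted port-country network of the kind described. The port names and tonnages are invented, and NetworkX's greedy modularity routine stands in for the Infomap algorithm used in the paper; it is an illustration of the data structure, not the authors' code.

```python
# Minimal sketch (not the authors' code): bipartite weighted port-country network
# with weighted community detection. The paper applies Infomap; greedy modularity
# is used here as a stand-in so the example stays self-contained.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical tonnage flows: (Spanish port, partner country, tons moved)
flows = [
    ("Algeciras", "Morocco", 120_000),
    ("Valencia", "China", 95_000),
    ("Barcelona", "China", 80_000),
    ("Bilbao", "United Kingdom", 60_000),
    ("Algeciras", "China", 40_000),
]

G = nx.Graph()
for port, country, tons in flows:
    G.add_node(port, bipartite=0)      # first node type: Spanish ports
    G.add_node(country, bipartite=1)   # second node type: partner countries
    G.add_edge(port, country, weight=tons)

# Weighted community detection: each community groups ports with similar trade partners
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(community)}")
```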

Keywords: bipartite networks, competition, infomap, maritime traffic, port communities

Procedia PDF Downloads 133
15031 Unit Root Tests Based On the Robust Estimator

Authors: Wararit Panichkitkosolkul

Abstract:

The unit root tests based on the robust estimator for the first-order autoregressive process are proposed and compared with the unit root tests based on the ordinary least squares (OLS) estimator. The percentiles of the null distributions of the unit root tests are also reported. The empirical probabilities of Type I error and powers of the unit root tests are estimated via Monte Carlo simulation. Simulation results show that all unit root tests can control the probability of Type I error for all situations. The empirical power of the unit root tests based on the robust estimator is higher than that of the unit root tests based on the OLS estimator.
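
The following sketch illustrates the kind of Monte Carlo comparison described above. The paper does not specify its robust estimator here, so the Theil-Sen slope is used as one illustrative robust choice; the sample size, replication count, and alternative value of the autoregressive coefficient are arbitrary.

```python
# Monte Carlo sketch of an OLS vs. robust unit root test comparison (illustrative only).
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
n, reps = 100, 2000

def ar1(rho, n):
    """Simulate a first-order autoregressive series started at zero."""
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def stats(y):
    """Return n*(rho_hat - 1) for the OLS and the robust estimator."""
    x, z = y[:-1], y[1:]
    rho_ols = np.sum(x * z) / np.sum(x * x)   # OLS slope through the origin
    rho_rob = theilslopes(z, x)[0]            # Theil-Sen slope as a robust stand-in
    return len(z) * (rho_ols - 1), len(z) * (rho_rob - 1)

# Null distribution (rho = 1) gives the 5% critical values for a lower-tail test
null = np.array([stats(ar1(1.0, n)) for _ in range(reps)])
crit_ols, crit_rob = np.percentile(null, 5, axis=0)

# Empirical power under a stationary alternative, e.g. rho = 0.9
alt = np.array([stats(ar1(0.9, n)) for _ in range(reps)])
print(f"power OLS = {np.mean(alt[:, 0] < crit_ols):.2f}, "
      f"power robust = {np.mean(alt[:, 1] < crit_rob):.2f}")
```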

Keywords: autoregressive, ordinary least squares, type I error, power of the test, Monte Carlo simulation

Procedia PDF Downloads 271
15030 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) Filter. Mid-range Karatsuba multiplication and a Carry Save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap Transpose and Parallel Symmetric FIR Filter Structure using the Karatsuba algorithm. The mathematical formulation of the FFA Filter is derived. The proposed architecture involves a significantly lower area delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using the Cadence EDA tool. The synthesized result shows better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated with and without retiming for different combinations of the circuit. The proposed structure achieves more than half of the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power by applying the retiming technique, compared to the previously proposed design.
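
For readers unfamiliar with the recursion being exploited, the following is a minimal software sketch of Karatsuba multiplication; the paper implements the same idea in hardware at the bit level, which this example does not attempt to reproduce.

```python
# Sketch of the Karatsuba recursion: three sub-multiplications instead of four,
# giving O(n^1.585) complexity for n-bit operands.
def karatsuba(x: int, y: int) -> int:
    if x < 16 or y < 16:                 # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = divmod(x, 1 << m)
    hi_y, lo_y = divmod(y, 1 << m)
    a = karatsuba(hi_x, hi_y)            # product of high parts
    c = karatsuba(lo_x, lo_y)            # product of low parts
    b = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - c   # cross terms, one extra multiply
    return (a << (2 * m)) + (b << m) + c

assert karatsuba(1234567, 7654321) == 1234567 * 7654321
```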

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 217
15029 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities

Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores

Abstract:

The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group in society that presents serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), are distinguished within this cluster. This population group is in constant growth, and they have specific requirements for their inhabited space. From the perspective of architecture, which is one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve their autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. For this, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on the applications of AI to built space, following the foundations of the critical review. On the other hand, as an unconventional approach to architectural research, an experimental analysis is proposed that draws on people with AD as a source of data to study how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD, although, given the comfort and wellness that the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to, through this emerging technology, build an 'exo-brain' capable of becoming a personal assistant for the inhabitants, with whom to interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.

Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design

Procedia PDF Downloads 40
15028 Effects of Computer-Mediated Dictionaries on Reading Comprehension and Vocabulary Acquisition

Authors: Mohamed Amin Mekheimer

Abstract:

This study aimed to investigate the effects of paper-based monolingual, pop-up and type-in electronic dictionaries on improving reading comprehension and incidental vocabulary acquisition and retention in an EFL context. It tapped into how computer-mediated dictionaries may have facilitated or impeded reading comprehension and vocabulary acquisition. Findings showed differential effects produced by the three treatments compared with the control group. Specifically, the pop-up dictionary condition had the shortest average vocabulary searching time and vocabulary and text reading time, with fewer frequent dictionary 'look-ups' than the type-in dictionary group but more than the book dictionary group (p<.0001). In addition, ANOVA analyses showed that text reading time differed significantly across all four treatments, and so did reading comprehension. Vocabulary acquisition was enhanced in the three treatments relative to the control group, albeit with insignificant differences across the three treatments, and with more differential effects in favour of the pop-up condition. Data also assert that participants preferred the pop-up e-dictionary over the type-in and paper-based dictionaries. Explanations of the findings vis-à-vis cognitive load theory were presented. Pedagogical implications and suggestions for further research were put forward at the end.

Keywords: computer-mediated dictionaries, type-in dictionaries, pop-up dictionaries, reading comprehension, vocabulary acquisition

Procedia PDF Downloads 414
15027 Bio-Inspired Design Approach Analysis: A Case Study of Antoni Gaudi and Santiago Calatrava

Authors: Marzieh Imani

Abstract:

Antoni Gaudi and Santiago Calatrava have a reputation for designing bio-inspired, creative and technical buildings. Even though they have followed different, independent approaches towards design, the source of bio-inspiration seems to be common. Taking a closer look at their projects reveals that Calatrava has been influenced by Gaudi in terms of interpreting nature and applying natural principles to the design process. This research firstly discusses the dialogue between biomimicry and architecture. The review also explores the human/nature discourse throughout history by focusing on how nature revealed itself to the fine arts. This is explained by introducing naturalism and the romantic style in architecture as the outcome of designers' inclination towards nature. The literature review covers both the theoretical background and the practical illustration of nature. The most dominant practical aspects of imitating nature are form and function. Nature has been reflected in architectural science, resulting in different architectural styles such as organic, green, sustainable, bionic, and biomorphic. By defining a set of common aspects of Gaudi's and Calatrava's design approaches and by considering biomimetic design categories (organism, ecosystem, and behaviour as the main divisions and form, function, process, material, and construction as subdivisions), Gaudi's and Calatrava's projects have been analysed. This analysis explores whether their design approaches are equivalent or different. Based on this analysis, Gaudi's architecture can be recognised as biomorphic, while Calatrava's projects are literally biomimetic. Referring to these architects, this review suggests a new set of principles by which a bio-inspired project can be determined to be either biomorphic or biomimetic.

Keywords: biomimicry, Calatrava, Gaudi, nature

Procedia PDF Downloads 269
15026 A Systematic Review of Process Research in Software Engineering

Authors: Tulasi Rayasa, Phani Kumar Pullela

Abstract:

A systematic review is a research method that involves collecting and evaluating the information on a specific topic in order to provide a comprehensive and unbiased review. This type of review aims to improve the software development process by ensuring that the research is thorough and accurate. To ensure objectivity, it is important to follow systematic guidelines and consider multiple sources, such as literature reviews, interviews, and surveys. The evaluation process should also be streamlined by incorporating research from journals and other sources, such as grey literature. The main goal of a systematic review is to identify the consistency of current models in the field of computer application and software engineering.

Keywords: computer application, software engineering, process research, data science

Procedia PDF Downloads 77
15025 The Effectiveness of High-Frequency Repetitive Transcranial Magnetic Stimulation in Persistent Somatic Symptoms Disorder: A Case Report Study

Authors: Mohammed Khamis Albalushi

Abstract:

Background: Somatic symptom disorders are usually comorbid with depressive disorders, despite the fact that there is little evidence for effective treatment for them. Repetitive transcranial magnetic stimulation (rTMS) has been approved by the FDA for mildly resistant depression. From this point, we hypothesized that rTMS delivered over the prefrontal cortex (PFC) may be useful in somatic symptom disorder. Therefore, in our case report, we want to shed light on the potential effectiveness of rTMS in somatic symptom disorder. Case Report: A 65-year-old Omani female with multiple medical comorbidities on multiple medications presented complaining of multiple somatic complaints over the last 2 years, after visiting multiple clinics and undergoing several specialist examinations, investigations and procedures for somatic treatments, all of which were normal. The patient was then seen at a different psychiatric clinic; multiple antidepressant and adjuvant antipsychotic medications were tried, but the patient still did not improve. The patient was admitted to the hospital for observation and management. Initially, she was preoccupied with her somatic complaints and kept on fluoxetine and olanzapine; along with these, topiramate was added, but still with minimal improvement. Then rTMS was added to her management plan, following the intermittent theta burst (iTBS) rTMS protocol. After completing all sessions of rTMS, the patient had recovered from all her symptoms, and no complaints were reported by her. Conclusion: Our case highlights the importance of investigating rTMS more thoroughly as a treatment option for persistent somatic symptom disorder.

Keywords: rTMS, somatic symptoms disorder, resistive cases, TMS

Procedia PDF Downloads 47
15024 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud

Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani

Abstract:

In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable 'sticky' policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers used to process the data in the Cloud. This not only enhances the privacy-awareness of the developed Cloud services but also results in major savings in performance and energy efficiency, since the privacy mechanisms are applied solely to sensitive data units and not to all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.
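
As a purely hypothetical illustration of the idea, a sticky policy could be declared per data-bound GUI component along the following lines; the component names, privacy classes, and container labels below are invented for the sketch and are not taken from the paper or from the Azure implementation.

```python
# Hypothetical sketch: a 'sticky' privacy policy attached to a GUI data-bound
# component at development time, consulted when the component's content is stored.
from dataclasses import dataclass

@dataclass
class StickyPolicy:
    privacy_class: str      # e.g. "public", "confidential", "restricted" (illustrative)
    encrypt_at_rest: bool   # whether stored content must be encrypted
    container: str          # type of execution container allowed to process it

# Policies declared per data-bound GUI component during development
policies = {
    "txtCustomerName": StickyPolicy("confidential", True, "isolated-vm"),
    "txtComments":     StickyPolicy("public", False, "shared"),
}

def enforce(component_id: str, value: str) -> str:
    """Apply the privacy mechanism only to components tagged as sensitive."""
    policy = policies[component_id]
    if policy.encrypt_at_rest:
        return f"ENC[{value}]"   # placeholder for a real encryption call
    return value

print(enforce("txtCustomerName", "Jane Doe"))   # encrypted
print(enforce("txtComments", "great service"))  # stored as-is
```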

Keywords: privacy enforcement, platform-as-a-service privacy awareness, cloud computing privacy

Procedia PDF Downloads 204
15023 Analysis of Autoantibodies to the S-100 Protein, NMDA, and Dopamine Receptors in Children with Type 1 Diabetes Mellitus

Authors: Yuri V. Bykov, V. A. Baturin

Abstract:

Aim of the study: The aim of the study was to perform a comparative analysis of the levels of autoantibodies (AAB) to the S-100 protein as well as to the dopamine and NMDA receptors in children with type 1 diabetes mellitus (DM) in therapeutic remission. Materials and methods: Blood serum obtained from 42 children aged 4 to 17 years (20 boys and 22 girls) was analyzed. Twenty-one of these children had a diagnosis of type 1 DM and were in therapeutic remission (study group). The mean duration of disease in children with type 1 DM was 9.6±0.36 years. Children without DM were included in a group of "apparently healthy children" (21 children, comparison group). AAB to the S-100 protein and to the dopamine and NMDA receptors were measured by ELISA. The normal range of IgG AAB was specified as up to 10 µg/mL. In order to compare the central parameters of the groups, the following parametric and non-parametric methods were used: Student's t-test or the Mann-Whitney U test. The level of significance for inter-group comparisons was set at p<0.05. Results: The mean levels of AAB to the S-100B protein were significantly higher (p=0.0045) in children with DM (16.84±1.54 µg/mL) when compared with the "apparently healthy children" (2.09±0.05 µg/mL). These elevated mean levels of AAB to the S-100B protein may indicate damage to brain tissue in children with type 1 DM. The mean levels of AAB to NMDA receptors were also higher in patients with type 1 DM compared with the "apparently healthy children," at 13.16±2.07 µg/mL and 1.304±0.05 µg/mL, respectively (p=0.0021). The detected elevated levels of AAB to NMDA receptors may indicate that, in children with type 1 DM, there is a change in the activity of the glutamatergic system, which in turn suggests the presence of excitotoxicity. The mean levels of AAB to dopamine receptors were likewise higher (p=0.0082) in patients comprising the study group than in the children of the comparison group (40.47±2.31 µg/mL and 3.91±0.09 µg/mL). These elevated levels of AAB to dopamine receptors suggest an altered activity of the dopaminergic system in children with DM, which can also be viewed as indirect evidence of altered activity of the brain's glutamatergic system. A difference was also detected between the mean values of the measured AABs depending on the duration of the disease: mean AAB values were significantly higher in patients whose disease had lasted more than five years. Conclusions: The elevated mean levels of AAB to the S-100B protein may indicate damage to brain tissue in the setting of excitotoxicity in children with type 1 DM. The discovered elevation of the levels of AAB to NMDA and dopamine receptors may indicate the activation of the glutamatergic and dopaminergic systems. The observed abnormalities indicate the presence of central nervous system damage in children with type 1 DM, with a tendency towards the elevation of the levels of the studied AABs with disease progression.

Keywords: autoantibodies, brain damage, children, diabetes mellitus

Procedia PDF Downloads 76
15022 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids

Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout

Abstract:

Graphene emerges as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages of using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar and N2) treatment. The plasma-treated multilayer graphene shows an enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report the field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 µA/cm2 is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states are dumped in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 (4.8 V/µm) nanosheets. The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS2/RGO composites stem from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials emerge as potential candidates for future field emission applications.
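
For readers unfamiliar with how the field enhancement factor β is obtained from a Fowler-Nordheim plot, the sketch below fits the slope of ln(J/E²) versus 1/E and converts it to β using the standard FN constant. The J-E points and the work function value are placeholders, not data from this study.

```python
# Sketch of beta extraction from a Fowler-Nordheim plot (illustrative data only).
import numpy as np

phi = 5.0       # assumed work function (eV); not taken from the paper
B = 6.83e3      # Fowler-Nordheim constant in V um^-1 eV^-3/2

E = np.array([2.0, 2.5, 3.0, 3.5, 4.0])         # applied field (V/um), illustrative
J = np.array([1e-7, 2e-6, 2e-5, 1e-4, 5e-4])    # current density (A/cm^2), illustrative

# FN coordinates: ln(J/E^2) vs 1/E is linear with slope = -B * phi^1.5 / beta
x = 1.0 / E
y = np.log(J / E**2)
slope, intercept = np.polyfit(x, y, 1)

beta = -B * phi**1.5 / slope
print(f"field enhancement factor beta ~ {beta:.0f}")
```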

Keywords: graphene, layered material, field emission, plasma, doping

Procedia PDF Downloads 346
15021 The Scanning Vibrating Electrode Technique (SVET) as a Tool for Optimising a Printed Ni(OH)2 Electrode under Charge Conditions

Authors: C. F. Glover, J. Marinaccio, A. Barnes, I. Mabbett, G. Williams

Abstract:

The aim of the current study is to optimise formulations, in terms of charging efficiency, of a printed Ni(OH)2 precursor coating of a battery anode. Through the assessment of the current densities during charging, the efficiencies of a range of formulations are compared. The scanning vibrating electrode technique (SVET) is used extensively in the field of corrosion to measure area-averaged current densities of freely corroding metal surfaces when fully immersed in electrolyte. Here, a Ni(OH)2 electrode is immersed in potassium hydroxide (30% w/v solution) electrolyte and charged using a range of applied currents. Samples are prepared whereby multiple coatings are applied to one substrate, separated by a non-conducting barrier, and charged using a constant current. With a known applied external current, electrode efficiencies can be calculated based on the current density outputs measured using SVET. When fully charged, the green Ni(OH)2 is oxidised to a black NiOOH surface. Distinct regions displaying high current density, and hence a faster oxidation reaction rate, are located using the SVET. This is confirmed by a darkening of the region upon transition to NiOOH. SVET is a highly effective tool for assessing the homogeneity of electrodes during charge/discharge. This could prove particularly useful for electrodes where there are no visible changes in surface appearance. Furthermore, a scanning Kelvin probe technique, traditionally used to assess underfilm delamination of organic coatings for the protection of metallic surfaces, is employed to study the change in phase of the oxides pre- and post-charging.

Keywords: battery, electrode, nickel hydroxide, SVET, printed

Procedia PDF Downloads 219
15020 An Investigation of E-Government by Using GIS and Establishing E-Government in Developing Countries Case Study: Iraq

Authors: Ahmed M. Jamel

Abstract:

Electronic government initiatives and public participation in them are among the indicators of today's development criteria for countries. After two consecutive wars, Iraq's current position in, for example, the UN's e-government ranking is quite concerning and has not improved in recent years, either. In the preparation of this work, we were motivated by the fact that handling geographic data on public facilities and resources is needed in most e-government projects. Geographical information systems (GIS) provide the most common tools not only to manage spatial data but also to integrate such data with the non-spatial attributes of the features. With this background, this paper proposes that establishing a working GIS in the health sector of Iraq would improve e-government applications. As the case study, the investigation of hospital locations in Erbil is chosen.

Keywords: e-government, GIS, Iraq, Erbil

Procedia PDF Downloads 370
15019 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may perform to those availability requirements only during normal operations and not during component failures or outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined. A reinforcement-learning agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other and a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks.
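
A much-simplified sketch of the kind of degradation sweep described above is given below: functions are mapped to nodes of a candidate architecture, nodes are knocked out at random, and functional availability is scored per degradation level. The graph model, gateway node, and function count are invented; the paper's simulator and its reinforcement-learning attacker are not reproduced here.

```python
# Illustrative degradation sweep over a stand-in distributed architecture (not the
# authors' simulator): score the fraction of functions still reachable as nodes fail.
import random
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(20, 0.2, seed=1)           # stand-in node/link architecture
functions = {f"f{i}": random.choice(list(G)) for i in range(5)}  # function -> hosting node
gateway = 0                                          # node representing the consumer side

def functional_availability(graph):
    """Fraction of functions whose hosting node can still reach the gateway."""
    up = [f for f, n in functions.items()
          if n in graph and gateway in graph and nx.has_path(graph, n, gateway)]
    return len(up) / len(functions)

for k in range(0, 12, 2):                            # degrade 0..10 nodes
    trials = []
    for _ in range(200):
        g = G.copy()
        g.remove_nodes_from(random.sample(list(G.nodes), k))
        trials.append(functional_availability(g))
    print(f"{k:2d} nodes lost -> mean availability {sum(trials)/len(trials):.2f}")
```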

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 78
15018 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating computing paradigm shift in computer technology, resulting in spectacular expansion in the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource sharing architecture. Cloud computing has opened doors to unlimited opportunities, from application to platform availability, expandable storage and provision of computing environments. However, from a security viewpoint, an added level of risk is introduced by clouds, weakening the protection mechanisms and making privacy, data security and on-demand service harder to guarantee. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource sharing architecture of the cloud. Trust or reliability of the cloud refers to its capability of providing the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure authorization of the relevant party to access its private data. It also guarantees integrity to protect the data from being fabricated by an unauthorized user. So, in order to assure the provision of a secure cloud, a roadmap or model is needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies, and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. This model uses a smaller number of features. A comparison with other classifiers is also presented.
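
A minimal sketch of the classification stage named above follows. It is not the authors' pipeline: the feature matrix is synthetic stand-in data, whereas the study would use labelled intrusion records and its own feature selection.

```python
# Random Forest intrusion classifier sketch on synthetic "flow" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled traffic: ~10% of samples marked as attacks
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```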

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 304
15017 A Three-Step Iterative Process for Common Fixed Points of Three Contractive-Like Operators

Authors: Safeer Hussain Khan, H. Fukhar-ud-Din

Abstract:

The concept of quasi-contractive type operators was given by Berinde and extended by Imoru and Olatinwo, who named this new type contractive-like operators. On the other hand, Xu and Noor introduced a three-step one-mapping iterative process, which can be seen as a generalization of the Mann and Ishikawa iterative processes. Approximating common fixed points has its own importance, as it has a direct link with minimization problems. Motivated by this, in this paper, we first extend the iterative process of Xu and Noor to the case of three steps and three mappings and then prove a strong convergence result using contractive-like operators for this iterative process. In general, this generalizes the corresponding results using the Mann, Ishikawa and Xu-Noor iterative processes with quasi-contractive type operators. It is to be pointed out that our results can also be proved with iterative processes involving error terms.
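
For orientation, one common form of a Noor-type three-step scheme for three mappings T_1, T_2, T_3 is written out below; this is a sketch of the general shape only, and the exact scheme and parameter conditions used in the paper may differ.

```latex
% A Noor-type three-step scheme for three mappings (generic form, for orientation):
\begin{aligned}
z_n     &= (1-\gamma_n)\,x_n + \gamma_n\,T_3 x_n,\\
y_n     &= (1-\beta_n)\,x_n + \beta_n\,T_2 z_n,\\
x_{n+1} &= (1-\alpha_n)\,x_n + \alpha_n\,T_1 y_n,
\end{aligned}
\qquad \alpha_n,\ \beta_n,\ \gamma_n \in [0,1].
```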

Keywords: contractive-like operator, iterative process, common fixed point, strong convergence

Procedia PDF Downloads 573
15016 A 3D Quantum Numerical Simulation Study of Performance in the HEMT

Authors: A. Boursali, A. Guen-Bouazza

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of DC, AC and high-frequency regimes, as shown in this paper. This work demonstrates an optimal device with a gate length of 15 nm, an InAlN/GaN heterostructure and a field plate structure, making it superior to modern HEMTs when compared with otherwise equivalent devices and improving its ability to bear the current density passing through the channel. We have demonstrated an excellent current density as high as 2.05 A/m, a peak extrinsic transconductance of 0.59 S/m at VDS=2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field-plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, a leakage current IFuite = 1 x 10^-26 A, a DIBL of 33.52 mV/V and an ON/OFF current ratio higher than 1 x 10^10. These values were determined through simulation using genetic and Monte Carlo algorithms that optimize the design, pointing to the future of this technology.

Keywords: HEMT, silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 332
15015 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces

Authors: Shweta Singh, Sudaman Katti

Abstract:

The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for the continuous action spaces used in this research work. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance when compared to the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
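
The sketch below shows, in outline, how a transformer encoder can be run over a memory of past observations to produce a continuous action of the kind that could sit inside a PPO agent. It is illustrative only: the dimensions, layer counts, and action head are arbitrary choices and do not reproduce the authors' model or their Unity tasks.

```python
# Illustrative memory-based policy head: self-attention over past observations.
import torch
import torch.nn as nn

class MemoryPolicy(nn.Module):
    def __init__(self, obs_dim=16, d_model=64, n_heads=4, n_layers=2, act_dim=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_mean = nn.Linear(d_model, act_dim)   # continuous action head

    def forward(self, obs_seq):                  # obs_seq: (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_seq))    # self-attention over the memory
        return torch.tanh(self.action_mean(h[:, -1]))   # act on the latest step

policy = MemoryPolicy()
actions = policy(torch.randn(8, 32, 16))         # batch of 8, memory of 32 steps
print(actions.shape)                             # torch.Size([8, 2])
```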

Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity

Procedia PDF Downloads 105
15014 Beneficial Effect of Chromium Supplementation on Glucose, HbA1C and Lipid Variables in Individuals with Newly Onset Type-2 Diabetes

Authors: Baljinder Singh, Navneet Sharma

Abstract:

Chromium is an essential nutrient involved in normal carbohydrate and lipid metabolism. It influences glucose metabolism by potentiating the action of insulin, taking part in the insulin signal amplification mechanism. A placebo-controlled, single-blind, prospective study was carried out to investigate the effect of chromium supplementation on blood glucose, HbA1C and lipid profile in newly onset patients with type-2 diabetes. A total of 40 patients with newly onset type-2 diabetes were selected and, after one month of stabilization, randomly divided into two groups, viz. a study group and a placebo group. The study group received 9 g brewer's yeast (42 µg Cr) daily and the placebo group received yeast devoid of chromium for 3 months. Subjects were instructed not to change their normal eating and living habits. Fasting blood glucose, HbA1C and lipid profile were analyzed at the beginning and completion of the study. Results revealed that the fasting blood glucose level was significantly reduced in the subjects consuming yeast supplemented with chromium (197.65±6.68 to 103.68±6.64 mg/dl; p<0.001). HbA1C values improved significantly from 9.51±0.26% to 6.86±0.28% (p<0.001), indicating better glycaemic control. In the experimental group, total cholesterol, TG and LDL levels were also significantly reduced from 199.66±3.11 to 189.26±3.01 mg/dl (p<0.02), 144.94±8.31 to 126.01±8.26 (p<0.05) and 119.19±1.71 to 99.58±1.10 (p<0.001), respectively. These data demonstrate a beneficial effect of chromium supplementation on glycaemic control and lipid variables in subjects with newly onset type-2 diabetes.

Keywords: type-2 diabetes, chromium, glucose, HbA1C

Procedia PDF Downloads 222
15013 Producing Carbon Nanoparticles from Agricultural and Municipal Wastes

Authors: Kanik Sharma

Abstract:

In 2011, the global production of carbon nano-materials (CNMs) was around 3,500 tons, and it is projected to expand at a compound annual growth rate of 30.6%. Expanding markets for applications of CNMs, such as carbon nano-tubes (CNTs) and carbon nano-fibers (CNFs), place ever-increasing demands on lowering their production costs. Current technologies for CNM generation require intensive consumption of premium feedstocks and employ costly catalysts; they also require input of external energy. Industrial-scale CNM production is conventionally achieved through chemical vapor deposition (CVD) methods, which consume a variety of expensive premium chemical feedstocks such as ethylene, carbon monoxide (CO) and hydrogen (H2), or by flame synthesis techniques, which also consume premium feedstock fuels. Additionally, CVD methods are energy-intensive. Renewable and replenishable feedstocks, such as those found in municipal, industrial and agricultural recycling streams, are a more judicious choice in light of current emerging needs for sustainability. Agricultural sugarcane bagasse and corn residues, scrap tire chips, as well as post-consumer polyethylene (PE) and polyethylene terephthalate (PET) bottle shreddings, when thermally treated either by pyrolysis alone or by sequential pyrolysis and partial oxidation, yield gaseous carbon-bearing effluents which, when channeled into a heated reactor, produce CNMs, including carbon nano-tubes, catalytically synthesized therein on stainless steel meshes. The structure of the nano-material synthesized depends on the type of feedstock available for pyrolysis and can be determined by analysing the feedstock. These feedstocks could supersede the use of costly and often toxic or highly flammable chemicals such as hydrocarbon gases, carbon monoxide and hydrogen, which are commonly used as feedstocks in current nano-manufacturing processes for CNMs.

Keywords: nanomaterials, waste plastics, sugarcane bagasse, pyrolysis

Procedia PDF Downloads 212
15012 Managerial Encouragement, Organizational Encouragement, and Resource Sufficiency and Its Effect on Creativity as Perceived by Architects in Metro Manila

Authors: Ferdinand de la Paz

Abstract:

In highly creative environments such as the business of architecture, business models focus mostly on the traditional practice of mainstream design consultancy services as mandated and constrained by existing legislation. Architectural design firms, as business units belonging to the creative industries, have long been provoked to innovate not only in terms of their creative outputs but, more significantly, in the way they create and capture value from what they do. In the Philippines, there is still a dearth of studies exploring organizational creativity within the context of architectural firm practice, let alone across other creative industries. The study sought to determine the effects, measure the extent, and assess the relationships of managerial encouragement, organizational encouragement, and resource sufficiency on creativity as perceived by architects. A survey questionnaire was used to gather data from 100 respondents. The analysis was done using descriptive statistics and correlational and causal-explanatory methods. The findings reveal that there is a weak positive relationship between Managerial Encouragement (ME), Organizational Encouragement (OE), and Sufficient Resources (SR) toward Creativity (C). The study also revealed that while Organizational Encouragement and Sufficient Resources have significant effects on Creativity, Managerial Encouragement does not. It is recommended that future studies with a larger sample size be pursued among architects holding top management positions in architectural design firms to further validate the findings of this research. It is also highly recommended that the other stimulant scales in the KEYS framework be considered in future studies covering other locales to generate a better understanding of the architecture business landscape in the Philippines.

Keywords: managerial encouragement, organizational encouragement, resource sufficiency, organizational creativity, architecture firm practice, creative industries

Procedia PDF Downloads 72
15011 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits

Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea

Abstract:

Hybrid capacitor configurations are now of increasing interest to overcome the current energy limitations of supercapacitors based entirely on non-Faradaic charge storage. Among them, Li-ion capacitors, which include a negative battery-type lithium intercalation electrode and a positive capacitor-type electrode, have achieved tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost remains a major criterion in making supercapacitors a more viable energy solution, with electrode materials being a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are currently under development as a less expensive and more accessible alternative to Li-ion based battery electrodes. In this work, we present both a lithium and a sodium ion capacitor (LIC & NIC) entirely based on electrodes prepared from carbon materials derived from recycled olive pits. Yearly, around 1 million tons of olive pit waste is generated worldwide, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low specific surface area semigraphitic hard carbon to be used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs Na/Na+ in 1M NaPF6 and 350 mAh/g vs Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high specific surface area (about 2000 m2/g) activated carbon that is further used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs. Na/Na+ and 95 mAh/g vs. Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC. For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and NIC demonstrate considerable improvements in energy density over their EDLC counterpart: the NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg of active material and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg, while the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device overcoming the limited energy density of the corresponding double layer capacitors.

Keywords: hybrid supercapacitor, Na-Ion capacitor, supercapacitor, Li-Ion capacitor, EDLC

Procedia PDF Downloads 181
15010 Functional Instruction Set Simulator (ISS) of a Neural Network (NN) IP with Native BF-16 Generator

Authors: Debajyoti Mukherjee, Arathy B. S., Arpita Sahu, Saranga P. Pogula

Abstract:

A Functional Model that mimics the functional correctness of a Neural Network Compute Accelerator IP is crucial for design validation. Neural network workloads are based on the Brain Floating Point (BF-16) data type. The major challenge we faced was the incompatibility of gcc compilers with the BF-16 datatype, which we addressed with a native BF-16 generator integrated into our functional model. Moreover, working with big GEMM (General Matrix Multiplication) or SpMM (Sparse Matrix Multiplication) workloads (dense or sparse) and debugging failures related to data integrity is highly painstaking. In this paper, we address the quality challenge of such a complex Neural Network Accelerator design by proposing a Functional Model-based scoreboard, or software model, using SystemC. The proposed Functional Model executes the assembly code based on the ISA of the processor IP, decodes all instructions, and executes them as the DUT is expected to. The said model would give a lot of visibility and debug capability in the DUT by bringing up micro-steps of execution.
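
To make the BF-16 data type concrete, the sketch below converts IEEE-754 float32 values to bfloat16 bit patterns by keeping the upper 16 bits with round-to-nearest-even, which is the usual bfloat16 conversion. It is an illustration in Python, not the authors' SystemC generator.

```python
# Illustrative bfloat16 conversion: truncate float32 to its top 16 bits with rounding.
import numpy as np

def float32_to_bf16_bits(x: np.ndarray) -> np.ndarray:
    bits = x.astype(np.float32).view(np.uint32)
    lsb = (bits >> 16) & 1                      # low bit of the part we keep
    rounding_bias = np.uint32(0x7FFF) + lsb     # round to nearest, ties to even
    return ((bits + rounding_bias) >> 16).astype(np.uint16)

def bf16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, 3.14159, -0.00123], dtype=np.float32)
bf = float32_to_bf16_bits(x)
print(bf16_bits_to_float32(bf))   # values rounded to bfloat16 precision
```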

Keywords: ISA (instruction set architecture), NN (neural network), TLM (transaction-level modeling), GEMM (general matrix multiplication)

Procedia PDF Downloads 62
15009 Histogenesis of the Stomach of Pre-Hatching Quail: A Light and Electron Microscopic Study

Authors: Soha A Soliman, Yasser A Ahmed, Mohamed A Khalaf

Abstract:

Although there is an enormous literature describing the histology of the stomach of different avian species during post-hatching development, the available literature on the pre-hatching development of the quail stomach is scanty. Thus, the current study was undertaken to provide a careful description of the main histological events during the embryonic development of the quail stomach. To achieve this aim, daily histological specimens from the stomach of quail, from day 4 post-incubation to day 17 (a few hours before hatching), were examined with light microscopy. The current study showed that the primitive gut tube of the embryonic quail appeared at the 4th day post-incubation, and both parts of the stomach (proventriculus and gizzard) were similar in structure, composed of endodermal epithelium of pseudostratified type surrounded by undifferentiated mesenchymal tissue. The sequence of developmental events in the gut tube proceeded in a cranio-caudal pattern. By the 5th day, the endodermal covering of the primitive proventriculus gave rise to sac-like invaginations. The primitive gizzard was distinguished into thick-walled bodies and thin-walled sacs. On the 6th day, the prospective proventricular glandular epithelium became canalized and the muscular layer developed in the cranial part of the proventriculus, whereas the primitive muscular coat of the gizzard was represented by a layer of condensed mesenchyme. On the 7th day, the proventricular glandular epithelial invaginations increased in depth and number, while the muscularis mucosa and the muscular layer began to be distinguished. On the 8th day, the myoblasts differentiated into spindle-shaped smooth muscle fibers. On the 10th day, branching of the proventricular glands began; the branching continued later on. The surface and the glandular epithelium were transformed into the simple columnar type on the 12th day. The epithelial covering of the gizzard gave rise to tubular invaginations lined by simple cuboidal epithelium, and the surface epithelium became simple columnar. Canalization of the tubular glands was recognized on the 14th day. On the 15th day, the proventricular surface epithelium invaginated in a concentric manner around a central cavity to form immature secretory units. The central cavity was lined by eosinophilic cells, which form the ductal epithelium. The peripheral lamellae were lined by basophilic cells, the undifferentiated oxyntico-peptic cells. Entero-endocrine cells stained positive with silver impregnation in the proventricular glands. The mucosal folding in the gizzard appeared on the 15th day to form the plicae and the sulci. The wall of the proventriculus and gizzard on the 17th day acquired the main histological features of post-hatching birds, but neither the surface nor the ductal epithelium was differentiated into mucous-producing cells. The current results should be considered in molecular developmental studies.

Keywords: quail, proventriculus, gizzard, pre-hatching, histology

Procedia PDF Downloads 598
15008 Analysis of Kilistra (Gokyurt) Settlement within the Context of Traditional Residential Architecture

Authors: Esra Yaldız, Tugba Bulbul Bahtiyar, Dicle Aydın

Abstract:

Humans meet their need for shelter via housing, which they structure in line with their habits and necessities. In housing culture, the traditional dwelling has an important role as a social and cultural transmitter. It provides concrete data by being planned in parallel with users' lifestyles and habits, having its own dynamics and components, as well as designs in harmony with nature, the environment and the context in which it exists. Textures of traditional dwellings create a healthy and cozy living environment by means of adaptation to natural conditions, topography, climate, and context; utilization of construction materials found nearby and usage of traditional techniques and forms; and the natural insulation of the construction materials used. One of the examples of traditional settlements in Anatolia is the Kilistra (Gökyurt) settlement of Konya province. Being among the important centers of Christianity in the past, besides having distinctive architecture, culture, natural features, and geographical differences (climate, geological structure, material), Kilistra can also be identified as a traditional settlement consisting of family, religious and economic structures as well as cultural interaction. The foundation of this study is the traditional residential texture of Kilistra with its unique features. The objective of this study is to assess the conformity of the traditional residential texture of Kilistra with the present topography, climatic data, and geographical values within the context of human-scale construction, usage of green space, indigenous construction materials, construction form, building envelope, and space organization in housing.

Keywords: traditional residential architecture, Kilistra, Anatolia, Konya

Procedia PDF Downloads 386
15007 Flow Conservation Framework for Monitoring Software Defined Networks

Authors: Jesús Antonio Puente Fernández, Luis Javier Garcia Villalba

Abstract:

New trends in video streaming, such as series and films, place a high demand on network resources. This results in a huge problem within traditional IP networks due to the rigidity of their architecture. Software Defined Networking (SDN) is a new network architecture concept that intends to be more flexible and simplifies network management with respect to existing networks. These aspects are possible due to the separation of the control plane (controller) and the data plane (switches). Taking advantage of this separation of control, it is easy to deploy a monitoring tool independent of device vendors, since existing tools depend on the installation of specialized and expensive hardware. In this paper, we propose a framework that optimizes traffic monitoring in SDN networks by decreasing the number of monitoring queries, improving network traffic and reducing the overload. The experiments performed (with and without the optimization), using video streaming delivery between two hosts, demonstrate the feasibility of our monitoring proposal.
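
As a generic illustration of how monitoring queries can be reduced, the sketch below lengthens the polling interval for flows whose byte counters barely change and shortens it when they do. This is not the paper's optimization; the thresholds and interval bounds are arbitrary assumptions.

```python
# Generic adaptive-polling sketch for flow statistics (illustrative only).
def next_interval(prev_bytes, curr_bytes, interval,
                  min_s=1.0, max_s=30.0, threshold=0.05):
    change = abs(curr_bytes - prev_bytes) / max(prev_bytes, 1)
    if change < threshold:
        return min(interval * 2, max_s)   # quiet flow: poll less often
    return max(interval / 2, min_s)       # active flow: poll more often

interval = 5.0
samples = [(1000, 1010), (1010, 1015), (1015, 5000)]   # (previous, current) byte counters
for prev, curr in samples:
    interval = next_interval(prev, curr, interval)
    print(f"next poll in {interval:.0f}s")
```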

Keywords: optimization, monitoring, software defined networking, statistics, query

Procedia PDF Downloads 307
15006 Process Driven Architecture For The ‘Lessons Learnt’ Knowledge Sharing Framework: The Case Of A ‘Lessons Learnt’ Framework For KOC

Authors: Rima Al-Awadhi, Abdul Jaleel Tharayil

Abstract:

On a regular basis, KOC engages in various types of projects. However, due to the very nature and complexity involved, each project experience generates a lot of 'learnings' that need to be factored in while drafting a new contract, so as to avoid repeating the same mistakes. But many a time these learnings are localized and remain tacit, leading to scope re-work, longer cycle times, schedule overruns, adjustment orders and claims. Also, these experiences are not readily available to new employees, leading to a steep learning curve and a longer time to competency. This paper shares our experience in designing and implementing a process-driven architecture for the 'lessons learnt' knowledge sharing framework in KOC. It highlights the 'lessons learnt' sharing process adopted, the integration with organizational processes, the governance framework, the challenges faced and the learning from our experience in implementing a 'lessons learnt' framework.

Keywords: lessons learnt, knowledge transfer, knowledge sharing, successful practices, Lessons Learnt Workshop, governance framework

Procedia PDF Downloads 558
15005 Single Tuned Shunt Passive Filter Based Current Harmonic Elimination of Three Phase AC-DC Converters

Authors: Mansoor Soomro

Abstract:

The evolution of power electronic equipment has been pivotal in making industrial processes productive, efficient and safe. Despite its attractive features, such equipment constitutes a nonlinear load, which makes the system vulnerable to power quality problems. Harmonics are one such power quality problem, in which the harmonic frequency is an integral multiple of the supply frequency; as a result, the supply voltage and supply frequency do not remain within their tolerable limits, and distorted current and voltage waveforms may appear. Attributes of low power quality indicate that an electrical device or piece of equipment is likely to malfunction, fail prematurely, or be unable to operate under all applied conditions. The electrical power system is designed to deliver power reliably, namely by maximizing power availability to customers. However, power quality events are largely untracked and, as a result, can take out a process as many as 20 to 30 times a year, costing utilities, customers and suppliers of load equipment millions of dollars. The ill effects of current harmonics reduce system efficiency, cause overheating of connected equipment, and increase electrical power and air conditioning costs. The passage of time and the rapid growth of power electronic converters have highlighted the damage caused by current harmonics in the electrical power system. Therefore, it has become essential to address the bad influence of current harmonics while planning any suitable changes in electrical installations. In this paper, an effort has been made to mitigate the effects of the dominant 3rd-order current harmonics. A passive filtering technique with a six-pulse multiplication converter has been employed to mitigate them. Since the standards of power quality require the supply voltage and supply current to be maintained within certain prescribed limits, the obtained results are validated as per the specifications of the IEEE 519-1992 and IEEE 519-2014 performance standards.
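
For orientation, a back-of-the-envelope sizing of a single-tuned shunt filter branch is sketched below: the capacitor is chosen from the reactive power to be supplied at the fundamental, and the reactor tunes the branch to the target harmonic. The voltage, reactive power, and fundamental frequency are assumed values, not taken from the paper, and detuning and quality factor are ignored.

```python
# Back-of-the-envelope single-tuned shunt filter sizing (illustrative assumptions).
import math

f1 = 50.0          # fundamental frequency (Hz), assumed
V = 400.0          # line-to-line voltage (V), assumed
Q_c = 50e3         # reactive power supplied by the filter (var), assumed
h = 3              # harmonic order to be filtered (3rd, per the study)

X_c = V**2 / Q_c                     # capacitive reactance at the fundamental
C = 1 / (2 * math.pi * f1 * X_c)     # farads
X_l = X_c / h**2                     # reactor reactance for tuning at h*f1
L = X_l / (2 * math.pi * f1)         # henries

f_res = 1 / (2 * math.pi * math.sqrt(L * C))   # should land at h * f1
print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, tuned at {f_res:.0f} Hz")
```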

Keywords: current harmonics, power quality, passive filters, power electronic converters

Procedia PDF Downloads 283
15004 Transient Enhanced LDO Voltage Regulator with Improved Feed Forward Path Compensation

Authors: A. Suresh, Sreehari Rao Patri, K. S. R. Krishnaprasad

Abstract:

An ultra-low-power, capacitor-less low-dropout voltage regulator with improved transient response using gain-enhanced feed-forward path compensation is presented in this paper. It is based on a cascade of a voltage amplifier and a transconductor stage in the feed-forward path, together with a regular error amplifier, to form a composite gain-enhanced feed-forward stage. This broadens the gain bandwidth and thus improves the transient response without a substantial increase in power consumption. The proposed LDO, designed for a maximum output current of 100 mA in UMC 180 nm, requires a quiescent current of 69 µA. It exhibits an undershoot of 153.79 mV for a load current change from 0 mA to 100 mA and an overshoot of 196.24 mV for a current change from 100 mA to 0 mA. The settling time is approximately 1.1 µs for the output voltage undershoot case. The load regulation is 2.77 µV/mA at a load current of 100 mA. The reference voltage is generated by an accurate 0.8 V bandgap reference circuit. The costly aspects of an SoC, such as total chip area and power consumption, are drastically reduced by the use of a total compensation capacitance of only 6 pF and a power consumption of 0.096 mW.

Keywords: capacitor-less LDO, frequency compensation, transient response, latch, self-biased differential amplifier

Procedia PDF Downloads 434
15003 Adopting Cloud-Based Techniques to Reduce Energy Consumption: Toward a Greener Cloud

Authors: Sandesh Achar

Abstract:

The cloud computing industry has set new goals for better service delivery and deployment, so anyone can access services such as computation, application, and storage anytime. Cloud computing promises new possibilities for approaching sustainable solutions to deploy and advance services in this distributed environment. This work explores energy-efficient approaches and how cloud-based architecture can reduce energy consumption levels amongst enterprises leveraging cloud computing services. Adopting cloud-based networking, database, and server machines provides a comprehensive means of achieving the potential gains in energy efficiency that cloud computing offers. In energy-efficient cloud computing, virtualization is one aspect that can integrate several technologies to achieve consolidation and better resource utilization. Moreover, the Green Cloud Architecture for cloud data centers is discussed in terms of cost, performance, and energy consumption, and appropriate solutions for various application areas are provided.

Keywords: greener cloud, cloud computing, energy efficiency, energy consumption, metadata tags, green cloud advisor

Procedia PDF Downloads 61