Search results for: central processing unit
6310 Full-Wave Analysis of Magnetic Meta-Surfaces for Microwave Component Applications
Authors: Christopher Hardly Joseph, Nicola Pelagalli, Davide Mencarelli, Luca Pierantoni
Abstract:
In this contribution, we report the electromagnetic response of a split ring resonator (SRR) based magnetic metamaterial unit cell in free space by means of full-wave electromagnetic simulation. The effective parameters of the designed structures have been analyzed. The structures have been specifically designed to work at high frequency, with a view to the development of microwave and lower mm-wave devices. In addition, an application of the designed metamaterial structures is proposed, namely metamaterial-loaded planar transmission lines, potentially useful for optimizing the size and quality factor of circuit components and radiating elements.
Keywords: CPW, Microwave Components, Negative Permeability, Split Ring Resonator (SRR)
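For illustration, a minimal sketch of the standard S-parameter retrieval procedure commonly used for this kind of effective-parameter analysis; the abstract does not state which extraction method the authors used, so the formulas below (principal branch only) and all input values are assumptions, not their implementation.

```python
import numpy as np

def retrieve_effective_parameters(s11, s21, freq_hz, d):
    """Estimate effective index n, wave impedance z, permittivity and
    permeability of a metamaterial slab of thickness d (m) from complex
    S-parameters; valid only for electrically thin cells where the
    principal branch of arccos applies."""
    k0 = 2 * np.pi * freq_hz / 3e8                     # free-space wavenumber
    z = np.sqrt(((1 + s11) ** 2 - s21 ** 2) /
                ((1 - s11) ** 2 - s21 ** 2))           # normalized impedance
    n = np.arccos((1 - s11 ** 2 + s21 ** 2) / (2 * s21)) / (k0 * d)
    return n, z, n / z, n * z                          # (n, z, eps_eff, mu_eff)

# Hypothetical single-frequency sample near an SRR resonance:
n, z, eps, mu = retrieve_effective_parameters(0.6 + 0.3j, 0.2 - 0.5j, 30e9, 1e-3)
print(mu)   # a negative real part here would indicate negative permeability
```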
Procedia PDF Downloads 182
6309 Coupled Effect of Pulsed Current and Stress State on Fracture Behavior of Ultrathin Superalloy Sheet
Authors: Shuangxin Wu
Abstract:
Superalloy ultra-thin-walled components occupy a considerable proportion of aero engines and play an increasingly important role in structural weight reduction and performance improvement. To address problems such as high deformation resistance and poor formability at room temperature, a pulsed current can be introduced during processing to improve the plasticity of metal materials; however, the mechanism by which the pulsed current influences the forming limit of ultra-thin superalloy sheet is not clear, and clarifying it is of great significance for determining the material processing window and improving the micro-forming process. The effect of pulsed current on the microstructure evolution of superalloy thin sheet was observed by optical microscopy (OM) and X-ray diffraction topography (XRT), applying a pulsed current to 0.2 mm thick GH3039 under plane-strain and uniaxial tensile states. Compared with specimens tested at the same temperature without pulsed current, the internal void volume fraction is significantly reduced, reflecting a non-thermal effect of the pulsed current on the growth of micro-pores. Electrically deformed (ED) specimens have larger and deeper dimples, but the elongation is not significantly improved because the pulsed current promotes the void coalescence process, resulting in material fracture. The electro-plastic phenomenon is more pronounced in the plane-strain state, which is closely related to the effect of stress triaxiality on void evolution under pulsed current.
Keywords: pulse current, superalloy, ductile fracture, void damage
Procedia PDF Downloads 79
6308 A Study on ESD Protection Circuit Applying Silicon Controlled Rectifier-Based Stack Technology with High Holding Voltage
Authors: Hee-Guk Chae, Bo-Bae Song, Kyoung-Il Do, Jeong-Yun Seo, Yong-Seo Koo
Abstract:
In this study, an improved Electrostatic Discharge (ESD) protection circuit with low trigger voltage and high holding voltage is proposed. ESD has become a serious problem in semiconductor processes because device densities are now very high, and much research has therefore been done to prevent it. The proposed circuit is a stacked structure of a new unit cell combining the Zener-triggered SCR (ZTSCR) and the high holding voltage SCR (HHVSCR). Simulation results show that the proposed circuit has low trigger voltage and high holding voltage, and the stack technology can be applied to adjust the operating voltage. As a result, the holding voltage is 7.7 V for the 2-stack and 10.7 V for the 3-stack configuration.
Keywords: ESD, SCR, latch-up, power clamp, holding voltage
Procedia PDF Downloads 552
6307 Effect of Fermentation Time on Some Functional Properties of Moringa (Moringa oleifera) Seed Flour
Authors: Ocheme B. Ocheme, Omobolanle O. Oloyede, S. James, Eleojo V. Akpa
Abstract:
The effect of fermentation time on some functional properties of Moringa (Moringa oleifera) seed flour was examined. Fermentation, an effective processing method used to improve the nutritional quality of plant foods, tends to affect the characteristics of food components and their behaviour in food systems just like other processing methods; hence the need for this study. Moringa seeds were fermented naturally by soaking in potable water and allowing them to stand for 12, 24, 48 and 72 hours. At the end of fermentation, the seeds were oven-dried at 60 °C for 12 hours and then milled into flour. Flour obtained from unfermented seeds served as control, giving a total of five flour samples. The functional properties were analyzed using standard methods. Fermentation significantly (p<0.05) increased the water holding capacity of Moringa seed flour from 0.86 g/g to 2.31 g/g, with the highest value observed after 48 hours of fermentation. The same trend was observed for oil absorption capacity, with values between 0.87 and 1.91 g/g. Flour from unfermented Moringa seeds had a bulk density of 0.60 g/cm³, which was significantly (p<0.05) higher than the bulk densities of flours from seeds fermented for 12, 24 and 48 hours. Fermentation significantly (p<0.05) decreased the dispersibility of Moringa seed flours from 36% to 21, 24, 29 and 20% after 12, 24, 48 and 72 hours of fermentation respectively. The flours' emulsifying capacities increased significantly (p<0.05) with increasing fermentation time, with values between 50 and 68%. The flour obtained from seeds fermented for 12 hours had a significantly (p<0.05) higher foaming capacity of 16%, while the flours obtained from seeds fermented for 0, 24 and 72 hours had the lowest foaming capacities of 9%. Flours from seeds fermented for 12 and 48 hours had better functional properties than flours from seeds fermented for 24 and 72 hours.
Keywords: fermentation, flour, functional properties, Moringa
Procedia PDF Downloads 696
6306 How Does Ethics Impact Marketing Decision Making of a Company: Evidence from the Telecommunication Sector of Pakistan
Authors: Mohammad Daud Ali
Abstract:
For the past decade, marketing ethics has been a central point for academic researchers and practitioners. In particular, the development of frameworks and models to help in the analysis of marketing decisions has been the focus of research. The current study aims at finding whether ethical decisions (honesty, fairness, responsibility, and respect) affect organizational marketing decisions. A sample of 250 respondents was purposively drawn from the telecommunication industry of Pakistan, of which 204 usable responses were obtained, an acceptable response rate of 81.6%. A 12-item, five-point Likert scale adopted from Taylor-Dunlop & Lester (2000) was used to draw responses regarding ethics.
Keywords: marketing, ethics, decision making, telecommunication, Pakistan
Procedia PDF Downloads 103
6305 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity results from chemical reactivity through diagenetic processes. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are limited in their ability to adequately predict the petrophysical properties of different reservoirs having various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. It has been proven that these Miocene carbonates contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact can increase uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, classification can be improved by including pore types and pore structures, divided into macro- and microporosity. Such studies of microporosity identification/classification now represent a major concern in limestone reservoirs around the world.
Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
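As an illustration of the macro/micro partition described above, a minimal sketch of the point-counting arithmetic; the category names, raw counts, and total-porosity value are invented for the example and are not the paper's data.

```python
import numpy as np

def porosity_partition(total_porosity, point_counts):
    """Split total porosity into macro- and microporosity following the
    point-counting logic: macroporosity is estimated from thin-section
    point counts, microporosity is total minus macro."""
    n_points = sum(point_counts.values())
    macro = point_counts["macro_pores"] / n_points      # macroporosity fraction
    micro = total_porosity - macro                      # micro = total - macro
    return macro, micro

macro, micro = porosity_partition(
    total_porosity=0.25,   # e.g. from core plug measurement (assumed value)
    point_counts={"grains": 420, "matrix": 310, "cement": 150, "macro_pores": 120},
)
print(f"macro = {macro:.3f}, micro = {micro:.3f} "
      f"({100 * micro / 0.25:.0f}% of total porosity)")
```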
Procedia PDF Downloads 159
6304 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be regarded as a computing paradigm that saves a user cost and time; in practice, applications of cloud computing can be found in various fields like education, health, banking, etc. Cloud computing is an internet-dependent technology, so it is a major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling plays a vital role in the cloud computing environment: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work, and CloudSim 3.0.3 is utilized to simulate the task computation and the distributed scheduling methods. This research work discusses job scheduling for a distributed processing environment and, by exploring this issue, shows that the proposed setup works with minimum time and lower cost. In this work two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
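For intuition, a minimal sketch of the throttled load-balancing idea referenced above; this toy model (one job per VM, FIFO queue) is an assumption for illustration and is not CloudSim's API.

```python
class ThrottledLoadBalancer:
    """Toy version of the 'throttled' policy: each VM serves at most one
    job at a time; new jobs go to the first idle VM, otherwise they wait."""
    def __init__(self, n_vms):
        self.busy = [False] * n_vms
        self.queue = []

    def submit(self, job_id):
        for vm, in_use in enumerate(self.busy):
            if not in_use:
                self.busy[vm] = True
                return vm               # job dispatched to this VM
        self.queue.append(job_id)       # all VMs busy: throttle (queue) the job
        return None

    def complete(self, vm):
        self.busy[vm] = False
        if self.queue:                  # hand the freed VM to a waiting job
            self.queue.pop(0)
            self.busy[vm] = True

lb = ThrottledLoadBalancer(n_vms=2)
print([lb.submit(j) for j in range(4)])   # -> [0, 1, None, None]: jobs 2-3 queued
lb.complete(0)                            # VM 0 frees up and picks a queued job
```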
Procedia PDF Downloads 170
6303 Reactive Analysis of Different Protocols in Mobile Ad Hoc Network
Authors: Manoj Kumar
Abstract:
Routing protocols have a central role in any mobile ad hoc network (MANET). There are many routing protocols that exhibit different performance levels in different scenarios. In this paper, we compare the AODV, DSDV, DSR, and ZRP routing protocols in mobile ad hoc networks to determine the best operational conditions for each protocol. We analyze these routing protocols through extensive simulations in the OPNET simulator and show how pause time and the number of nodes affect their performance. In this study, performance is measured in terms of control traffic received, control traffic sent, data traffic received, data traffic sent, throughput, and retransmission attempts.
Keywords: AODV, DSDV, DSR, ZRP
Procedia PDF Downloads 522
6302 Implementing a Screening Tool to Assist with Palliative Care Consultation in Adult Non-ICU Patients
Authors: Cassey Younghans
Abstract:
Background: Current health care trends show an increasing number of patients being hospitalized with complex comorbidities. These complex needs require advanced therapies, and treatment goals often focus on doing everything possible to prolong life rather than on the individual patient's quality of life, which is the goal of palliative care efforts. Patients benefit from palliative care in the early stages of the illness rather than after the disease has progressed or the state of acuity has advanced. The clinical problem identified was that palliative care was not being implemented early enough in the disease process for patients who had complex medical conditions and who would benefit from the philosophy and skills of palliative care professionals. Purpose: The purpose of this quality improvement study was to increase the number of palliative care screenings and consults completed on adults after admission to one non-ICU, non-COVID hospital unit. Methods: A retrospective chart review assessing for possible missed opportunities to introduce palliation was performed for patients with six primary diagnoses (heart failure, liver failure, end-stage renal disease, chronic obstructive pulmonary disease, cerebrovascular accident, and cancer) in a population of adults over the age of 19 on one medical-surgical unit over a three-month period prior to the intervention. An educational session with the nurses on the benefits of palliative care was conducted by the researcher, and a screening tool was implemented. The expected outcome was an increase in early palliative care consultation for patients with complex comorbid conditions and a decrease in missed opportunities for the implementation of palliative care. Another retrospective chart review was completed after the three-month pilot of the tool. Results: During the initial retrospective chart review, 46 patients were admitted to the medical-surgical floor with the primary diagnoses identified in the inclusion criteria; six patients had palliative care consults completed during that time. Twenty-two palliative care screening tools were completed during the intervention period. Of those, 15 patients scored 7 or higher, suggesting that a palliative care consultation was warranted. The final retrospective chart review identified that 4 palliative care consults were completed among the 31 patients admitted over the three-month time frame. Conclusion: Educating nurses and implementing palliative care screening upon admission can be of great value in providing early identification of patients who might benefit from palliative care. Recommendations: It is recommended that this screening tool be used to help identify patients who would benefit from a palliative care consult, and that nurses be able to initiate a palliative care consultation themselves.
Keywords: palliative care, screening, early, palliative care consult
Procedia PDF Downloads 155
6301 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit
Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier
Abstract:
Hemp fibers are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties and biodegradability) compared to conventional fibers such as glass fibers. However, the large variation of their biochemical, physical and mechanical properties limits the use of these natural fibers in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibers. This retting treatment consists of spreading the stems on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectinolytic substances in the middle lamellae surrounding the fibers. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fiber quality (mechanical properties, color, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might favor good properties of hemp fibers and hence of hemp fiber reinforced composites. Therefore, the present study investigates the influence of controlled retting within a designed environmental chamber (lab-scale pilot unit) on the quality of hemp fibers harvested at the seed maturity growth stage. Various assessments were applied directly to the fibers: color observations, morphological (optical microscope) and surface (ESEM) analyses, biochemical (gravimetry) analysis, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface. An increase in the thermal stability of the fibers due to the removal of non-cellulosic components along retting is also observed. A separation of bast fibers into elementary fibers occurred, with an evolution of chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the biocomposite material (PP/hemp fibers) properties is under investigation.
Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability
Procedia PDF Downloads 158
6300 Analysis of a CO₂ Two-Phase Ejector Performance with Taguchi and ANOVA Optimization
Authors: Karima Megdouli
Abstract:
The ejector, a central element of the CO₂ transcritical ejection refrigeration system, holds significant importance in enhancing refrigeration capacity and minimizing compressor power usage. The objective of this study is to introduce a technique for enhancing the effectiveness of the CO₂ transcritical two-phase ejector, utilizing Taguchi and ANOVA analysis. The investigation delves into the impact of geometric parameters, secondary flow temperature, and primary flow pressure on the efficiency of the ejector. Results indicate that employing a combination of Taguchi and ANOVA offers increased reliability and superior performance when optimizing the design of the CO₂ two-phase ejector.
Keywords: ejector, supersonic, Taguchi, ANOVA, optimization
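For reference, a minimal sketch of the larger-the-better signal-to-noise ratio on which Taguchi factor ranking is usually based; the factors, levels, and entrainment-ratio data below are invented for illustration, and the actual design variables studied may differ.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi larger-the-better S/N ratio (dB) for replicated responses y,
    e.g. ejector entrainment ratios for one row of an orthogonal array."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical 2-factor design (mixing-section diameter, primary pressure),
# two replicates per run; the level with the highest mean S/N would be kept:
runs = {("d=3mm", "P=90bar"): [0.62, 0.60],
        ("d=3mm", "P=100bar"): [0.66, 0.67],
        ("d=4mm", "P=90bar"): [0.58, 0.57],
        ("d=4mm", "P=100bar"): [0.64, 0.63]}
for setting, y in runs.items():
    print(setting, f"S/N = {sn_larger_the_better(y):.2f} dB")
```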
Procedia PDF Downloads 92
6299 A Hebbian Neural Network Model of the Stroop Effect
Authors: Vadim Kulikov
Abstract:
The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea developed by the author that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where ‘framework’ attempts to generalize the concepts of modality, perspective and ‘point of view’. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to ‘coherence’ in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop
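To make the training rule concrete, a minimal sketch of Hebbian association between two such blocks; the layer sizes, patterns, learning rate, and practice imbalance are illustrative assumptions, and the full CTRNN dynamics of the model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, eta = 8, 0.05

def hebbian_step(W, pre, post, eta):
    """Hebb's rule: co-active pre/post units strengthen their connection."""
    return W + eta * np.outer(post, pre)

word_pattern = rng.integers(0, 2, n_units)    # 'text reading' block activity
color_pattern = rng.integers(0, 2, n_units)   # 'visual color' block activity
name_pattern = rng.integers(0, 2, n_units)    # 'speech production' block activity

W_word = np.zeros((n_units, n_units))         # weights start at zero
W_color = np.zeros((n_units, n_units))
for _ in range(100):                          # reading: heavily practiced
    W_word = hebbian_step(W_word, word_pattern, name_pattern, eta)
for _ in range(20):                           # color naming: less practiced
    W_color = hebbian_step(W_color, color_pattern, name_pattern, eta)

# Stronger learned weights drive the speech block harder (settle faster),
# which is the source of the simulated reading/naming asymmetry:
print((W_word @ word_pattern).max(), (W_color @ color_pattern).max())
```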
Procedia PDF Downloads 271
6298 Time Compression in Engineer-to-Order Industry: A Case Study of a Norwegian Shipbuilding Industry
Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie
Abstract:
This paper aims to explore the possibility of time compression in engineer-to-order production networks. A case study research method is used in a Norwegian shipbuilding project, implementing the value stream mapping lean tool with total cycle time as the unit of analysis. The analysis demonstrated the time deviations of the planned tasks in one of the processes in the shipbuilding project. The authors then developed a future-state map by removing time wastes from the value stream process.
Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding
Procedia PDF Downloads 168
6297 Reliability Analysis: A Case Study in Designing Power Distribution System of Tehran Oil Refinery
Authors: A. B. Arani, R. Shojaee
Abstract:
The electrical power distribution system is one of the vital infrastructures of an oil refinery and requires extensive study and planning before construction. In this paper, the power distribution reliability of Tehran Refinery's KHDS/GHDS unit is examined to show the importance of such studies and to evaluate the designed system. The authors chose and evaluated different configurations of electrical power distribution, along with the existing configuration, with the aim of finding the configuration that best satisfies the conditions of minimum cost of electrical system construction, minimum cost imposed by loss of load, and maximum power system reliability.
Keywords: power distribution system, oil refinery, reliability, investment cost, interruption cost
Procedia PDF Downloads 878
6296 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process
Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum
Abstract:
Due to the increasing demand for quality assurance and reliability in additive manufacturing, advanced in-situ monitoring systems are required to detect process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS cameras and NIR cameras, have proved to be effective ways to monitor geometrical distortion and exceptional thermal distribution, and many studies and applications therefore focus on the suitability of optical-based monitoring systems for detecting various types of defects. However, the capability of such monitoring setups is usually not quantified. In this study, a quantification model to evaluate the capability of monitoring setups for the LPBF machine, based on monitoring data acquired from a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, integration location, and light conditions. The data processing methodology used to quantify the capability for each aspect is discussed. The minimum detectable feature size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, making it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process, and it provides direction for improving the setups.
Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantization model, test artifact
Procedia PDF Downloads 200
6295 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting and transmitting the semantic information of data at the source and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems: the traditional semantic communication system based on Convolutional Neural Networks cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution, in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images; the Vision-Transformer structure can better train on the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism itself to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNNs and with image coding methods such as BPG and JPEG, to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
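As a concrete rendering of the pre-processing step just described, a minimal sketch using a 1-level 2-D DWT (requires PyWavelets and SciPy); the wavelet family ('haar') and the upsampling factor are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def wavelet_upsample(image, factor=2):
    """Decompose the image, upsample the low-frequency band bicubically and
    the high-frequency bands bilinearly, then reconstruct: the resolution
    enhancement scheme described in the abstract."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")      # 1-level 2-D DWT
    cA = zoom(cA, factor, order=3)                   # bicubic on low-freq band
    cH, cV, cD = (zoom(c, factor, order=1) for c in (cH, cV, cD))  # bilinear
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")    # inverse DWT

img = np.random.rand(64, 64)                         # stand-in remote sensing tile
print(wavelet_upsample(img).shape)                   # -> (128, 128)
```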
Procedia PDF Downloads 83
6294 Metallacyclodimeric Array Containing Both Suprachannels and Cages: Selective Reservoir and Recognition of Diiodomethane
Authors: Daseul Lee, Jeong Jun Lee, Ok-Sang Jung
Abstract:
Self-assembly of a series of ZnX2 (X- = Cl-, Br-, and I-) with 2,3-bis(4'-nicotinamidephenoxy)naphthalene (L) as a new bidentate pyridyl-donor ligand yields the systematic metallacyclodimeric unit [ZnX2L]2. The supramolecules stack characteristically, forming both 1D suprachannels and cages. Weak C-H⋯π and inter-digitated π⋯π interactions are the main driving forces in the formation of both the suprachannels and the cages. The slightly different features of the suprachannel and the cage have been investigated by 1H NMR and TG analysis, which show that solvent exchanges quantitatively within the suprachannels only. Photo-unstable CH2I2 molecules are stabilized via capture within the suprachannels, which is monitored by UV-Vis spectroscopy. Furthermore, the photoluminescence intensity from the chromophore naphthyl moiety of [ZnCl2L]2 gradually decreases with the addition of CH2I2, and washing out the CH2I2 with dichloromethane returns the PL intensity approximately to its original level.
Keywords: metallacyclodimer, suprachannel, π⋯π interaction, molecular recognition
Procedia PDF Downloads 324
6293 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which are called the "hot part." The second is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators), conventionally designated the "cold part." Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on condition that the input parameters are accurately specified, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine with its "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (nonlinear regime) prevent calculated results from being applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a set of computational and experimental studies. The presented methodology includes several parts. The first is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result in the form of the input impedance of the "cold part." The second is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part." The third part of the technique consists of mathematical processing of the results according to a proposed formula for the convergence of the series summing multiple reflections of the acoustic signal between the "cold part" and the "hot part." A set of tests is then conducted on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and the test results are processed according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain a result in the form of the spectrum of the amplitude of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
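For illustration, a minimal sketch of a classical two-sensor plane-wave decomposition of the kind alluded to for separating incident and reflected waves; the sensor positions, frequency, assumed speed of sound, and pressure values are invented, and damping and mean-flow effects are ignored.

```python
import numpy as np

def decompose_waves(P1, P2, f, x1, x2, c=500.0):
    """Split two complex pressure spectra measured at axial positions x1, x2
    (m) into incident (A) and reflected (B) wave amplitudes, assuming plane
    waves p(x) = A e^{-jkx} + B e^{+jkx}; c is a rough guess for the speed
    of sound in hot exhaust gas."""
    k = 2 * np.pi * f / c
    M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    A, B = np.linalg.solve(M, np.array([P1, P2]))
    return A, B

A, B = decompose_waves(P1=1.0 + 0.2j, P2=0.8 - 0.1j, f=200.0, x1=0.0, x2=0.05)
print(abs(B / A))   # reflection coefficient magnitude at the sensor pair
```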
Procedia PDF Downloads 63
6292 The Differences and Similarities in Neurocognitive Deficits in Mild Traumatic Brain Injury and Depression
Authors: Boris Ershov
Abstract:
Depression is the most common mood disorder experienced by patients who have sustained a traumatic brain injury (TBI) and is associated with poorer cognitive functional outcomes. However, in some cases, similar cognitive impairments can also be observed in depression alone. There is not enough information about the features of the cognitive deficit in patients with TBI relative to patients with depression. TBI patients without depressive symptoms (TBInD, n = 25), TBI patients with depressive symptoms (TBID, n = 31), and 28 patients with bipolar II disorder (BP) were included in the study. There were no significant differences between participants with respect to age, handedness and educational level. The patients' clinical status was determined using the Montgomery-Asberg Depression Rating Scale (MADRS). All participants completed a cognitive battery (the Brief Assessment of Cognition in Affective Disorders (BAC-A)). Additionally, the Rey-Osterrieth Complex Figure (ROCF) was used to assess visuospatial construction abilities and visual memory, as well as planning and organizational skills. Compared to BP, TBInD and TBID showed significant impairments in visuomotor abilities and verbal and visual memory. There were no significant differences between the BP and TBID groups in working memory, speed of information processing, or problem solving. The interference effect (cognitive inhibition) was significantly greater in TBInD and TBID compared to BP. The memory bias towards mood-related information in BP and TBID was greater in comparison with TBInD. These results suggest that depressive symptoms are associated with impairment of some executive functions combined with a decrease in the speed of information processing.
Keywords: bipolar II disorder, depression, neurocognitive deficits, traumatic brain injury
Procedia PDF Downloads 350
6291 A Review on Cloud Computing and Internet of Things
Authors: Sahar S. Tabrizi, Dogan Ibrahim
Abstract:
Cloud Computing is a convenient model for on-demand network access to shared pools of configurable virtual computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment in which companies and organizations can use infrastructure resources without making any purchases, and they can access such resources wherever and whenever they need them. Cloud computing is useful for overcoming a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), Scientific Research, e-Governance Systems, Decision Support Systems, ERP, Web Application Development, Mobile Technology, etc. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth and at any time. Such services are rented by client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by the cloud service companies are flexible in the sense that user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, the Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.
Keywords: cloud computing, cloud systems, cloud services, IaaS, PaaS, SaaS
Procedia PDF Downloads 234
6290 An Image Processing Scheme for Skin Fungal Disease Identification
Authors: A. A. M. A. S. S. Perera, L. A. Ranasinghe, T. K. H. Nimeshika, D. M. Dhanushka Dissanayake, Namalie Walgampaya
Abstract:
Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by a fungus. These diseases have various dangerous effects on the skin and keep spreading over time, so it is important to identify them at their initial stage to keep them from spreading. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired, and a comprehensive computer vision and image processing scheme is used to process the image for disease identification. This includes colour analysis using the RGB and HSV colour models; texture classification using the Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix and Local Binary Pattern; object detection; shape identification; and more. This paper presents the approach and its outcome for the identification of four of the most common skin fungal infections, namely Tinea Corporis, Sporotrichosis, Malassezia and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increases diagnostic quality, shortens the time-to-diagnosis and improves the efficiency of detection and successful treatment of skin fungal diseases.
Keywords: Circularity Index, Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix, Local Binary Pattern, Object detection, Ring Detection, Shape Identification
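For illustration, a minimal sketch of GLCM and LBP feature extraction of the kind listed above, using scikit-image; the distances, angles, LBP neighbourhood and the random stand-in patch are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray):
    """Compute GLCM summary statistics and an LBP histogram from an
    8-bit grayscale lesion patch."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")}
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats["lbp_hist"] = hist       # rotation-robust micro-texture summary
    return feats

lesion = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in patch
print(texture_features(lesion)["contrast"])
```

Features like these would then feed a classifier that separates the four infection types; the classifier stage is not shown here.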
Procedia PDF Downloads 235
6289 Predictors of Glycaemic Variability and Its Association with Mortality in Critically Ill Patients with or without Diabetes
Authors: Haoming Ma, Guo Yu, Peiru Zhou
Abstract:
Background: Previous studies show that dysglycemia, mostly hyperglycemia, hypoglycemia and glycemic variability (GV), is associated with excess mortality in critically ill patients, especially those without diabetes. Glycemic variability is an increasingly important measure of glucose control in the intensive care unit (ICU) due to this association. However, there are limited data on the relationship between different clinical factors, glycemic variability, and clinical outcomes categorized by diabetes mellitus (DM) status. This retrospective study of 958 ICU patients was conducted to investigate the relationship between GV and outcome in critically ill patients and, further, to determine the significant factors that contribute to glycemic variability. Aim: We hypothesized that the factors contributing to mortality and to glycemic variability differ between critically ill patients with and without diabetes. The primary aim of this study was to determine which form of dysglycemia (hyperglycemia, hypoglycemia, or glycemic variability) is independently associated with an increase in mortality among critically ill patients in the two groups (DM/Non-DM). Secondary objectives were to investigate the factors affecting glycemic variability in the two groups. Method: A total of 958 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. Glycemic variability was defined as the coefficient of variation (CV) of blood glucose. The main outcome was death during hospitalization; the secondary outcome was GV. A logistic regression model was used to identify factors associated with mortality, and the relationships between GV and other variables were investigated using linear regression analysis. Results: Information on age, APACHE II score, GV, gender, in-ICU treatment and nutrition was available for 958 subjects. Predictors remaining in the final logistic regression model for mortality were significantly different between the DM and Non-DM groups. Glycemic variability was associated with an increase in mortality in both the DM (odds ratio 1.05; 95% CI: 1.03-1.08, p<0.001) and Non-DM groups (odds ratio 1.07; 95% CI: 1.03-1.11, p=0.002). For critically ill patients without diabetes, factors associated with glycemic variability included APACHE II score (regression coefficient, 95% CI: 0.29, 0.22-0.36, p<0.001), mean BG (0.73, 0.46-1.01, p<0.001), total parenteral nutrition (2.87, 1.57-4.17, p<0.001), serum albumin (-0.18, -0.271 to -0.082, p<0.001), insulin treatment (2.18, 0.81-3.55, p=0.002) and duration of ventilation (0.006, 0.002-1.010, p=0.003). For diabetic patients, APACHE II score (0.203, 0.096-0.310, p<0.001), mean BG (0.503, 0.138-0.869, p=0.007) and duration of diabetes (0.167, 0.033-0.301, p=0.015) remained independent risk factors for GV. Conclusion: We found that the relation between dysglycemia and mortality differs between the diabetes and non-diabetes groups, and we confirm that GV was associated with excess mortality in both DM and Non-DM patients. Furthermore, APACHE II score, mean BG, total parenteral nutrition, serum albumin, insulin treatment and duration of ventilation were significantly associated with an increase in GV in Non-DM patients, while APACHE II score, mean BG and duration of diabetes (years) remained independent risk factors for increased GV in DM patients. These findings provide important context for further prospective trials investigating the effect of different clinical factors in critically ill patients with or without diabetes.
Keywords: diabetes, glycemic variability, predictors, severe disease
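For reference, a minimal sketch of the GV metric used in the study, the coefficient of variation of blood glucose; the example glucose series is invented.

```python
import numpy as np

def glycemic_variability(glucose_mmol):
    """Coefficient of variation (CV, %) of a patient's blood-glucose series:
    CV = 100 * SD / mean, using the sample standard deviation."""
    g = np.asarray(glucose_mmol, dtype=float)
    return 100.0 * g.std(ddof=1) / g.mean()

# Hypothetical glucose measurements (mmol/L) over an ICU stay:
print(f"CV = {glycemic_variability([6.1, 9.8, 4.9, 11.2, 7.4, 8.8]):.1f}%")
```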
Procedia PDF Downloads 192
6288 Choice of Sleeper and Rail Fastening Using Linear Programming Technique
Authors: Luciano Oliveira, Elsa Vásquez-Alvarez
Abstract:
The increase in rail freight transport in Brazil in recent years requires new railway lines and the maintenance of existing ones, which generates high costs for concessionaires. It is in this context that this work is situated; its objective is to propose a method that uses binary linear programming for the choice of sleeper and rail fastening from various options, including the way these materials are applied, with a focus on minimizing costs. Unit cost information, the life cycle of each material type, and service expenses are considered. The model was implemented in commercial software and validated using real data. The formulated model can be replicated to support decision-making on the lowest-cost choice of sleepers and rail fastenings for other railway projects.
Keywords: linear programming, rail fastening, rail sleeper, railway
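To make the formulation concrete, a minimal sketch of a binary LP of this kind using the PuLP library; the option names, cost figures, and the technical constraint are invented for the example and do not reflect the study's data or commercial solver.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Candidate (sleeper, fastening) options with illustrative life-cycle costs
# (unit price + maintenance), normalized per km-year:
options = {"concrete+elastic": 48.0, "concrete+rigid": 41.0,
           "wood+elastic": 39.0, "steel+elastic": 52.0}

prob = LpProblem("sleeper_fastening_choice", LpMinimize)
x = {o: LpVariable(f"use_{o}", cat=LpBinary) for o in options}

prob += lpSum(options[o] * x[o] for o in options)     # minimize life-cycle cost
prob += lpSum(x.values()) == 1                        # pick exactly one option
# Illustrative technical constraint: rigid fastening ruled out on this segment
prob += x["concrete+rigid"] == 0

prob.solve()
chosen = [o for o in options if value(x[o]) > 0.5]
print(chosen, value(prob.objective))                  # -> ['wood+elastic'] 39.0
```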
Procedia PDF Downloads 205
6287 Challenging Shariah-Compliant Contract: A Latest Insight into the Malaysian Court Cases
Authors: Noor Suhaida Kasri
Abstract:
In the last three decades, Malaysia has developed fundamental legal and regulatory structures that aim to accommodate and facilitate the growth of the Islamic banking and finance industry. Important building blocks have been put in place, to cite a few: the elevation of the Malaysian Central Bank Shariah Advisory Council (SAC) to the position of apex advisory body and the empowerment of its Shariah resolutions through the Central Bank Act 1958; and the promulgation of the Islamic Financial Services Act 2013, which regulates and governs the Islamic finance market with a robust statutory requirement of Shariah governance and Shariah compliance. Notwithstanding these achievements, the enforceability of Shariah-compliant contracts remains a contentious subject. The validity of the Al Bai Bithaman Ajil concept, which was commonly used by Islamic financial institutions in their financing facility structures and documentation, has been unabatedly challenged by customers in the courts. The challenge was due to the manner in which the Al Bai Bithaman Ajil transactions were carried out. Owing to this legal challenge, the Al Bai Bithaman Ajil financing structure seems no longer to be the practitioners' favourite in Malaysia, though its substitutes, the tawarruq and commodity murabahah financing structures, may potentially face similar legal challenges. This paper examines the legal challenges affecting the enforceability of these underlying Shariah contracts. The examination of these cases highlights the manner in which these contracts were implemented and applied by the Malaysian Islamic financial institutions that triggered Shariah and legal concern. The analysis also highlights the approach adopted by the Malaysian courts in determining the Shariah issues, as well as that of the SAC in ascertaining the rulings on the Shariah issues referred to it by the courts. The paper adopts a qualitative research methodology, using a textual and documentary analysis approach. The outcome of this study underlines factors that require consideration by industry stakeholders in order to improve the efficacy of the existing building blocks, which would eventually strengthen the validity and enforceability of Shariah-compliant contracts. This, in the long run, will further reinforce financial stability and trust in the Islamic banking and finance industry in Malaysia.
Keywords: enforceability of Shariah compliant contract, legal challenge, legal and regulatory framework, Shariah Advisory Council
Procedia PDF Downloads 236
6286 Close-Reading Works of Art and the Ideal of Naïveté: Elements of an Anti-Cartesian Approach to Humanistic Liberal Education
Authors: Peter Hajnal
Abstract:
The need to combine serious training in disciplinary and scholarly approaches to problems of general significance with an educational experience that engages students with these very same problems on a personal level is one of the key challenges facing modern liberal education in the West. The typical approach to synthesizing these two goals, one highly abstract, the other elusively practical, proceeds by invoking ideals traditionally associated with the Enlightenment and 19th-century “humanism”. These ideals are in turn rooted in an approach to reality codified by Cartesianism and the rise of modern science. Articulating the connection of the modern humanist tradition with Cartesianism allows one to demonstrate how the central problem of modern liberal education is rooted in the strict separation of knowledge and personal experience inherent in the dualism of Descartes. The question of the shape of contemporary liberal education is, therefore, the same as asking whether an anti-Cartesian version of liberal education is possible at all. Although the formulation of a general answer to this question is a tall order (whether in abstract or practical terms), and might take different forms (nota bene in Eastern and Western contexts), a key inspiration may be provided by a certain shift of attitude towards the Cartesian conception of the relationship of knowledge and experience, a shift required by discussion-based close-reading of works of visual art. Taking the work of Stanley Cavell as its central inspiration, my paper argues that the shift of attitude in question is best described as a form of “second naïveté”, and that it provides a useful model for conceptualizing in more concrete terms the appeal for such a “second naïveté” expressed in recent writings on the role of various disciplines in organizing learning by philosophers of such diverse backgrounds and interests as Hilary Putnam and Bruno Latour. The adoption of naïveté so identified as an educational ideal may be seen as a key instrument in thinking of the educational context as itself a medium of synthesis of the contemplative and the practical. Moreover, it is helpful in overcoming the bad dilemma of ideological vs. conservative approaches to liberal education, as well as in correcting a certain commonly held false view of the historical roots of liberal education in the Renaissance, which turns out to offer much more of a sui generis approach to practice than a mere precursor to the Cartesian conception.
Keywords: liberal arts, philosophy, education, Descartes, naivete
Procedia PDF Downloads 193
6285 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red Blood Cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. RBC aggregation therefore occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems in order to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
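As an illustration of the image-processing step, a minimal sketch of how aggregate sizes might be extracted from video frames by thresholding and connected-component labeling; the threshold value, the random stand-in frames, and the choice of scikit-image are assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage.measure import label, regionprops

def aggregate_sizes(frame, threshold):
    """Segment a video-microscopy frame into RBC aggregates and return
    their areas (px^2); cells are assumed darker than the background."""
    binary = frame < threshold
    labeled = label(binary)                  # connected-component labeling
    return [r.area for r in regionprops(labeled)]

# The mean aggregate size tracked across frames gives the growth curve
# from which an aggregation rate can be fitted:
frames = [np.random.rand(128, 128) for _ in range(3)]   # stand-in frames
mean_size = [np.mean(aggregate_sizes(f, threshold=0.2)) for f in frames]
print(mean_size)
```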
Procedia PDF Downloads 360
6284 Automated Java Testing: JUnit versus AspectJ
Authors: Manish Jain, Dinesh Gopalani
Abstract:
Growing dependency of mankind on software technology increases the need for thorough testing of software applications and for automated testing techniques that support testing activities. We outline our testing strategy for performing various types of automated testing of Java applications using AspectJ, which has become the de facto standard for Aspect Oriented Programming (AOP). Likewise, JUnit, a unit testing framework, is the most popular Java testing tool. In this paper, we evaluate our proposed AOP approach for automated testing against JUnit on various parameters. First we describe the similarities between the two approaches, and then we present a detailed comparison of the two testing techniques on factors like lines of testing code, learning curve, testing of private members, etc. We establish that our AOP testing approach using AspectJ has several advantages and is thus particularly more effective than JUnit.
Keywords: aspect oriented programming, AspectJ, aspects, JUnit, software testing
Procedia PDF Downloads 335
6283 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service
Authors: Lai Wenfang
Abstract:
This study describes how artificial intelligence (AI) technology can be used to build a user-oriented platform for integrated archival services. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence, and blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the sending agencies' staff which records catalogues violate the transfer or destruction rules, but also use the model to find details hidden in the catalogues and suggest to NAA staff whether the records should be kept or not, shortening the auditing time. The platform keeps all users' browsing trails, so it can predict which kinds of archives a user might find interesting, recommend search terms through visualization, and, moreover, inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a lot of time marking or removing personal data, classified data, etc. before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such data automatically; staff only need to correct any errors and upload the corrected version, and as the platform learns, its accuracy will increase. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
Keywords: artificial intelligence, natural language processing, machine learning, visualization
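For intuition, a minimal sketch of the kind of rule-based redaction such a platform could bootstrap from before the learned model takes over; the patterns below (a Taiwan-style national ID and a mobile number) are illustrative assumptions, not the NAA's actual rules.

```python
import re

# Illustrative personal-data patterns; a production system would combine
# many more rules with NLP/ML-based entity recognition.
PATTERNS = {
    "national_id": re.compile(r"\b[A-Z][12]\d{8}\b"),
    "phone": re.compile(r"\b09\d{2}-?\d{6}\b"),
}

def redact(text):
    """Black out every match of each personal-data pattern."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name}]", text)
    return text

print(redact("Contact A123456789 at 0912-345678 for the transfer list."))
```

Staff corrections to the automated output would then be fed back as labeled examples, which is the learning loop the abstract describes.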
Procedia PDF Downloads 181
6282 Visco-Plastic Transition and Transfer of Plastic Material with SGF in case of Linear Dry Friction Contact on Steel Surfaces
Authors: Lucian Capitanu, Virgil Florescu
Abstract:
Laboratory modeling of specific tribological processes often raises special problems. One such problem is reproducing the extremely high temperatures and contact pressures at which the injection or extrusion processing of thermoplastic materials takes place. Tribological problems occur mainly with thermoplastic materials reinforced with glass fibers, which produce severe wear of the barrels and screws of processing machines in a short time. Obtaining temperatures around 210 °C and higher, as well as pressures around 100 MPa, is very difficult in the laboratory. This paper reports a simple and convenient solution for obtaining these conditions, using sliding friction couples with linear contact: a cylindrical liner of glass-fiber-filled plastic on polished and super-finished steel plate samples. C120 steel, a steel for moulds, and Rp3 steel, a high-speed tool steel, were used. The pressure was obtained by continuously loading the liner in rotational movement up to its elasticity limit, at which the dry friction coefficient reaches or exceeds a value of 0.5. By dissipation of the power lost to friction on the flat steel sample, contact temperatures are reached at the metal surface that meet and exceed 230 °C, which lies within the temperature range of injection processing. Contact pressures ranging from 16.3 to 36.4 MPa were obtained under the load and material conditions used, depending on the plastic material and the glass fiber content.
Keywords: plastics with glass fibers, dry friction, linear contact, contact temperature, contact pressure, experimental simulation
Procedia PDF Downloads 303
6281 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties
Authors: J. Samuel, S. Al-Enezi, A. Al-Banna
Abstract:
High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersing agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix, with a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased; the increase in modulus values is attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus values than the corresponding HDPE/CNF compositions. This reduction in melt viscosity, a result of the wetting effect of the polymer-ionic liquid combination, is an additional benefit for polymer composite processing.
Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity
Procedia PDF Downloads 129