Search results for: floating point unit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7199

5519 Multilayer System of Thermosetting Polymers and Specific Confining, Application to the Walls of the Hospital Unit

Authors: M. Bouzid, A. Djadi, C. Aribi, A. Irekti, B. Bezzazi, F. Halouene

Abstract:

The nature of the materials structuring our health institutions promotes the development of germs. The persistence of nosocomial infections remains significant (12% to 15%). One of the major factors is Portland cement, which is brittle and porous. As part of a national plan to fight nosocomial infections, led by the University Hospital of Blida, we opted for a composite coating, applied as a multilayer system, composed of epoxy-polyester resin as a binder and calcium carbonate as mineral filler. The application of composite materials reinforces the wall coating of hospital units and eliminates the hospital's infectious areas. Resistance to impact, chemicals, elevated temperature, and a biologically active environment gave satisfactory results.

Keywords: nosocomial infection, microbial load, composite materials, portland cement

Procedia PDF Downloads 388
5518 Analysis of Reinforced Granular Pile in Soft Soil

Authors: G. Nitesh

Abstract:

Stone columns, or granular piles, are a proven technique to mitigate settlement in soft soil. A granular pile increases both the rate of consolidation and the stiffness of the ground. In this paper, a method is presented to analyze the further reduction in settlement of a granular pile reinforced with a lime pile, treating the system as a unit cell and considering a one-dimensional compression approach. The core of the granular pile is stiffened with a steel rod or lime column. The influence of a wide range of parameters, such as the area ratio of granular pile to soft soil, the area ratio of lime pile to granular pile, the modular ratio of the granular pile, and the modular ratio of the lime pile with respect to the granular pile, on the settlement reduction factor is obtained and presented.
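
As a rough companion to the unit cell idea, the sketch below combines the lime-stiffened pile and the surrounding soil through area-weighted moduli under one-dimensional compression. It is not the paper's formulation, and every modulus and area ratio in it is a hypothetical placeholder:

```python
# Minimal unit cell sketch (illustrative only, not the paper's model):
# equivalent stiffness of soft soil reinforced by a granular pile whose
# core is stiffened by a lime pile, via area-weighted moduli.

def equivalent_modulus(E_soil, E_gp, E_lime, a_gp, a_lime):
    """E_* are constrained moduli; a_gp = pile area / unit cell area,
    a_lime = lime core area / pile area (both hypothetical inputs)."""
    E_pile = a_lime * E_lime + (1.0 - a_lime) * E_gp   # stiffened pile
    return a_gp * E_pile + (1.0 - a_gp) * E_soil       # whole unit cell

def settlement_reduction_factor(E_soil, E_gp, E_lime, a_gp, a_lime):
    """Settlement of reinforced ground / settlement of untreated soil
    under the same one-dimensional compression load."""
    return E_soil / equivalent_modulus(E_soil, E_gp, E_lime, a_gp, a_lime)

# Example with assumed values: soft soil 2 MPa, granular pile 40 MPa,
# lime pile 200 MPa, 20% area replacement, 10% lime core.
print(settlement_reduction_factor(2.0, 40.0, 200.0, 0.20, 0.10))
```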

Keywords: lime pile, granular pile, soft soil, settlement

Procedia PDF Downloads 408
5517 Turbulent Boundary Layer over 3D Sinusoidal Roughness

Authors: Misarah Abdelaziz, L Djenidi, Mergen H. Ghayesh, Rey Chin

Abstract:

Measurements of a turbulent boundary layer over 3D sinusoidal roughness are performed for friction Reynolds numbers ranging over 650 < Reτ < 2700. The surface was machined from an acrylic sheet on a Multicam CNC router to have an amplitude of k/2 = 0.8 mm and an equal wavelength of 8k in both the streamwise and spanwise directions, using a 0.6 mm stepover and a 12 mm ball-nose cutter. Single hotwire anemometry measurements are taken at one location, x = 1.5 m downstream, at different freestream velocities under zero-pressure-gradient conditions. As expected, the roughness causes a downward shift of the wall-unit-normalised streamwise mean velocity profile when compared to the smooth-wall profile. The shift increases with increasing Reτ, with 1.8 < ΔU+ < 6.2. The coefficient of friction is almost constant across all cases, Cf = 0.0042 ± 0.0002. The results show a gradual reduction of the inner peak of the profiles with increasing Reτ until it is fully destroyed at Reτ ≈ 2700.
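
The downward shift quoted above is the standard roughness function ΔU⁺ appearing in the logarithmic law of the wall; written out for reference (κ ≈ 0.41 and B ≈ 5.0 are the usual smooth-wall constants, not values fitted in this study):

```latex
U^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} \;+\; B \;-\; \Delta U^{+},
\qquad
C_f \;=\; 2\left(\frac{u_\tau}{U_\infty}\right)^{2}
```

so the measured shift and the friction velocity together tie the velocity profile directly to the quoted skin-friction coefficient.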

Keywords: hotwire, roughness, TBL, ZPG

Procedia PDF Downloads 217
5516 Thermal Image Segmentation Method for Stratification of Freezing Temperatures

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses an image analysis technique employing thermal imaging to measure the percentage of areas at various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary rapidly as the surface reaches the freezing point and completes the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific area of freezing, nucleation starts on one side and ends on the other, causing a dynamic and transient temperature in that area. Thermal cameras can reveal differences in temperature owing to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. A segmentation method is used to find contours of equal temperature on the icing surface. Each segment is obtained from the temperature range appearing in the image and the corresponding pixel values. Using the contours extracted from the images and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface with the thermal camera, a salt water sample is dropped on a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Comparing the results obtained by the method with the experimental observations verifies the accuracy and applicability of the method.
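
As a rough illustration of the thresholding step described above (not the authors' code; the temperature bands and pixel-to-area factor are assumed), each frame can be segmented into temperature bands and the fractional area of each band reported:

```python
import numpy as np

def stratify_frame(temp_frame, band_edges, area_per_pixel=1.0):
    """Segment a 2-D array of per-pixel temperatures into bands and
    return the area covered by each band (units follow area_per_pixel)."""
    areas = {}
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (temp_frame >= lo) & (temp_frame < hi)
        areas[(lo, hi)] = mask.sum() * area_per_pixel
    return areas

# Hypothetical 240x320 thermal frame with temperatures in degC
frame = np.random.uniform(-20.0, 5.0, size=(240, 320))
edges = [-20, -15, -10, -5, 0, 5]            # assumed band edges, degC
for band, area in stratify_frame(frame, edges).items():
    print(band, 100.0 * area / frame.size, "% of surface")
```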

Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image

Procedia PDF Downloads 319
5515 Heterogeneous Artifacts Construction for Software Evolution Control

Authors: Mounir Zekkaoui, Abdelhadi Fennan

Abstract:

Software evolution control requires a deep understanding of changes and their impact on the different heterogeneous artifacts of the system, and an understanding of the descriptive knowledge of the developed software artifacts is a prerequisite for the success of the evolutionary process. Implementing an evolutionary process means making changes, more or less significant, to many heterogeneous software artifacts such as source code, analysis and design models, unit tests, XML deployment descriptors, user guides, and others. These changes can be a source of degradation in functional, qualitative, or behavioral terms of the modified software. Hence the need for a unified approach for the extraction and representation of the different heterogeneous artifacts, in order to ensure a unified and detailed description of them, exploitable by several software tools and allowing those responsible for the evolution to reason about the changes concerned.

Keywords: heterogeneous software artifacts, software evolution control, unified approach, meta model, software architecture

Procedia PDF Downloads 442
5514 The Capabilities of New Communication Devices in Development of Informing: Case Study Mobile Functions in Iran

Authors: Mohsen Shakerinejad

Abstract:

Owing to the growing momentum of technology, the present age is called the age of communication and information. With the astounding progress of communication and information tools, the current world is likened to a 'global village', in which a message can be sent from one point of the world to another in less than a minute. However, the sociologist Alain Touraine, describing the destructive effects of the changes arising from the development of information appliances, refers to 'new fields for undemocratic social control and the incidence of acute social and political tensions and unrest'. Yet in this era, in which industrial advancement has made people's lives industrial as well, fast and accurate data transfer breathes new life into the body of society, and the tools used should be chosen according to the features of each society and the progress of science and technology. One of these communication tools is the mobile phone. The cellular phone, as the communication and telecommunication revolution of recent years, has greatly influenced the individual and collective life of societies. This powerful communication tool has had an undeniable effect on all aspects of life, including the social, economic, cultural, and scientific, so that ignoring it in the design, implementation, and enforcement of any system is unwise. Nowadays, knowledge and information are among the most important aspects of human life. Therefore, this article introduces the potential of the mobile phone for receiving and transmitting news and information. Among the numerous capabilities of current mobile phones, features such as sending text, photography, sound recording, filming, and Internet connectivity indicate the potential of this communication medium in the process of sending and receiving information, so much so that today mobile journalism, as an important component of citizen journalism, has a unique role in information dissemination.

Keywords: mobile, informing, receiving information, mobile journalism, citizen journalism

Procedia PDF Downloads 409
5513 Case Study of Ground Improvement Solution for a Power Plant

Authors: Eleonora Di Mario

Abstract:

This paper describes the application of ground improvement to replace a typical piled foundation scheme in a power plant in Singapore. Several buildings within the plant were founded on vibro-compacted sand, including a turbine unit with extremely stringent requirements on the allowable settlement. The savings achieved in terms of cost and schedule are presented. The monitoring data collected during the operation of the turbine are compared to the design predictions to validate the design approach and the quality of the ground improvement works. In addition, the calculated carbon footprint of the ground improvement works is compared to that of the piled solution, showing that the vibro-compaction has a significantly lower carbon footprint.

Keywords: ground improvement, vibro-compaction, case study, sustainability, carbon footprint

Procedia PDF Downloads 107
5512 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can be reduced by improving the design of polluting loads or by adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical; additionally, their different possible frequency response characteristics can achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
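
For readers who want to see the sizing step concretely, a minimal sketch using the textbook single tuned filter relations follows: the capacitor is sized from the reactive power to be supplied, and the inductor is tuned to the target harmonic. This is a generic calculation with assumed bus data, not the constrained optimization developed in the paper:

```python
import math

def single_tuned_filter(V_ll, Q_c, h, f0=50.0, Q_factor=30.0):
    """Size a single tuned passive filter (textbook relations).
    V_ll: line-to-line voltage [V]; Q_c: reactive power [var];
    h: tuned harmonic order; f0: fundamental [Hz]; Q_factor: quality."""
    w0 = 2.0 * math.pi * f0
    Xc = V_ll**2 / Q_c           # capacitive reactance at fundamental
    C = 1.0 / (w0 * Xc)
    L = 1.0 / ((h * w0)**2 * C)  # resonate at the h-th harmonic
    Xn = math.sqrt(L / C)        # characteristic reactance
    R = Xn / Q_factor            # series resistance from quality factor
    return C, L, R

# Example: 11 kV bus, 1 Mvar filter tuned slightly below the 5th harmonic
C, L, R = single_tuned_filter(11e3, 1e6, 4.85)
print(f"C = {C*1e6:.2f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```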

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 305
5511 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model incorporating all three methods is the most effective for evaluating the influence of other nearby supermarkets on customers' purchasing from a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
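
Huff's gravity model itself is compact enough to spell out. In the sketch below (store attractiveness values, distances, and the decay exponent are invented for illustration), the probability that a customer patronizes store j falls out of a normalized, distance-discounted attractiveness:

```python
def huff_probabilities(attractiveness, distances, lam=2.0):
    """P(customer chooses store j) = (S_j / d_j**lam) / sum_k (S_k / d_k**lam).
    attractiveness: store size/appeal S_j; distances: home-to-store d_j;
    lam: distance-decay exponent (assumed value)."""
    utilities = [s / d**lam for s, d in zip(attractiveness, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# Hypothetical example: our chain store versus two nearby competitors
stores = ["our store", "competitor A", "competitor B"]
probs = huff_probabilities([1200, 800, 2000], [0.8, 1.5, 2.2])
for name, p in zip(stores, probs):
    print(f"{name}: {p:.2f}")
```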

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 121
5510 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants using the reverse osmosis (RO) method is preventing RO membrane fouling caused by foulants found in seawater. Optimal design of the RO pre-treatment process enables the reduction of foulants; therefore, a quantitative evaluation of the fouling risk of the pre-treated water fed to the RO stage is required for optimal design. Some measurements of water quality, such as the silt density index (SDI) and total organic carbon (TOC), have conventionally been applied for this evaluation. However, these methods have not been effective in some situations for evaluating the fouling risk of RO feed water. Furthermore, if the method can be applied to an inline monitoring system for the fouling risk of RO feed water, stable plant management becomes possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants found in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the weight of the sensor after sample water has passed over it for a certain length of time directly indicates the fouling risk of the sample. We classified these values as the 'fouling potential' (FP). The method is characterized by measuring the very small amounts of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). In laboratory-scale tests using RO cell filtration units, the FP from the method showed a higher correlation with the pressure increase caused by RO fouling than SDI or TOC did. Then, to establish the correlation for an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site by the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in a high-FP RO unit whose feed water was treated only by a cartridge filter. On the other hand, no pressure increase was observed in a low-FP RO unit whose feed water was treated by an ultrafilter during the operation. The correlation for an actual-scale RO membrane was therefore established in two runs with two types of feed water. The results suggest that the FP method enables the evaluation of the fouling risk of RO feed water.
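
Although the paper reports FP directly as a weight gain, the standard route from a QCM frequency shift to adsorbed mass is the Sauerbrey relation; the sketch below uses typical 5 MHz AT-cut crystal constants, not the constants of the study's polyamide-coated sensor:

```python
def sauerbrey_mass(delta_f_hz, f0_hz=5e6, area_cm2=1.0):
    """Sauerbrey relation for an AT-cut quartz crystal in its linear
    regime: delta_m = -C * delta_f * A, with C = sqrt(rho_q * mu_q)
    / (2 * f0**2). Returns mass in nanograms (negative shift -> gain)."""
    rho_q = 2.648        # quartz density [g/cm^3]
    mu_q = 2.947e11      # quartz shear modulus [g/(cm*s^2)]
    C = (rho_q * mu_q) ** 0.5 / (2.0 * f0_hz**2)   # [g/(cm^2*Hz)]
    return -C * delta_f_hz * area_cm2 * 1e9        # [ng]

# Example: a -10 Hz shift on a 5 MHz crystal with 1 cm^2 active area
print(f"{sauerbrey_mass(-10.0):.1f} ng adsorbed")
```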

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 211
5509 A Molding Surface Auto-inspection System

Authors: Ssu-Han Chen, Der-Baau Perng

Abstract:

The molding process in IC manufacturing protects chips against the harm done by heat, moisture, and other external forces. While a chip is being molded, defects such as cracks, dilapidation, or voids may be embedded in the molding surface. The molding surfaces this study sets out to treat differ from those on the market in that texture similar to the defects is present everywhere on the surface. Manual inspection usually passes over low-contrast cracks or voids; hence an automatic optical inspection system for the molding surface is necessary. The proposed system consists of a CCD camera, a coaxial light, a back light, and a motion control unit. Based on the statistical texture properties of the molding surface, a series of digital image processing and classification procedures is carried out. After training the parameters associated with the above algorithm, the experimental results suggest that the accuracy rate is up to 93.75%, contributing to the inspection quality of the IC molding surface.
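
As one plausible reading of how the discrete Fourier transform named in the keywords can separate statistical texture from low-contrast defects (a generic defect-enhancement recipe, not the authors' exact pipeline), the periodic texture concentrates in strong spectral peaks that can be masked before inverting the transform:

```python
import numpy as np

def suppress_texture(gray, keep_fraction=0.98):
    """Remove dominant periodic texture by zeroing the strongest DFT
    coefficients (DC excluded from peak removal), then reconstruct;
    the residual highlights low-contrast defects. keep_fraction is an
    assumed tuning knob."""
    F = np.fft.fft2(gray.astype(float))
    mag = np.abs(F)
    mag[0, 0] = 0.0                            # never treat DC as a peak
    thresh = np.quantile(mag, keep_fraction)   # strongest peaks = texture
    F[mag > thresh] = 0.0
    residual = np.abs(np.fft.ifft2(F))
    return residual > residual.mean() + 3.0 * residual.std()  # defect mask

# Hypothetical 256x256 molding-surface image with grey levels 0..255
img = np.random.randint(0, 256, size=(256, 256))
print(suppress_texture(img).sum(), "candidate defect pixels")
```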

Keywords: molding surface, machine vision, statistical texture, discrete Fourier transformation

Procedia PDF Downloads 430
5508 Imaging 255 nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics

Authors: A. Abbas, X. Tridon, J. Michelon

Abstract:

In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges of these industries, needed to avoid dysfunction of their final products. Traditional pump-probe experiments, developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback. On one hand, these setups can generate and detect ultra-high ultrasound frequencies, which can be used to evaluate the adhesion quality of wafer layers. On the other hand, because of the quite long acquisition time they need to perform one measurement, these setups remain confined to point measurements when evaluating global sample quality. This can lead to misinterpretation of sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale and to detect inhomogeneities in a sample's mechanical properties. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm thin-film tungsten layer deposited on a silicon substrate. The interpretation of the reflection coefficient in terms of adhesion quality is also presented, and the origin of zones exhibiting good and poor bonding is discussed.
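
The quantity mapped in such experiments is the acoustic reflection coefficient at the film-substrate interface, which for normal incidence depends only on the acoustic impedances of the two media (a textbook relation, quoted here to make the adhesion interpretation concrete; departures of the measured value from this ideal, well-bonded prediction point to imperfect bonding):

```latex
r \;=\; \frac{Z_{\mathrm{substrate}} - Z_{\mathrm{W}}}{Z_{\mathrm{substrate}} + Z_{\mathrm{W}}},
\qquad Z = \rho\, v
```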

Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film

Procedia PDF Downloads 158
5507 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)

Authors: Neda Zamani

Abstract:

The present study investigated the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Refinery as a case study. In terms of purpose, the study is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study comprised participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 course participants, 30 supervisors, and 11 individuals from the training experts' group. To collect data, a questionnaire designed by the researcher and a semi-structured interview were used. The content validity of the instruments was confirmed by training management experts, and reliability was established with a Cronbach's alpha of 0.92. Descriptive statistics (tables, frequencies, frequency percentages, and means) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences among the groups' opinions) were applied. The results indicated that all groups, i.e., participants, supervisors, and training experts, firmly believe in the importance of training courses; however, participants regard the content, teacher, atmosphere and facilities, training process, managing process, and product as being at a relatively appropriate level. The supervisors also regard the output as being at a relatively appropriate level, but the training experts regard the content, teacher, and managing process as being at an appropriate, above-average level.

Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company

Procedia PDF Downloads 74
5506 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management

Authors: Jules Selles

Abstract:

The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work investigates the policy making process associated with a regional fisheries management organization. We propose a contextualized, computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling stock management setting. Namely, we analyze the effects of introducing a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision making process. Each participant plays the role of an ICCAT stakeholder, representing a coalition of fishing nations involved in the fishery, and unilaterally decides a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and for collaboration to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our framework allows testing how stakeholders who plan the exploitation of a fish stock (a common pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point), and ii) increasing uncertainty in the scientific estimation of the resource level.
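
A minimal Gordon-Schaefer round conveys the structure of the experiment: each coalition unilaterally picks a harvest, the stock grows logistically, and profits depend on the remaining biomass. The sketch below is illustrative only; none of its parameters are those calibrated for bluefin tuna:

```python
def gordon_schaefer_step(B, harvests, r=0.3, K=1.0):
    """One yearly step of a Gordon-Schaefer stock: logistic growth of
    biomass B (as a fraction of carrying capacity K) minus the total
    harvest decided unilaterally by each player/coalition."""
    growth = r * B * (1.0 - B / K)
    return max(B + growth - sum(harvests), 0.0)

def profit(harvest, B, price=10.0, cost=2.0):
    """Schaefer economics sketch: revenue minus effort cost, with effort
    proportional to harvest/biomass (catchability folded into cost)."""
    return price * harvest - cost * harvest / max(B, 1e-9)

# Three coalitions harvesting a stock that starts at 50% of K
B = 0.5
for year in range(5):
    harvests = [0.03, 0.05, 0.04]     # assumed unilateral decisions
    print(year, round(B, 3), [round(profit(h, B), 2) for h in harvests])
    B = gordon_schaefer_step(B, harvests)
```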

Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic bluefin tuna

Procedia PDF Downloads 253
5505 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology

Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi

Abstract:

In 5G networks, supporting very high traffic density in densely populated areas is one of the key requirements. Radiation reduction therefore becomes a major concern for securing the future life of mobile network users in ultra-dense network areas, and an improved coordinated multipoint technology can address it. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them to actively manage interference for the users. Small cells serve two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to a nearby cell. Second, they enable CoMP, in which multiple base stations (MBS) cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard and rubber cover on mobile transceivers, a major source of electromagnetic radiation, was investigated for mobile network users amidst ultra-dense 5G networks. The results were compared with those for the same mobile transceivers without screen guards and rubber covers under the same network conditions. A 5 cm distance from the mobile transceivers was set with the help of a ruler, and the intensity of radio frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and covers was higher than from the mobile transceivers with screen guards and covers while a call was active at both ends.

Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point

Procedia PDF Downloads 103
5504 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models are still mainly for specific phenomena with sets of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] exhibits various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which establish the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.

Keywords: Blaschke functions, Lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 309
5503 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). These are mathematically verified and illustrated in this paper by arranging all integers into 3 columns, where each number exists as a difference in relation to another number as another difference, with the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified as 3n, '3n - 1', or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematic zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁·f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime we have to divide it by several or many an f₁ from a range of composite number formulas, a procedure that is as laborious as it is the surest way of verifying a large number's primality. (It is thus possible to substitute planned division for trial division.)
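
One possible rendering of the three-column arrangement, together with a conventional stand-in for the paper's 'planned division' (we substitute the standard 6k ± 1 trial-division shortcut, which exploits the same fact that every prime beyond 3 sits in the 3n - 1 or 3n + 1 column), is sketched below; it is our reading of the construction, not the authors' code:

```python
def column(n):
    """Column of n in the three-column arrangement: '3n-1', '3n', '3n+1'."""
    return {0: "3n", 1: "3n+1", 2: "3n-1"}[n % 3]

def is_prime(n):
    """Trial division exploiting the column structure: after 2 and 3,
    only candidate divisors of the form 6k +/- 1 need checking (a
    standard shortcut, used here in place of 'planned division')."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    f = 5
    while f * f <= n:
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True

for n in range(1, 16):
    print(n, column(n), is_prime(n))
```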

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 209
5502 A Photovoltaic Micro-Storage System for Residential Applications

Authors: Alia Al Nuaimi, Ayesha Al Aberi, Faiza Al Marzouqi, Shaikha Salem Ali Al Yahyaee, Ala Hussein

Abstract:

In this paper, a PV micro-storage system for residential applications is proposed. The term micro refers to the size of the PV storage system, which is in the range of a few kilowatts, compared to the grid size (~GWs). Usually, a typical load profile of a residential unit has two peak demand periods: one in the morning and the other in the evening. The morning peak can be partly covered by the PV energy directly, while the evening peak cannot be covered by the PV alone. Therefore, an energy storage system that stores solar energy during the daytime and uses this stored energy when the sun is absent is a must. A complete design procedure, including theoretical analysis followed by simulation verification and an economic feasibility evaluation, is presented in this paper.
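
A toy hour-by-hour energy balance illustrates the intended peak-shaving behavior; the load and PV profiles and the battery capacity below are invented for illustration, not the paper's design values:

```python
# Charge the battery from surplus PV at midday, discharge it to cover
# the evening peak. All profiles and sizes are assumed.

load = [0.8]*6 + [2.5]*3 + [1.0]*8 + [3.0]*4 + [1.0]*3      # kW, 24 h
pv   = [0.0]*6 + [1.0, 2.0, 3.0, 4.0, 4.5, 4.5, 4.0, 3.0,
        2.0, 1.0, 0.5] + [0.0]*7                            # kW, 24 h

soc, capacity, grid_import = 0.0, 14.0, []                  # kWh battery
for L, P in zip(load, pv):
    net = L - P                   # positive -> deficit, negative -> surplus
    if net < 0:                   # surplus: charge battery, spill the rest
        soc += min(-net, capacity - soc)
        grid_import.append(0.0)
    else:                         # deficit: discharge battery first
        discharge = min(net, soc)
        soc -= discharge
        grid_import.append(net - discharge)

print(f"peak grid import: {max(grid_import):.1f} kW")
print(f"energy imported:  {sum(grid_import):.1f} kWh/day")
```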

Keywords: battery, energy storage, photovoltaic, peak shaving, smart grid

Procedia PDF Downloads 319
5501 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid

Authors: Anindita Sen

Abstract:

Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed which eliminates the interference caused by salivary proteins and facilitated the demonstration of methamphetamine detection in saliva using a gold nanoparticle based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues, we successfully detected methamphetamine at nanomolar levels in saliva, offering immense promise for the translation of these platforms into on-site diagnostic systems. This subsequently motivated the development of a handheld, portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative detection of analyte concentrations on-site. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device was validated against a standard UV-Vis spectrometer to confirm its reliability. The portable and cost-effective handheld detector features sensitivity comparable to the well-established benchtop UV-Vis instrument, and the easy-to-use device could serve as a prototype for a commercial device in the future.
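
The transduction from light meter reading to absorbance is the Beer-Lambert step; a minimal sketch of the conversion against a blank reference follows, with all sensor counts hypothetical:

```python
import math

def absorbance(sample_counts, blank_counts, dark_counts=0.0):
    """A = log10(I0 / I), with both intensities corrected for the
    detector's dark reading (all count values here are hypothetical)."""
    I0 = blank_counts - dark_counts
    I = sample_counts - dark_counts
    return math.log10(I0 / I)

# Example: blank reads 5200 counts, methamphetamine sample reads 3100
print(f"A = {absorbance(3100, 5200, dark_counts=80):.3f}")
```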

Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid

Procedia PDF Downloads 55
5500 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively; the nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account the contributions of higher-order sites, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
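
The four-site interaction is easy to write down explicitly. The sketch below encodes one plausible reading of the Hamiltonian (an elementary square contributes -J only when all four corner spins agree) with a plain Metropolis update; the paper itself uses Wang-Landau sampling, and every parameter here is illustrative:

```python
import math
import random

def plaquette_energy(s, x, y, L, J=1.0):
    """Energy of the elementary square with lower-left corner (x, y):
    -J if all four corner spins are equal, 0 otherwise (our reading of
    the four-site ferromagnetic interaction; periodic boundaries)."""
    a = s[x][y]
    same = (a == s[(x + 1) % L][y] == s[x][(y + 1) % L]
              == s[(x + 1) % L][(y + 1) % L])
    return -J if same else 0.0

def metropolis_sweep(s, L, q, beta):
    """One Metropolis sweep; each flip changes only the 4 plaquettes
    touching the chosen site."""
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        plqs = [((x - 1) % L, (y - 1) % L), ((x - 1) % L, y),
                (x, (y - 1) % L), (x, y)]
        e_old = sum(plaquette_energy(s, px, py, L) for px, py in plqs)
        old, s[x][y] = s[x][y], random.randrange(1, q + 1)
        e_new = sum(plaquette_energy(s, px, py, L) for px, py in plqs)
        if e_new > e_old and random.random() >= math.exp(-beta * (e_new - e_old)):
            s[x][y] = old                      # reject the move

L, q = 16, 5
spins = [[random.randrange(1, q + 1) for _ in range(L)] for _ in range(L)]
metropolis_sweep(spins, L, q, beta=1.0)
```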

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 157
5499 Polyacrylate Modified Copper Nanoparticles with Controlled Size

Authors: Robert Prucek, Aleš Panáček, Jan Filip, Libor Kvítek, Radek Zbořil

Abstract:

The preparation of Cu nanoparticles (NPs) through the reduction of copper ions by sodium borohydride in the presence of sodium polyacrylate with a molecular weight of 1200 is reported. Cu NPs were synthesized at copper salt concentrations of 2.5, 5, and 10 mM, and at a molar ratio of copper ions to the monomeric unit of polyacrylate equal to 1:2. The as-prepared Cu NPs have diameters of about 2.5–3 nm for copper concentrations of 2.5 and 5 mM, and 6 nm for a copper concentration of 10 mM. Depending on the copper salt concentration and the concentration of polyacrylate additionally added to the Cu particle dispersion, the primarily formed NPs grow, through aggregation and/or coalescence, into clusters and/or particles with diameters between 20 and 100 nm. The amount of additionally added sodium polyacrylate influences the stability of the Cu particles against air oxidation. The catalytic efficiency of the prepared Cu particles for the reduction of 4-nitrophenol is discussed.

Keywords: copper, nanoparticles, sodium polyacrylate, catalyst, 4-nitrophenol

Procedia PDF Downloads 276
5498 Analyzing the Empirical Link between Islamic Finance and Growth of Real Output: A Time Series Application to Pakistan

Authors: Nazima Ellahi, Danish Ramzan

Abstract:

There is a growing recognition among development economists of the importance of the financial sector for economic development and growth. The development thus introduced helps to promote welfare and poverty alleviation. This study attempts to establish the nature of the link between Islamic banking finance and the growth of real output in Pakistan. A time series data set is utilized for the period 1990 to 2010. Following the Phillips-Perron (PP) and Augmented Dickey-Fuller (ADF) unit root tests, this study applied the Ordinary Least Squares (OLS) method of estimation and found encouraging results in favor of promoting Islamic banking practices in Pakistan.
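
The empirical recipe (unit root pre-tests followed by OLS) maps directly onto standard tooling. A sketch with statsmodels follows, run on placeholder series rather than the study's data; the Phillips-Perron test is not in statsmodels itself but is available, for instance, as PhillipsPerron in the arch package:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
years = 21                                    # 1990-2010, as in the study
islamic_finance = np.cumsum(rng.normal(2.0, 1.0, years))  # placeholder data
real_output = 0.5 * islamic_finance + rng.normal(0.0, 1.0, years)

# Step 1: ADF unit root test on each series.
for name, series in [("finance", islamic_finance), ("output", real_output)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"ADF {name}: stat={stat:.2f}, p={pvalue:.3f}")

# Step 2: OLS of real output on Islamic banking finance.
X = sm.add_constant(islamic_finance)
print(sm.OLS(real_output, X).fit().summary().tables[1])
```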

Keywords: Islamic finance, poverty alleviation, economic growth, finance, commerce

Procedia PDF Downloads 342
5497 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and subject to many combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch, or to implement existing ERP software for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project: if the blueprints created are normalized, then the software developers and configurators will use those modular blueprints and map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then the blueprinting of its business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: blueprint, ERP, modular, normalized

Procedia PDF Downloads 139
5496 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Single-person households and one-family households make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decline in fertility and the increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches are used for the analysis: (i) measures of geographic distribution; (ii) cluster mapping to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) pattern analysis, as a starting point for more in-depth analyses (geospatial regression) in the future. For the analysis of this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) are included in the analysis. Commonly produced measures of central tendency and spread include identification of the center of the point set (at the NUTS3 level) and of the median center, together with the standard distance, weighted standard distance, and standard deviational ellipses. Identifying that clustering exists in census household datasets does not by itself provide a detailed picture of the nature and pattern of that clustering, so it is helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, any particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which is applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and are applied to develop localized variants of almost any standard summary statistic. The local Moran's I gives an indication of the homogeneity and diversity of the household data at the municipal level.
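
Global Moran's I, the measure named above, is short enough to spell out. The sketch below uses an assumed binary contiguity matrix over four hypothetical municipalities and invented household shares:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), where z are
    deviations from the mean and W is a spatial weights matrix
    (binary contiguity in this sketch)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Four hypothetical municipalities on a line: 1-2-3-4 adjacency
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
share_single = [0.42, 0.40, 0.28, 0.25]   # invented household shares
print(f"Moran's I = {morans_i(share_single, W):.3f}")
```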

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 95
5495 The Design, Development, and Optimization of a Capacitive Pressure Sensor Utilizing an Existing 9DOF Platform

Authors: Andrew Randles, Ilker Ocak, Cheam Daw Don, Navab Singh, Alex Gu

Abstract:

Nine Degrees of Freedom (9 DOF) systems are already in development in many areas. In this paper, an integrated pressure sensor is proposed that will make use of an already existing monolithic 9 DOF inertial MEMS platform. Capacitive pressure sensors can suffer from limited sensitivity for a given size of membrane. This novel pressure sensor design increases the sensitivity by over 5 times compared to a traditional array of square diaphragms while still fitting within a 2 mm x 2 mm chip and maintaining a fixed static capacitance. The improved design uses one large diaphragm supported by pillars with fixed electrodes placed above the areas of maximum deflection. The design optimization increases the sensitivity from 0.22 fF/kPa to 1.16 fF/kPa. Temperature sensitivity was also examined through simulation.
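
The underlying scaling is the parallel-plate relation. The sketch below shows, to first order, how a pressure-induced gap change translates into a capacitance change; the dimensions, gap, and deflection rate are generic illustrative numbers, not the design's, and a real analysis would integrate the deflection profile rather than assume a uniform gap change:

```python
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def plate_capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

def sensitivity_fF_per_kPa(area_m2, gap_m, deflection_m_per_kPa):
    """First-order sensitivity: dC/dP ~ eps0 * A / d**2 * (dw/dP),
    treating the deflected diaphragm as a uniform gap change
    (a crude approximation for illustration only)."""
    dC_dP = EPS0 * area_m2 / gap_m**2 * deflection_m_per_kPa
    return dC_dP * 1e15          # F/kPa -> fF/kPa

# Generic numbers: 300 um square electrode, 2 um gap, 1 nm/kPa deflection
A = (300e-6) ** 2
print(f"C0 = {plate_capacitance(A, 2e-6)*1e15:.1f} fF")
print(f"S  = {sensitivity_fF_per_kPa(A, 2e-6, 1e-9):.2f} fF/kPa")
```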

Keywords: capacitive pressure sensor, 9 DOF, 10 DOF, sensor, capacitive, inertial measurement unit, IMU, inertial navigation system, INS

Procedia PDF Downloads 544
5494 World Peace and Conflict Resolution: A Solution from a Buddhist Point of View

Authors: Samitharathana R. Wadigala

Abstract:

Peace will not be established until self-consciousness awakens in human beings. In this nuclear age, the establishment of a lasting peace on earth represents the primary condition for the preservation of human civilization and the survival of human beings. Perhaps nothing is so important and indispensable as the achievement and maintenance of peace in the modern world. Peace in today's world implies much more than the mere absence of war and violence. In today's interdependent world, the United Nations needs to be representative of the modern world and democratic in its functioning, because it came into existence to save the generations from the scourge of war and conflict. Buddhism is a religion of peaceful co-existence and a philosophy of enlightenment. From the perspective of the Buddhist theory of interdependent origination (Paṭiccasamuppāda), violence and conflict are, like everything else in the world, products of causes and conditions. Buddhism is fully compatible with a congenial and peaceful global order; its canonical literature, doctrines, and philosophy are well suited to inter-faith dialogue, harmony, and universal peace. Even today, Buddhism can revive universal brotherhood, peaceful co-existence, and harmonious relations in the comity of nations. With its increasing vitality in regions around the world, many people today turn to Buddhism for relief and guidance, at a time when peace seems more than ever a deferred dream. From a Buddhist point of view, the roots of all unwholesome actions, i.e., greed, hatred, and delusion, are viewed as the root causes of all human conflict. Conflict often emanates from attachment to material things: pleasures, property, territory, wealth, economic dominance, or political superiority. Buddhism has particularly rich resources for dissolving conflict. This paper addresses the Buddhist perspective on the causes of conflict and the ways to resolve it in order to realize world peace. The world has enough to satisfy everybody's needs, but not everybody's greed.

Keywords: Buddhism, conflict-violence, peace, self-consciousness

Procedia PDF Downloads 208
5493 Design of Single Phase Smart Energy Meter and Grid Tied Inverter for Smart Grid

Authors: Hamza Arif, Haroon Javaid

Abstract:

This work is based on the hybrid energy concept of the smart grid, synchronizing and monitoring the power generated at the user end. The ATmega328P controller of an Arduino is used as the processing unit, exchanging wireless data between the user and the power utility through NRF24L01 wireless modules. Current and potential transformer circuits are designed to sense the voltage and current at the utility side and the power generated at the user end through a solar panel; these circuits are designed to interface with the Arduino. The approach is used to demonstrate the concept of the smart grid and to facilitate further advancements in the field of smart grid technology. A PWM (pulse width modulation) technique is used to synchronize the user's output power with the utility supply.
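
To make the synchronization step concrete, the sketch below generates the sinusoidal PWM reference an inverter would lock to the measured utility waveform. This is a generic SPWM comparison written in Python, not the paper's Arduino firmware, and the switching rate and modulation index are assumed:

```python
import math

def spwm_duty(t, grid_freq=50.0, grid_phase=0.0, mod_index=0.9):
    """Duty cycle for sinusoidal PWM: a sine reference, locked to the
    measured grid frequency and phase, would be compared against a
    triangular carrier; here the equivalent duty cycle (0..1) is
    returned directly."""
    ref = mod_index * math.sin(2.0 * math.pi * grid_freq * t + grid_phase)
    return 0.5 * (ref + 1.0)      # map [-1, 1] -> [0, 1]

# One 50 Hz fundamental period sampled at a 2 kHz switching rate (assumed)
T_sw = 1.0 / 2000.0
duties = [spwm_duty(k * T_sw) for k in range(40)]
print([round(d, 2) for d in duties[:10]])
```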

Keywords: smart grid, hybrid energy, grid tied inverter, PWM

Procedia PDF Downloads 19
5492 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demand and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company must pay for extra storage space, and some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company for electrical equipment and components, which faces uncertain customer demand, is considered. Customers' actual purchase orders do not equal the forecasts they provide: in some cases customers demand more, so the product is insufficient to meet their needs, while other customers demand less than estimated, causing insufficient storage space and dead stock. This study aims to reduce both the loss of sales opportunities and the quantity of goods remaining in the warehouse, using 30 of the company's most popular products as samples. Data were collected over the duration of the study, from January to October 2022. The forecasting methods used are the simple moving average, the weighted moving average, and exponential smoothing. The economic order quantity and the reorder point are then calculated to meet customer needs, and the results are tracked. The results are very beneficial to the company: it can reduce the loss of sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce unused products (dead stock) by up to 10%. This enables the company to order products more accurately, increasing profit and storage space.
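
The forecasting and replenishment calculations referred to above are each a few lines. The sketch below runs them on an invented demand history; the cost and lead-time inputs are placeholders, not the company's figures:

```python
import math

demand = [120, 95, 130, 110, 150, 140, 125, 160, 135, 145]  # invented units/mo

def simple_moving_average(xs, n=3):
    return sum(xs[-n:]) / n

def weighted_moving_average(xs, weights=(0.2, 0.3, 0.5)):
    return sum(w * x for w, x in zip(weights, xs[-len(weights):]))

def exponential_smoothing(xs, alpha=0.3):
    f = xs[0]
    for x in xs[1:]:
        f = alpha * x + (1.0 - alpha) * f
    return f

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0.0):
    return daily_demand * lead_time_days + safety_stock

print("SMA :", simple_moving_average(demand))
print("WMA :", weighted_moving_average(demand))
print("ES  :", round(exponential_smoothing(demand), 1))
print("EOQ :", round(eoq(12 * 130, order_cost=50.0, holding_cost=2.0)))
print("ROP :", reorder_point(daily_demand=4.3, lead_time_days=14,
                             safety_stock=30.0))
```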

Keywords: demand forecast, reorder point, lost sale, dead stock

Procedia PDF Downloads 118
5491 Applications for Accounting of Inherited Object-Oriented Class Members

Authors: Jehad Al Dallal

Abstract:

A class in an Object-Oriented (OO) system is the basic unit of design, and it encapsulates a set of attributes and methods. In OO systems, instead of redefining the attributes and methods that are included in other classes, a class can inherit these attributes and methods and implement only its unique attributes and methods, which reduces code redundancy and improves code testability and maintainability. This mechanism is called class inheritance. However, some software engineering applications may require accounting for all the inherited class members (i.e., attributes and methods). This paper explains how to account for inherited class members and discusses the software engineering applications that require such consideration.
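
In a language with introspection, accounting for inherited members reduces to walking the inheritance chain. The Python sketch below (with invented example classes) separates a class's own members from those contributed by its ancestors via the method resolution order:

```python
def accounted_members(cls):
    """Split a class's members into 'own' (defined in its body) and
    'inherited' (contributed by ancestors in the method resolution order)."""
    own = set(vars(cls)) - {"__dict__", "__weakref__"}
    inherited = {}
    for base in cls.__mro__[1:]:              # every ancestor, in MRO order
        for name in vars(base):
            if name not in own and name not in inherited:
                inherited[name] = base.__name__
    return own, inherited

class Shape:                                  # hypothetical example classes
    def __init__(self, name):
        self.name = name
    def describe(self):
        return f"a {self.name}"

class Circle(Shape):
    def area(self, r):
        return 3.14159 * r * r

own, inherited = accounted_members(Circle)
print("own:      ", sorted(n for n in own if not n.startswith("__")))
print("inherited:", {k: v for k, v in inherited.items()
                     if not k.startswith("__")})
```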

Keywords: class flattening, external quality attribute, inheritance, internal quality attribute, object-oriented design

Procedia PDF Downloads 268
5490 Electric Load Forecasting Based on Artificial Neural Network for Iraqi Power System

Authors: Afaneen Anwer, Samara M. Kamil

Abstract:

Load forecasting requires prediction accuracy as the basis of optimal operation and maintenance. Good accuracy is the foundation of economic dispatch, unit commitment, and system reliability. A good load forecasting system offers fast speed, automatic bad-data detection, and the ability to access the system automatically to obtain the needed data. In this paper, the formulation of the load forecasting problem is discussed, and the solution is obtained using an artificial neural network. A MATLAB environment has been used to solve the load forecasting schedule of the Iraqi super grid network, considering the daily load over three years. The obtained results showed good accuracy in predicting the forecast load.
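
Since the paper's MATLAB implementation is not reproduced here, a comparable back-propagation setup in Python sketches the idea: train a small feed-forward network on lagged daily loads. Synthetic data stands in for the Iraqi grid records, and the window length and network size are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
days = np.arange(3 * 365)                        # three years of daily records
load = (4000 + 800 * np.sin(2 * np.pi * days / 365)
        + rng.normal(0, 100, days.size))         # synthetic daily load [MW]

window = 7                                       # features: previous 7 days
X = np.stack([load[i:i + window] for i in range(load.size - window)])
y = load[window:]

scale = load.max()                               # crude normalisation
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:-30] / scale, y[:-30] / scale)      # hold out the last month
pred = model.predict(X[-30:] / scale) * scale
mape = np.mean(np.abs(pred - y[-30:]) / y[-30:]) * 100
print(f"hold-out MAPE: {mape:.1f}%")
```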

Keywords: load forecasting, neural network, back-propagation algorithm, Iraqi power system

Procedia PDF Downloads 580