Search results for: spatial time series
19404 Effect of Precursors Aging Time on the Photocatalytic Activity of ZnO Thin Films
Authors: N. Kaneva, A. Bojinova, K. Papazova
Abstract:
Thin ZnO films are deposited on glass substrates via a sol–gel method and dip-coating. The films are prepared from zinc acetate dihydrate as the starting reagent. The as-prepared ZnO sol is then aged for different periods (0, 1, 3, 5, 10, 15, and 30 days), and nanocrystalline thin films are deposited from the various sols. The effect of ZnO sol aging time on the structural and photocatalytic properties of the films is studied. The film surface is examined by Scanning Electron Microscopy. The effect of the aging time of the starting solution on the photocatalytic degradation of Reactive Black 5 (RB5) is studied by UV-vis spectroscopy. The experiments are conducted under UV-light illumination and in complete darkness. The variation of the absorption spectra shows the degradation of RB5 dissolved in water as a result of the reaction occurring on the surface of the films and promoted by UV irradiation. The initial concentrations of dye (5, 10 and 20 ppm) and the aging time are varied during the experiments. The results show that increasing the aging time of the ZnO starting solution generally promotes photocatalytic activity. The thin films obtained from the ZnO sol aged for 30 days show the best photocatalytic degradation of the dye (97.22%) in comparison with the freshly prepared ones (65.92%). The samples and photocatalytic experimental results are reproducible. Moreover, all films exhibit substantial activity in both UV light and darkness, which is promising for the development of new ZnO photocatalysts by the sol-gel method.
Keywords: ZnO thin films, sol-gel, photocatalysis, aging time
Procedia PDF Downloads 384
19403 Spectroscopic Characterization Approach to Study Ablation Time on Zinc Oxide Nanoparticles Synthesis by Laser Ablation Technique
Authors: Suha I. Al-Nassar, K. M. Adel, F. Zainab
Abstract:
This work was devoted to producing ZnO nanoparticles by pulsed laser ablation (PLA) of a Zn metal plate in an aqueous environment of cetyl trimethyl ammonium bromide (CTAB), using a Q-switched Nd:YAG pulsed laser (wavelength = 1064 nm, repetition rate = 10 Hz, pulse duration = 6 ns, laser energy = 50 mJ). The nanoparticle solution remains stable in colloidal form for a long time. The effect of ablation time on the optical properties and structure of ZnO was studied by UV-visible absorption. The UV-visible absorption spectrum shows four peaks at 256, 259, 265, and 322 nm for ablation times of 5, 10, 15, and 20 s, respectively. Our results show that the UV–vis spectra exhibit a blue shift in the presence of CTAB as the ablation time decreases, and this blue shift indicates a smaller nanoparticle size. The blue shift in the absorption edge indicates the quantum confinement property of the nanoparticles. In addition, FTIR transmittance spectra of the ZnO nanoparticles prepared under these conditions show a characteristic ZnO absorption at 435–445 cm⁻¹.
Keywords: zinc oxide nanoparticles, CTAB solution, pulsed laser ablation technique, spectroscopic characterization
Procedia PDF Downloads 382
19402 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector
Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar
Abstract:
Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in the residential sector of Alberta was 148.8 GJ per household in 2012, which is 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Energy intensity by appliances in Alberta was 15.3 GJ per household in 2012, 14% higher than the average of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology was based on data-intensive models that estimate the market penetration of appliances in the residential sector over a time period. The developed models are a function of a number of macroeconomic and technical parameters. The mathematical equations were developed based on twenty-two years of historical data (1990-2011), and the models were analyzed through a series of statistical tests. The market shares of high efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives and maximum achievable efficiency for the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050; freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increase in the market penetration rate of clothes washers and clothes dryers is nearly parallel: the stock of clothes washers and clothes dryers per household is expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
Keywords: appliances efficiency improvement, energy star, market penetration, residential sector
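As an illustration of the kind of cost-based market-share calculation described above, the sketch below splits the share between a standard and a high-efficiency appliance using an annualized life-cycle cost and a logit rule. All numbers, the logit form, and the function names are assumptions for illustration, not the calibrated model from the study.

```python
import math

def annualized_cost(capital, operating, discount_rate, lifetime_years):
    """Capital recovery factor times capital cost, plus annual operating cost."""
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / (
        (1 + discount_rate) ** lifetime_years - 1
    )
    return capital * crf + operating

def logit_market_share(costs, sensitivity=3.0):
    """Share of each option from a cost-based logit split (lower cost -> higher share)."""
    weights = [math.exp(-sensitivity * c / min(costs)) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative comparison: standard vs. ENERGY STAR refrigerator (assumed numbers).
standard = annualized_cost(capital=900.0, operating=80.0, discount_rate=0.06, lifetime_years=15)
efficient = annualized_cost(capital=1100.0, operating=55.0, discount_rate=0.06, lifetime_years=15)
shares = logit_market_share([standard, efficient])
print(f"standard share: {shares[0]:.2f}, high-efficiency share: {shares[1]:.2f}")
```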
Procedia PDF Downloads 292
19401 Analysis of a Discrete-time Geo/G/1 Queue Integrated with (s, Q) Inventory Policy at a Service Facility
Authors: Akash Verma, Sujit Kumar Samanta
Abstract:
This study examines a discrete-time Geo/G/1 queueing-inventory system operating under an (s, Q) inventory policy. Customers arrive according to a Bernoulli process, and each customer demands a single item with an arbitrarily distributed service time. The inventory is replenished by an outside supplier, and the replenishment lead time follows a geometric distribution. There is a single server and infinite waiting space in this facility. Demands must wait in the specified waiting area during a stock-out period, and customers are served on a first-come-first-served basis. With the help of the embedded Markov chain technique, we determine the joint probability distribution of the number of customers in the system and the number of items in stock at the post-departure epoch using the matrix analytic approach. We relate the system-length distributions at post-departure and outside observer's epochs to determine the joint probability distribution at the outside observer's epoch, and we use the probability distributions at random epochs to determine the waiting-time distribution. We obtain performance measures to construct a cost function, and the optimum values of the order quantity and reorder point are found numerically for a variety of model parameters.
Keywords: discrete-time queueing inventory model, matrix analytic method, waiting-time analysis, cost optimization
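The study derives the joint distributions analytically via the embedded Markov chain and matrix-analytic method; as a rough numerical cross-check, a slot-by-slot simulation of a discrete-time queueing-inventory system can be sketched as below. The parameter values, and the use of geometric service in place of a general service distribution, are simplifying assumptions.

```python
import random

def simulate(p=0.3, mu=0.5, s=3, Q=10, lead_p=0.2, slots=200_000, seed=1):
    """Discrete-time queueing-inventory sketch: Bernoulli(p) arrivals, geometric(mu)
    service, (s, Q) reorder policy, geometric(lead_p) replenishment lead time."""
    random.seed(seed)
    queue = 0          # customers waiting or in service
    stock = s + Q      # on-hand inventory
    on_order = False   # outstanding replenishment order
    in_service = False
    q_area = stock_area = 0
    for _ in range(slots):
        # arrival at the slot boundary
        if random.random() < p:
            queue += 1
        # start service only if a customer is waiting and an item is on hand
        if not in_service and queue > 0 and stock > 0:
            in_service = True
        # service completion with probability mu in each slot
        if in_service and random.random() < mu:
            queue -= 1
            stock -= 1
            in_service = False
        # place an order when stock falls to the reorder point
        if stock <= s and not on_order:
            on_order = True
        # replenishment of Q items arrives after a geometric lead time
        if on_order and random.random() < lead_p:
            stock += Q
            on_order = False
        q_area += queue
        stock_area += stock
    return q_area / slots, stock_area / slots

avg_queue, avg_stock = simulate()
print(f"average customers in system: {avg_queue:.2f}, average stock: {avg_stock:.2f}")
```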
Procedia PDF Downloads 52
19400 Cd1−xMnxSe Thin Films Preparation by CBD: Aspect on Optical and Electrical Properties
Authors: Jaiprakash Dargad
Abstract:
CdMnSe dilute magnetic (semimagnetic) semiconductors have become the focus of intense research due to their interesting combination of magnetic and semiconducting properties, and they are employed in a variety of devices including solar cells, gas sensors, etc. A series of thin films of this material, Cd1−xMnxSe (0 ≤ x ≤ 0.5), was therefore synthesized onto precleaned amorphous glass substrates using a solution growth technique. The sources of cadmium (Cd2+) and manganese (Mn2+) were aqueous solutions of cadmium sulphate and manganese sulphate, and selenium (Se2−) was extracted from a reflux of sodium selenosulphite. The deposition parameters, such as temperature, time of deposition, speed of mechanical churning and pH of the reaction mixture, were optimized to yield good-quality deposits. The as-grown samples were thin, relatively uniform, smooth and tightly adherent to the substrate support. The colour of the deposits changed from deep red-orange to yellowish-orange as the composition parameter x was varied from 0 to 0.5. The terminal layer thickness decreased with increasing value of x. The optical energy gap decreased from 1.84 eV to 1.34 eV as x changed from 0 to 0.5. The coefficient of optical absorption is of the order of 10⁴–10⁵ cm⁻¹, and the type of transition (m = 0.5) is of the direct band-to-band type. The dc electrical conductivities were measured at room temperature and in the temperature range 300 K - 500 K. It was observed that the room-temperature electrical conductivity increased with the composition parameter x up to 0.1, gradually decreasing thereafter. Thermopower measurements showed n-type conduction in these films.
Keywords: dilute semiconductor, reflux, CBD, thin film
Procedia PDF Downloads 235
19399 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems
Authors: Semih Demir, Anil Celebi
Abstract:
Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate at the same clock frequency. In this study, we aimed to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using the time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with a software model. Field tests are also performed using a real-time TDMoIP system.
Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization
Procedia PDF Downloads 282
19398 Affectivity of Smoked Edible Sachet in Preventing Oxidation of Natural Condiment Stored in Ambient Temperature
Authors: Feny Mentang, Roike Iwan Montolalu, Henny Adeleida Dien, Kristhina P. Rahael, Tomy Moga, Ayub Meko, Siegfried Berhimpon
Abstract:
Smoked fish is one of the famous fish products of North Sulawesi, Indonesia. Research on producing smoked fish using liquid smoke, and on the use of that product as the main flavour for a new "natural condiment", has been carried out, including a series of studies to find materials for sachets. This research aims to determine the effectiveness of smoked edible sachets in preventing oxidation of a natural condiment stored at ambient temperature. Two natural condiment flavours were used, i.e. smoked skipjack flavour and seafood flavour. Three sachet variants were used for the natural condiments, i.e. no sachet, edible sachet without liquid smoke, and edible sachet with liquid smoke. The natural condiments were then stored at ambient temperature for 0, 10, 20, and 30 days. To determine the effectiveness of the edible sachets in preventing oxidation, analyses of TBA, water content, and pH were conducted. The results show that the natural condiment with smoked seafood flavour had higher TBA values than that with smoked skipjack flavour. The edible sachet had a highly significant effect (P < 0.01) on TBA. The natural condiment in the smoked edible sachet had a lower TBA than the condiment without a sachet or in a sachet without liquid smoke. The longer the storage time, the higher the TBA, especially for the samples without a sachet and in sachets without liquid smoke. There was no significant effect (P > 0.05) of the edible sachet on water content and pH.
Keywords: edible sachet, smoke liquid, natural condiment, oxidation
Procedia PDF Downloads 517
19397 The Effect of a 12 Week Rhythmic Movement Intervention on Selected Biomotor Abilities on Academy Rugby Players
Authors: Jocelyn Solomons, Kraak
Abstract:
Rhythmic movement, also referred to as "dance", involves the execution of different motor skills as well as the integration and sequencing of actions between limbs, timing and spatial precision. The aim of this study was therefore to investigate and compare the effect of a 16-week rhythmic movement intervention on flexibility, dynamic balance, agility, power and local muscular endurance of academy rugby players in the Western Cape, according to positional groups. Players (N = 54) (age 18.66 ± 0.81 years; height 1.76 ± 0.69 cm; weight 76.77 ± 10.69 kg) were randomly divided into a treatment-control [TCA] (n = 28) and a control-treatment [CTB] (n = 26) group. In this crossover experimental design, the interaction effect of the treatment order and the treatment time between the TCA and CTB groups was determined. Results indicated a statistically significant improvement (p < 0.05) in agility 2 (p = 0.06), power 2 (p = 0.05), local muscular endurance 1 (p = 0.01) and 3 (p = 0.01), and dynamic balance (p < 0.01). Likewise, forwards and backs also showed statistically significant improvements (p < 0.05) per positional group. Therefore, a rhythmic movement intervention has the potential to improve rugby-specific bio-motor skills and, furthermore, to improve position-specific skills should it be designed with positional groups in mind. Future studies should investigate not only the effect of rhythmic movement on improving specific rugby bio-motor skills, but also the potential of its application as an alternative training method during the off-season (or detraining phases) or as a recovery method.
Keywords: agility, dance, dynamic balance, flexibility, local muscular endurance, power, training
Procedia PDF Downloads 65
19396 Application of IoTs Based Multi-Level Air Quality Sensing for Advancing Environmental Monitoring in Pingtung County
Authors: Men An Pan, Hong Ren Chen, Chih Heng Shih, Hsing Yuan Yen
Abstract:
Pingtung County is located in the southernmost region of Taiwan. During the winter season, insufficient dispersion of pollutants caused by the downwash of the northeast monsoon leads to poor air quality in the County. Various control methods have been implemented, including air pollution permits, air pollution fee collection, control of oil fumes from the catering sector, smoke detection for diesel vehicles, regular inspection of motorcycles, and subsidies for low-polluting vehicles. To further mitigate air pollution, additional control strategies are also carried out, such as construction site controls, prohibition of open-air agricultural waste burning, improvement of river dust, and strengthened road cleaning operations. These combined efforts have significantly reduced air pollutants in the County. In order to monitor the ambient air quality effectively and promptly, the County has subsequently deployed micro-sensors: a total of 400 IoT (Internet of Things) micro-sensors for PM2.5 and VOC detection, together with 3 air quality monitoring stations of the Environmental Protection Agency (EPA), covering the 33 townships of the County. The covered area has more than 1,300 listed factories and 5 major industrial parks, thus forming an IoT-based multi-level air quality monitoring system. The results demonstrate the effectiveness of combining the IoT multi-level air quality sensors with other strategies such as "sand and gravel dredging area technology monitoring", "banning open burning", "intelligent management of construction sites", "real-time notification of activation response", "nighthawk early bird plan with micro-sensors", "unmanned aircraft (UAV) combined with land and air monitoring of abnormal emissions", and "animal husbandry odour detection service". Satisfaction with air quality control, according to a 2021 public survey, reached 81%, an increase of 46% compared to 2018. The total number of air pollution complaints in 2021 was 4,213, in contrast to 7,088 in 2020, a reduction of almost 41%. Because of the spatial-temporal features of the micro-sensor-based IoT air quality monitoring system, it assists and strengthens the existing EPA air quality monitoring network and provides real-time control of air quality, so hot spots and potential pollution locations can be determined in a timely manner for law enforcement. Hence, remarkable results were obtained over the two years: both a reduction in public complaints and better air quality were successfully achieved through the implementation of the present IoT system for real-time air quality monitoring throughout Pingtung County.
Keywords: IoT, PM, air quality sensor, air pollution, environmental monitoring
Procedia PDF Downloads 77
19395 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM
Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari
Abstract:
Computations of two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the oscillating one. The incompressible Navier-Stokes equations, in conjunction with Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modelling, k-ω SST with a low-Reynolds correction is employed to capture the unsteady phenomena occurring in the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics at pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated with the corresponding experimental data possessed by the authors. The results indicate that implementing the mentioned turbulence model leads to accurate prediction of the static stall angle for the stationary airfoil and of flow separation, the dynamic stall phenomenon, and reattachment of the flow on the airfoil surface for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes is formed at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and evolution of the trailing-edge vortex at the near- and post-stall regions, where this process determines the dynamic stall phenomenon.
Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine
Procedia PDF Downloads 210
19394 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi
Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale
Abstract:
Spending a long time in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. However, by using queuing analysis to understand the nature of the problem and the efficiency of the service system, such problems can be abated. Depending on the kind of service, the literature proposes different possible queuing models. However, rather than using the generalized assumed models proposed by the literature, the use of real-time case study data can help in gaining a deeper understanding of the particular problem model and of how such a model can vary from one day to another and from one case to another. As such, this study uses data obtained from one urban health centre (HC) for BP, paediatric and general OPD cases to investigate the average queuing time for patients within the system. It seeks to identify the proper queuing model by investigating the distribution functions of patients' arrival times, inter-arrival times, waiting times and service times. Compared with the standard values set by the WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, a commonly assumed M/M/1 model generally failed to fit the data, whereas an M/Er/2 model was demonstrated to fit well. An M/Er/3 model appeared to be good in terms of measuring resource utilization, suggesting a need to increase medical personnel at this HC, whereas an M/Er/4 model was shown to cause more idleness of human resources.
Keywords: health care, out-patient department, queuing model, sensitivity analysis
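For readers unfamiliar with the candidate models above, the simplest baseline, M/M/1, has closed-form steady-state measures that can be computed directly from estimated arrival and service rates; the sketch below uses placeholder rates rather than the Matawale data.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 measures; rates in patients per hour."""
    rho = arrival_rate / service_rate          # server utilization
    if rho >= 1:
        raise ValueError("Queue is unstable: arrival rate >= service rate")
    l_q = rho ** 2 / (1 - rho)                 # mean number of patients waiting
    w_q = l_q / arrival_rate                   # mean waiting time (hours)
    w = w_q + 1 / service_rate                 # mean time in the system
    return {"utilization": rho, "Lq": l_q, "Wq_hours": w_q, "W_hours": w}

# Placeholder rates for illustration only.
print(mm1_metrics(arrival_rate=10.0, service_rate=12.0))
```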
Procedia PDF Downloads 439
19393 Experimental Study on Ultrasonic Shot Peening Forming and Surface Properties of AALY12
Authors: Shi-hong Lu, Chao-xun Liu, Yi-feng Zhu
Abstract:
Ultrasonic shot peening (USP) of AALY12 sheet was studied. Several quantities (arc heights, surface roughness, surface topography and microhardness) were measured for different USP process parameters. The results show that the radius of curvature of the shot-peened sheet increases as processing time and electric current decrease, and increases with increasing pin diameter, reaching a saturation level after a specific processing time and electric current. An empirical model of the relationship between the radius of curvature and the pin diameter, electric current and time was also obtained. The research shows that the increase in surface and vertical microhardness of the material is more pronounced with longer time and higher electric current, reaching up to 20% and 28%, respectively.
Keywords: USP forming, surface properties, radius of curvature, residual stress
Procedia PDF Downloads 521
19392 Time Lag Analysis for Readiness Potential by a Firing Pattern Controller Model of a Motor Nerve System Considered Innervation and Jitter
Authors: Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh
Abstract:
Humans make an unconscious preparation, called the readiness potential (RP), before becoming aware of their own decisions. For example, when recognizing a button and pressing it, RP peaks are observed 200 ms before the initiation of the movement. It has been known that preparatory movements are acquired before actual movements, but it is still not well understood how humans obtain the RP during their development. On the question of why the brain must respond earlier, we assume that humans have to adapt to dangerous environments to survive and therefore acquire behavior that covers the various time lags distributed throughout the body. Without the RP, humans could not act quickly enough to avoid dangerous situations. In taking action, the brain makes decisions, signals are transmitted through the spinal cord to the muscles, and the body moves according to the laws of physics. Our research focuses on the time lag of the neural signal transmitted from the brain to the muscle via the spinal cord; this time lag is one of the essential factors for the readiness potential. We propose a firing pattern controller model of a motor nerve system that considers innervation and jitter, which produce time lag. In our simulation, we incorporate innervation and jitter into our proposed muscle-skeleton model, because these two factors can create infinitesimal time lags. The Q10 Hodgkin-Huxley model is adopted to calculate action potentials, because the refractory period produces a more significant time lag for continuous firing. Keeping the muscle power constant requires cooperative firing of motor neurons, because the refractory period stifles continuous firing of a single neuron. One more factor producing time lag is slow- or fast-twitch behavior; the expanded Hill-type model is adopted to calculate power and time lag. We simulate our muscle-skeleton model by controlling the firing pattern and discuss the relationship between the physical and neural time lags. For this discussion, we analyze the time lags in a simulation of knee bending. The law of inertia caused the most influential time lag; the next most important lag was the time needed to generate the action potential, which is affected by innervation and jitter. In our simulation, the time lag at the beginning of the knee movement is 202 ms to 203.5 ms. This suggests that the readiness potential should be prepared more than 200 ms before decision making.
Keywords: firing patterns, innervation, jitter, motor nerve system, readiness potential
Procedia PDF Downloads 832
19391 New-Born Children and Marriage Stability: An Evaluation of Divorce Risk Based on 2010-2018 China Family Panel Studies Data
Authors: Yuchao Yao
Abstract:
Increasing divorce rates and decreasing fertility rates, two of the main characteristics of Chinese demographic trends, have both shaped the population structure in the recent decade. Figuring out to what extent having a child makes a difference to a couple's divorce risk will not only draw a picture of Chinese families but also bring a new perspective for evaluating Chinese child-bearing policies. Based on China Family Panel Studies (CFPS) data from 2010 to 2018, this paper provides a systematic evaluation of how children influence a couple's marital stability through a series of empirical models. Using survival analysis and a propensity score matching (PSM) model, this paper finds that the number and age of the children a couple has matter in consolidating the marital relationship, and these effects vary little over time; during the last decade, newly having children can in fact decrease the probability of divorce for Chinese couples, and this decreasing effect is largely due to the birth of a second child. As this is an inclusive attempt to study and compare not only the effects but also the causality of children on divorce risk in the last decade, the results of this research offer a good summary of the status quo of divorce in China. Furthermore, this paper provides implications for further reform of the current marriage and child-bearing policies.
Keywords: divorce risk, fertility, China, survival analysis, propensity score matching
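A hedged sketch of the survival-analysis-plus-matching workflow described above is given below, using the lifelines and scikit-learn libraries; the column names and covariates are illustrative placeholders, not the CFPS variable names or the paper's exact specification.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_cox(df):
    """Match couples with a new birth to similar couples without one, then fit a
    Cox model for divorce risk on the matched sample. Column names are illustrative:
    'new_child' (treatment), 'duration' (years observed), 'divorced' (event),
    plus covariates used for the propensity score."""
    covariates = ["wife_age", "husband_age", "education", "income"]
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["new_child"])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df["new_child"] == 1]
    control = df[df["new_child"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

    cph = CoxPHFitter()
    cph.fit(matched[["duration", "divorced", "new_child"] + covariates],
            duration_col="duration", event_col="divorced")
    return cph

# Usage (assuming a prepared couple-level panel): psm_cox(panel_df).print_summary()
```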
Procedia PDF Downloads 78
19390 Investigate the Effects of Anionic Surfactant on THF Hydrate
Authors: Salah A. Al-Garyani, Yousef Swesi
Abstract:
Gas hydrates can be hazardous to upstream operations. On the other hand, the high gas storage capacity of hydrates may be utilized for natural gas storage and transport. Research on the promotion of hydrate formation, as related to natural gas storage and transport, has received relatively little attention. The primary objective of this study is to gain a better understanding of the effects of ionic surfactants, particularly their molecular structure and concentration, on the formation of tetrahydrofuran (THF) hydrate, which is often used as a model hydrate former for screening hydrate promoters or inhibitors. The surfactants studied were sodium n-dodecyl sulfate (SDS) and sodium n-hexadecyl sulfate (SHS). Our results show that, at concentrations below the solubility limit, the induction time decreases with increasing surfactant concentration. At concentrations near or above the solubility limit, however, the surfactant concentration no longer has any effect on the induction time. These observations suggest that the effect of the surfactant on THF hydrate formation is associated with surfactant monomers, not with the formation of micelles as previously reported. The lowest induction time (141.25 ± 21 s, n = 4) was observed in a solution containing 7.5 mM SDS. The induction time decreases by a factor of three at concentrations near or above the solubility limit, compared to that without surfactant.
Keywords: tetrahydrofuran, hydrate, surfactant, induction time, monomers, micelle
Procedia PDF Downloads 414
19389 Basic Modal Displacements (BMD) for Optimizing the Buildings Subjected to Earthquakes
Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia
Abstract:
In structural optimization through meta-heuristic algorithms, structures are analyzed many times. For this reason, performing the analyses in a time-saving way is valuable. This point is even more important in time-history analyses, which are very time-consuming. To this end, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether performed with the square root of the sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed; these are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each of the BMD in time and amplitude and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for conventional time-history and spectrum analyses in such problems.
Keywords: basic modal displacements, earthquake, optimization, spectrum
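A minimal numerical sketch of the superposition step is shown below: stored basic modal displacements are scaled in amplitude and combined through the mode-shape matrix (time scaling is omitted for brevity). The mode shapes, histories and scale factors are synthetic stand-ins, not results from the paper.

```python
import numpy as np

def superpose_responses(mode_shapes, basic_modal_disp, scales):
    """Recover nodal displacement histories u(t) = sum_i scale_i * phi_i * q_i(t).

    mode_shapes:       (n_dof, n_modes) matrix of mode shapes phi_i
    basic_modal_disp:  (n_modes, n_steps) basic modal displacements q_i(t)
                       stored from the initial design
    scales:            (n_modes,) amplitude scale factors for the new design
    """
    scaled = basic_modal_disp * np.asarray(scales)[:, None]
    return mode_shapes @ scaled            # (n_dof, n_steps)

# Synthetic two-mode, three-DOF example.
t = np.linspace(0.0, 10.0, 1001)
phi = np.array([[1.0, 1.0], [0.8, -0.5], [0.5, -1.0]])
q = np.vstack([np.sin(2.0 * np.pi * 1.0 * t), 0.3 * np.sin(2.0 * np.pi * 3.0 * t)])
u = superpose_responses(phi, q, scales=[1.0, 0.7])
print(u.shape)   # (3, 1001)
```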
Procedia PDF Downloads 364
19388 Performance Evaluation and Kinetics of Artocarpus heterophyllus Seed for the Purification of Paint Industrial Wastewater by Coagulation-Flocculation Process
Authors: Ifeoma Maryjane Iloamaeke, Kelvin Obazie, Mmesoma Offornze, Chiamaka Marysilvia Ifeaghalu, Cecilia Aduaka, Ugomma Chibuzo Onyeije, Claudine Ifunanaya Ogu, Ngozi Anastesia Okonkwo
Abstract:
This work investigated the effects of pH, settling time, and coagulant dosage on the removal of color, turbidity, and heavy metals from paint industrial wastewater using the seed of Artocarpus heterophyllus (AH) in a coagulation-flocculation process. The paint effluent was characterized physicochemically, while the AH coagulant was characterized instrumentally by Scanning Electron Microscopy (SEM), Fourier Transform Infrared spectroscopy (FTIR), and X-ray diffraction (XRD). A jar test experiment was used for the coagulation-flocculation process. The results showed that the paint effluent was polluted with color, turbidity (36,000 NTU), mercury (1.392 mg/L), lead (0.252 mg/L), arsenic (1.236 mg/L), TSS (63.40 mg/L), and COD (121.70 mg/L). The maximum color removal efficiency was 94.33% at a dosage of 0.2 g/L and pH 2 at a constant time of 50 min, and 74.67% at constant pH 2, a coagulant dosage of 0.2 g/L and 50 min. The highest turbidity removal efficiency was 99.94% at 0.2 g/L and 50 min at constant pH 2, and 96.66% at pH 2 and 0.2 g/L at a constant time of 50 min. A mercury removal efficiency of 99.29% was achieved at the optimal condition of 0.8 g/L coagulant dosage, pH 8, and a constant time of 50 min, and 99.57% at a coagulant dosage of 0.8 g/L and a time of 50 min at constant pH 8. The highest lead removal efficiency was 99.76% at a coagulant dosage of 10 g/L and a time of 40 min at constant pH 10, and 96.53% at pH 10 and a coagulant dosage of 10 g/L at a constant time of 40 min. For arsenic, the removal efficiency was 75.24% at 0.8 g/L coagulant dosage, a time of 40 min, and a constant pH of 8. XRD imaging showed that the Artocarpus heterophyllus coagulant was crystalline before treatment and changed to amorphous after treatment. The SEM and FTIR results for the AH coagulant and sludge suggested changes in the surface morphology and functional groups before and after treatment. The reaction kinetics were best described by a second-order model.
Keywords: Artocarpus heterophyllus, coagulation-flocculation, coagulant dosages, settling time, paint effluent
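As a small illustration of the second-order kinetics mentioned above, the sketch below fits the integrated second-order rate law 1/C_t − 1/C_0 = k·t to residual-dye concentrations by least squares; the data points are placeholders, not measurements from the study.

```python
import numpy as np

def second_order_rate_constant(time_min, conc_mg_l):
    """Fit 1/C_t - 1/C_0 = k*t by least squares and return k (L mg^-1 min^-1)."""
    t = np.asarray(time_min, dtype=float)
    c = np.asarray(conc_mg_l, dtype=float)
    y = 1.0 / c - 1.0 / c[0]
    k = np.sum(t * y) / np.sum(t * t)     # slope of a line forced through the origin
    return k

# Placeholder residual-colour data (mg/L) over a 50-minute treatment.
t = [0, 10, 20, 30, 40, 50]
c = [20.0, 9.5, 6.3, 4.7, 3.8, 3.1]
print(f"k = {second_order_rate_constant(t, c):.4f} L mg^-1 min^-1")
```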
Procedia PDF Downloads 104
19387 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As in all industries, architects are using increasing amounts of automation in practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions about design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, locations and projects. The methodology is informed by the design of this framework, taking on aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a quantification framework enables various practical uses, such as predicting which tasks in the architectural industry will be automated, as well as making better-informed decisions about automation at multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using the principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can be recursively split further as required; a data-structure sketch of this idea is given below. Each task is then assigned a series of milestones that allow an objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much of each part of the architectural design process is automated. The data gathered in the paper serve the dual purposes of validating the framework and giving insight into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new automation milestone in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, the limitations of the framework and further areas of study are examined.
Keywords: analysis, architecture, automation, design process, technology
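The sketch below illustrates one possible encoding of such a recursive task breakdown, with an automation fraction aggregated bottom-up over equally weighted subtasks; the task names, fractions and weighting rule are invented for illustration and are not the paper's framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """A node in the design-process work breakdown structure."""
    name: str
    automated_fraction: float = 0.0      # 0.0 = fully manual, 1.0 = fully automated
    subtasks: List["Task"] = field(default_factory=list)

    def automation_level(self) -> float:
        """Recursively average the automation of leaf tasks (equal weights assumed)."""
        if not self.subtasks:
            return self.automated_fraction
        return sum(t.automation_level() for t in self.subtasks) / len(self.subtasks)

# Invented example breakdown of one slice of the design process.
design = Task("Design a building", subtasks=[
    Task("Concept massing", subtasks=[
        Task("Site analysis", 0.5),
        Task("Generative option studies", 0.8),
    ]),
    Task("Documentation", subtasks=[
        Task("Drawing production", 0.6),
        Task("Specification writing", 0.2),
    ]),
])
print(f"estimated automation level: {design.automation_level():.0%}")
```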
Procedia PDF Downloads 108
19386 Response of Chickpea (Cicer arietinum L.) Genotypes to Drought Stress at Different Growth Stages
Authors: Ali. Marjani, M. Farsi, M. Rahimizadeh
Abstract:
Chickpea (Cicer arietinum L.) is one of the important grain legume crops in the world. However, drought stress is a serious threat to chickpea production, and the development of drought-resistant varieties is a necessity. Field experiments were conducted to evaluate the response of 8 chickpea genotypes (MCC 696, 537, 80, 283, 392, 361, 252, 397) to drought stress imposed at different growth stages (S1: non-stress, S2: stress at the vegetative growth stage, S3: stress at early bloom, S4: stress at the early pod-visible stage). The experiment was arranged in a split-plot design with four replications. Differences among the drought stress timings were found to be significant for the investigated traits except biological yield. Differences among genotypes were observed for flowering time, pod formation time, physiological maturation time and yield. Plant height was reduced by drought stress at the vegetative growth stage, and stem dry weight was reduced by drought stress at the pod-visible stage. Flowering time, maturation time, pod number, number of seeds per plant and yield were also reduced by drought stress at flowering. The correlations of yield with the number of seeds per plant and with biological yield were positive. MCC283 and MCC696 were the most tolerant genotypes. These results demonstrate that drought stress delays phenological growth in chickpea and that the flowering stage is sensitive.
Keywords: chickpea, drought stress, growth stage, tolerance
Procedia PDF Downloads 265
19385 The Effect of Adolescents’ Grit on Stem Creativity: The Mediation of Creative Self-Efficacy and the Moderation of Future Time Perspective
Authors: Han Kuikui
Abstract:
Adolescents, as the reserve force of technological innovation talent, possess STEM creativity that is not only pivotal to achieving STEM education goals but also provides a viable path for reforming science curricula in compulsory education and cultivating innovative talent in China. To investigate the relationships among adolescents' grit, creative self-efficacy, future time perspective, and STEM creativity, a survey was conducted in 2023 using stratified random sampling. A total of 1,263 junior high school students from the main urban areas of Chongqing, from grade 7 to grade 9, were sampled. The results indicated that (1) grit significantly and positively predicts adolescents' creative self-efficacy and STEM creativity; (2) creative self-efficacy mediates the positive relationship between grit and adolescents' STEM creativity; (3) the mediating role of creative self-efficacy is moderated by future time perspective, such that with a higher future time perspective, the positive predictive effect of grit on creative self-efficacy is stronger, which in turn positively affects their STEM creativity.
Keywords: grit, stem creativity, creative self-efficacy, future time perspective
Procedia PDF Downloads 59
19384 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools such as household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to give an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model, 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data in which key markers of poverty and slums, roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, namely explainable Artificial Intelligence, through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
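A minimal sketch of the agreement metric used in the comparison, Spearman's rank correlation between survey wealth quintiles and the two sets of estimates, is shown below with dummy data standing in for the 608 clusters.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Dummy stand-ins for 608 clusters: survey wealth quintiles and two sets of estimates.
wealth_quintile = rng.integers(1, 6, size=608)
human_rating = np.clip(wealth_quintile + rng.normal(0, 2.0, 608), 1, 5)
model_score = wealth_quintile + rng.normal(0, 0.8, 608)

rho_human, _ = spearmanr(wealth_quintile, human_rating)
rho_model, _ = spearmanr(wealth_quintile, model_score)
print(f"human readers rho = {rho_human:.2f}, model rho = {rho_model:.2f}")
```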
Procedia PDF Downloads 113
19383 Anomaly Detection in Financial Markets Using Tucker Decomposition
Authors: Salma Krafessi
Abstract:
Financial markets are a multifaceted, intricate environment in which enormous volumes of data are produced every day. To find investment opportunities, possible fraudulent activity, and market oddities, accurate anomaly identification in these data is essential. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker Decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted into a three-dimensional tensor format that contains internal characteristics and temporal sequences in a sliding-window structure. The tensor is then broken down using Tucker Decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker Decomposition serves as a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker Decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door to more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models
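A minimal sketch of this pipeline, assuming the TensorLy library, a sliding-window tensor of returns and rolling volatility, assumed Tucker ranks, and a three-sigma threshold on the reconstruction error, is given below; none of these specific choices are taken from the study itself.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tucker_anomaly_scores(prices, window=30, ranks=(8, 5, 2)):
    """Stack per-window features (returns, rolling volatility) into a
    (n_windows, window, n_features) tensor and score windows by Tucker
    reconstruction error."""
    returns = np.diff(np.log(prices))
    vol = np.array([returns[max(0, i - 5):i + 1].std() for i in range(len(returns))])
    feats = np.stack([returns, vol], axis=-1)               # (T, 2)
    windows = np.stack([feats[i:i + window] for i in range(len(feats) - window + 1)])
    tensor = tl.tensor(windows)

    core, factors = tucker(tensor, rank=list(ranks))
    recon = tl.tucker_to_tensor((core, factors))
    errors = np.linalg.norm(windows - tl.to_numpy(recon), axis=(1, 2))
    threshold = errors.mean() + 3 * errors.std()            # assumed 3-sigma rule
    return errors, np.where(errors > threshold)[0]

# Synthetic price series with one injected shock, for illustration only.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
prices[700:710] *= 0.9
errors, anomalies = tucker_anomaly_scores(prices)
print("flagged windows:", anomalies[:10])
```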
Procedia PDF Downloads 74
19382 Chitosan Hydrogel Containing Nitric Oxide Donors with Potent Antibacterial Effect
Authors: Milena Trevisan Pelegrino, Bruna De Araujo Lima, Mônica H. M. Do Nascimento, Christiane B. Lombello, Marcelo Brocchi, Amedea B. Seabra
Abstract:
Nitric oxide (NO) is a small molecule involved in a wide range of physiological and pathophysiological processes, including vasodilatation, control of inflammatory pain, wound healing, and antibacterial activity. As NO is a free radical, the design of drugs that generate therapeutic amounts of NO in a spatially and temporally controlled manner is still a challenge. In this study, the NO donor S-nitrosoglutathione (GSNO) was incorporated into a thermoresponsive Pluronic F-127 (PL) - chitosan (CS) hydrogel, using an easy and economically feasible methodology. CS is a polysaccharide with known antimicrobial and biocompatibility properties. Scanning electron microscopy, rheology and differential scanning calorimetry were used for hydrogel characterization. The results demonstrated that the hydrogel has a smooth surface, thermoresponsive behavior, and good mechanical stability. The kinetics of NO release and GSNO diffusion from the GSNO-containing PL/CS hydrogel demonstrated sustained NO/GSNO release, in concentrations suitable for biomedical applications, at physiological and skin temperatures. The GSNO-PL/CS hydrogel demonstrated concentration-dependent toxicity to Vero cells and antimicrobial activity against Pseudomonas aeruginosa (minimum inhibitory concentration and minimum bactericidal concentration values of 0.5 µg·mL-1 of hydrogel, which corresponds to 1 mmol·L-1 of GSNO). Interestingly, the concentration range in which the NO-releasing hydrogel demonstrated an antibacterial effect was not toxic to the Vero mammalian cells. Thus, the GSNO-PL/CS hydrogel is a suitable biomaterial for topical NO delivery applications.
Keywords: antimicrobial, chitosan, biocompatibility, S-nitrosothiols
Procedia PDF Downloads 191
19381 Impact of Regulation on Trading in Financial Derivatives in Europe
Authors: H. Florianová, J. Nešleha
Abstract:
Financial derivatives are considered risky investment instruments that could possibly bring about another financial crisis. As a preventive measure, the European Union and its member states have released new legal acts regulating this area of law in recent years. There have been several cases in the history of capital markets worldwide showing that legislation may affect the behavior of subjects on capital markets. In our paper, we analyze the main events on selected European stock exchanges in order to apply them to three chosen markets - the Czech capital market represented by the Prague Stock Exchange, the German capital market represented by Deutsche Börse, and the Polish capital market represented by the Warsaw Stock Exchange. We follow the time series of the number of listed derivatives on these three stock exchanges in order to evaluate the popularity of those exchanges. Afterwards, we compare newly listed derivatives in relation to the speed of development of these exchanges, and we also compare trends in derivatives with trends in share development. We explain how legal regulation may affect the situation on capital markets. If the regulation is too strict, potential investors or traders are not willing to accept it and move to other markets; on the other hand, if the regulation is too vague, trading scandals occur and the market is not reliable from the perspective of potential investors or issuers. We find that making regulation stricter usually discourages subjects from staying on the market almost immediately, whereas making regulation vaguer in order to attract more subjects is usually a much slower process.
Keywords: capital markets, financial derivatives, investors' behavior, regulation
Procedia PDF Downloads 272
19380 Restoration of Railway Turnout Frog with FCAW
Authors: D. Sergejevs, A. Tipainis, P. Gavrilovs
Abstract:
Railway turnout frogs restored with MMA welding often have defects such as infusions and pores, which, under the influence of dynamic forces, cause premature destruction of the restored surfaces. To prolong the operational life of the turnout frog, i.e. the operational life of the restored surface, a turnout frog was restored using FCAW, and a metallographic examination was performed afterwards. The experimental study revealed that railway turnout frogs restored with FCAW had better quality than elements restored with MMA; furthermore, the process provided considerable time savings.
Keywords: elements of railway turnout, FCAW, metallographic examination, quality of build-up welding
Procedia PDF Downloads 647
19379 Behavior Adoption on Marine Habitat Conservation in Indonesia
Authors: Muhammad Yayat Afianto, Darmawan, Agung Putra Utama, Hari Kushardanto
Abstract:
Fish Forever, Rare's innovative coastal fisheries program, combines a community-based conservation management approach with spatial management to restore and protect Indonesia's small-scale fisheries by establishing a Fishing Managed Access Area. A 'TURF-Reserve' is a fishery management approach that positions fishers at the center of fisheries management, empowering them to take care of and make decisions about the future of their fishery. After two years of the program, social marketing campaigns succeeded in getting fishers to adopt new conservation behavior. The Pride-TURF-R campaigns developed an overarching hypothesis of impact that captured the knowledge, attitude and behavior changes needed to reduce threats and achieve conservation results. Rare helped the Batu Belah fishers to develop their group, with defined roles, a sustainable fisheries plan, and a budget plan. On 12 February 2017, the Head of Loka Kawasan Konservasi Perairan Nasional (LKKPN), a Technical Implementation Unit for National Marine Conservation Areas directly responsible to the Directorate General for Marine Spatial Management in the Ministry of Marine Affairs and Fisheries, signed a partnership agreement with the Head of Batu Belah Village to manage a TURF+Reserve area of 909 hectares. The fishers' group has been collecting catch data and submitting monthly reports, initiated the installation of buoy markers for the No Take Zone, and formed the Pokmaswas (community-based surveillance group). Prior to this behavior adoption, they had no fisheries data and no fishers' group, and they were still fishing inside the No Take Zone. This is truly a new behavior adoption for them. This paper will show the process and success story of the social marketing campaign to conserve marine habitat in Anambas through the Pride-TURF-R program.
Keywords: behavior adoption, community participation, no take zone, pride-TURF-R
Procedia PDF Downloads 276
19378 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface small targets such as landmines and unexploded ordnance, and also for through-the-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. Therefore, the hyperbolic curve behavior in the space-time GPR image often needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and the reflectivity of an underground object, so the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm has limited robustness against strong noise and numerous artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed for reducing noise and suppressing artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaged region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to both simulated and real GPR data, and the results showed that it has superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
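A minimal delay-and-sum back-projection sketch for a monostatic B-scan is given below; the simple coherence-based weight stands in for the paper's cross-correlation weighting, which is not fully specified in the abstract, and the geometry and wave speed are assumed values.

```python
import numpy as np

def backproject(bscan, antenna_x, dt, velocity, x_grid, z_grid):
    """Delay-and-sum back-projection of a monostatic GPR B-scan.

    bscan:     (n_traces, n_samples) received signals
    antenna_x: (n_traces,) antenna positions along the scan line (m)
    dt:        sample interval (s); velocity: medium wave speed (m/s)
    """
    n_traces, n_samples = bscan.shape
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # two-way travel time from each antenna position to the pixel
            delay = 2.0 * np.hypot(antenna_x - x, z) / velocity
            idx = np.clip(np.round(delay / dt).astype(int), 0, n_samples - 1)
            samples = bscan[np.arange(n_traces), idx]
            # coherence-style weight: consistent samples are boosted,
            # incoherent noise is suppressed (a simplified stand-in)
            weight = abs(samples.mean()) / (samples.std() + 1e-12)
            image[iz, ix] = weight * samples.sum()
    return image

# Tiny synthetic example: single point target under a 21-trace scan.
x_ant = np.linspace(0.0, 1.0, 21)
dt, v = 1e-10, 1e8
t = np.arange(512) * dt
bscan = np.array([np.exp(-((t - 2 * np.hypot(xa - 0.5, 0.4) / v) / 2e-10) ** 2)
                  for xa in x_ant])
img = backproject(bscan, x_ant, dt, v, np.linspace(0, 1, 41), np.linspace(0.1, 0.8, 36))
print("brightest pixel (row, col):", np.unravel_index(img.argmax(), img.shape))
```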
Procedia PDF Downloads 456
19377 How Digital Empowerment Affects Dissolution of Segmentation Effect and Construction of Opinion Leaders in Isolated Communities: Ethnographic Investigation of Leprosy Rehabilitation Groups
Authors: Lin Zhang
Abstract:
The fear of leprosy has been longstanding throughout human history. In an era when isolation was practiced as a means of epidemic prevention, the leprosy rehabilitation group itself became an isolated community with an entrenched metaphor. In the process of the new mediatization of the leprosy isolation community, what are the relations among media literacy, leprosy internalized stigma and social support? To address this question, the "portrait" of the leprosy rehabilitation group is re-delineated through two field studies in the "post-leprosy age", conducted in 2012 and 2020, respectively. Taking an isolation community on Si'an Leprosy Island in Dongguan City, Guangdong Province, China as the study object, it is found that new media promotes the dissolution of the segregation effect of the leprosy isolation community and the cultivation of opinion leaders by breaking spatial, psychological and social segregation and by building a community of village affairs and a public space, in the following way: cured patients with high new media literacy, especially those who use WeChat and other applications and rely largely on new media for information, have a low level of leprosy internalized stigma and a high level of social support, and they are often the opinion leaders inside their community; on the contrary, cured patients with low new media literacy, a high level of leprosy internalized stigma and a low level of social support are often the followers inside their community. Such effects of dissolution and construction are reflected not only in the vertical differentiation of the same individual at different times, but also in the horizontal differentiation between different individuals at the same time.
Keywords: segregation, the leprosy rehabilitation group, new mediatization, digital empowerment, opinion leaders
Procedia PDF Downloads 180
19376 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent
Authors: Faidon Kyriakou, William Dempster, David Nash
Abstract:
Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
Keywords: AAA, efficiency, finite element analysis, stent deployment
Procedia PDF Downloads 197
19375 Integrated Intensity and Spatial Enhancement Technique for Color Images
Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela
Abstract:
Video imagery captured for real-time security and surveillance applications is typically acquired under complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often also too low in resolution for certain applications. The purpose of security and surveillance video is to allow accurate conclusions to be drawn from the images seen in the video; therefore, if poor lighting and low-resolution conditions occur in the captured video, the ability to make accurate conclusions from the received information is reduced. We propose a solution to this problem by using image preprocessing to improve these images before they are used in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super-resolution technique. The intensity enhancement portion consists of a nonlinear inverse sign transformation and an adaptive contrast enhancement. The super-resolution portion is a single-image super-resolution technique based on Fourier phase features, which uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high-quality output while also being more efficient than the sequential use of the individual algorithms. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super-resolution, a color restoration technique is employed to obtain an improved-visibility color image.
Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution
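Since the abstract does not give the exact transfer functions, the sketch below only illustrates the general shape of the pipeline: enhance the intensity channel with a generic adaptive nonlinear curve, then restore color by channel ratio. It is not the paper's nonlinear inverse sign transformation or its kernel-regression super-resolution stage.

```python
import numpy as np

def adaptive_intensity_enhance(rgb):
    """Enhance a float RGB image in [0, 1]: apply an adaptive gamma curve to the
    intensity channel, then restore colour by rescaling the original channels.
    This is a generic stand-in, not the paper's specific transfer function."""
    intensity = rgb.mean(axis=2)
    # darker frames get a stronger lift (gamma < 1); bright frames are left alone
    gamma = np.clip(0.4 + intensity.mean(), 0.4, 1.0)
    enhanced = np.power(intensity, gamma)
    ratio = enhanced / np.maximum(intensity, 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

# Synthetic underexposed frame for illustration.
rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 0.25, size=(120, 160, 3))
out = adaptive_intensity_enhance(frame)
print(f"mean intensity before: {frame.mean():.3f}, after: {out.mean():.3f}")
```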
Procedia PDF Downloads 558