Search results for: fast Fourier transform
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3342

402 The Characterization and Optimization of Bio-Graphene Derived From Oil Palm Shell Through Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrodes Materials in Fast Charging Supercapacitor Application

Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar

Abstract:

This research intends to address the existing knowledge gap arising from the lack of substantial studies on fabricating and characterizing bio-graphene from Oil Palm Shell (OPS) via pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be produced for graphene-based research. The produced bio-graphene is expected to exhibit the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron(III) chloride (FeCl₃), and then converted to bio-graphene thermally by slow pyrolysis. The pyrolyzer's final temperature, heating rate, and residence time will be set at 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo several analyses to investigate the physicochemical properties of the two-dimensional layer of sp²-hybridized carbon atoms in a hexagonal lattice structure. The analyses are Raman spectroscopy, UV-visible spectroscopy (UV-Vis), Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and X-Ray Diffraction (XRD). Raman spectroscopy will be used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which indicate the quality of the bio-graphene structure and the number of layers generated. To corroborate the layer estimate, UV-Vis may be used to confirm the Raman layer analysis and also to characterize the type of graphene produced. A clear physical image of the graphene can be obtained by TEM, to study structural quality and layer condition, and by SEM, to study surface quality and the repeating porosity pattern. Lastly, the crystallinity of the produced bio-graphene, as well as the degree of oxygen contamination and thus the pristineness of the graphene, can be established by XRD. In conclusion, this study aims to obtain bio-graphene from OPS as a novel material, via pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and to provide a characterization analysis of bio-graphene that will be beneficial for future graphene-related applications. The characterization should yield findings similar to those of previous papers, confirming graphene quality.
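The Raman diagnostics described above boil down to simple intensity ratios. A minimal sketch of that interpretation step follows; the peak intensities and the rule-of-thumb thresholds (I2D/IG ≳ 2 for a monolayer, ID/IG < 0.1 for low defect density) are illustrative assumptions, not values from this study:

```python
# Hedged sketch: reading Raman peak intensities for a graphene-like sample.
# All numeric inputs and thresholds are hypothetical, for illustration only.

def raman_quality(i_d: float, i_g: float, i_2d: float) -> dict:
    """Return common Raman ratio diagnostics for graphene.

    I_D/I_G rises with defect density; I_2D/I_G near or above ~2
    is often read as a sign of monolayer graphene.
    """
    d_over_g = i_d / i_g
    two_d_over_g = i_2d / i_g
    return {
        "ID/IG": d_over_g,
        "I2D/IG": two_d_over_g,
        "likely_monolayer": two_d_over_g >= 2.0,
        "low_defect": d_over_g < 0.1,
    }

if __name__ == "__main__":
    # Hypothetical intensities for a high-quality monolayer-like spectrum.
    print(raman_quality(i_d=0.05, i_g=1.0, i_2d=2.1))
```

In practice the intensities would come from fitting Lorentzian peaks to the measured spectrum near ~1350, ~1580, and ~2700 cm⁻¹.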

Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis

401 A Theoretical Approach of Tesla Pump

Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu

Abstract:

This paper aims to study Tesla pumps for circulating biofluids; the goal is a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very low friction forces; low production cost; increased adaptability to different types of fluids; very low cavitation (close to zero); low shock loading due to the lack of blades; rare maintenance due to the low cavitation; very small turbulence in the fluid; few changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low powers; fast acceleration; a low required torque; and no blade shocks at sudden starts and stops. All these qualities are necessary for a small pump that could be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, characteristics such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The surgical operation should be performed once, without having to replace the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, since this is for a medical procedure, low cost is important so that the pump is easily accessible to the population. The lack of turbulence or vortices caused by a classic rotor is again a key element, because when it comes to blood circulation the flow must be laminar and not turbulent; turbulent flow can even cause a heart attack. For these reasons, Tesla's model could be ideal for this work. Usually, the pump is considered to reach an efficiency of 40% when used at very high powers. However, the inventor of this type of pump claimed that the maximum efficiency the pump can achieve is 98%. The factor that could help approach this efficiency is that the pump will be used for low volumes and pressures. The key design parameters for obtaining the best efficiency with this model are the number of discs placed in parallel and the distance between them. The distance between the discs must be small, which also helps to keep the pump as small as possible. The principle of operation of such a rotor is to stack several parallel discs with central openings. The space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. A very important parameter is the viscosity of the liquid: it dictates the distance between the discs needed to transfer power with minimal losses.
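The abstract notes that viscosity dictates the disc spacing but gives no formula. A common design heuristic, not stated in the paper and used here only as an assumption, ties the optimal gap to the laminar boundary-layer scale δ ≈ √(ν/ω); the blood viscosity and rotor speed below are likewise illustrative:

```python
import math

def optimal_gap(kinematic_viscosity_m2s: float, omega_rad_s: float) -> float:
    """Boundary-layer-thickness heuristic for Tesla-rotor disc spacing.

    A common rule of thumb sets the gap to a few multiples of
    sqrt(nu/omega); here the bare boundary-layer scale is returned.
    """
    return math.sqrt(kinematic_viscosity_m2s / omega_rad_s)

if __name__ == "__main__":
    # Whole blood: nu ~ 3.3e-6 m^2/s (illustrative); rotor at 3000 rpm.
    omega = 3000 * 2 * math.pi / 60
    gap = optimal_gap(3.3e-6, omega)
    print(f"suggested gap on the order of {gap * 1e6:.0f} micrometres")
```

The inverse dependence on rotor speed shows why a faster, smaller rotor also wants tighter disc spacing.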

Keywords: lubrication, temperature, tesla-pump, viscosity

400 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity

Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta

Abstract:

During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures, and systems. Considering the remoteness and time spans over which these vehicles will operate, EVA system operations should utilize common worksites, tools, and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of the preparations need to be carried out based on studies of astronaut motions. Until now, development and training activities associated with the planned EVAs in the Russian and U.S. space programs have relied almost exclusively on physical simulators. These experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations, due to fast developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA was initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into worksites or removing them from worksites. Our physics-based methodology helps fill the gap in quantitative analysis of astronaut EVA by providing a multisegment human arm model. The simulation work described in the study improves on the realism of previous efforts by incorporating joint stops to account for the physiological limits of the range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut's task are studied from joint angles and torques. The simulation results obtained are validated against a numerical simulation based on the principles of the Newton-Euler method. Torques determined using the mathematical model are compared among the subjects to assess the smoothness and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
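The paper's Newton-Euler validation covers the full dynamic case; a complete recursive implementation is beyond a short sketch, but the simplest special case (the static joint torques needed to hold a planar two-link arm against gravity) illustrates the kind of check involved. Link masses and lengths below are illustrative assumptions, not anthropometric data from the study:

```python
import math

def static_gravity_torques(q1, q2, m1, m2, l1, lc1, lc2, g=9.81):
    """Static joint torques holding a planar two-link arm against gravity.

    Angles are measured from the horizontal; lc1/lc2 are the distances
    from each joint to its link's centre of mass. tau2 supports the
    forearm only; tau1 supports both links.
    """
    tau2 = m2 * g * lc2 * math.cos(q1 + q2)
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(q1) + tau2
    return tau1, tau2

if __name__ == "__main__":
    # Illustrative values: 2 kg upper arm, 1.5 kg forearm+hand.
    tau1, tau2 = static_gravity_torques(
        q1=math.radians(30), q2=math.radians(45),
        m1=2.0, m2=1.5, l1=0.3, lc1=0.15, lc2=0.17)
    print(f"shoulder torque {tau1:.2f} N*m, elbow torque {tau2:.2f} N*m")
```

A sanity check of the form used in such validations: with the arm hanging vertically (q1 = 90°, q2 = 0°) both torques vanish, as they must.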

Keywords: extra vehicular activity, biomechanics, inverse kinematics, human body modeling

399 Research on Innovation Service based on Science and Technology Resources in Beijing-Tianjin-Hebei

Authors: Runlian Miao, Wei Xie, Hong Zhang

Abstract:

In China, Beijing-Tianjin-Hebei is regarded as a strategically important region because it enjoys the highest levels of economic development, opening up, innovative capacity, and population. Integrated development of the Beijing-Tianjin-Hebei region has been increasingly emphasized by the government in recent years. In 2014, it became one of the national grand development strategies of the Chinese central government. In 2015, the Coordinated Development Planning Compendium for the Beijing-Tianjin-Hebei Region was approved. Such decisions signify that the Beijing-Tianjin-Hebei region will lead innovation-driven economic development in China. As an essential factor in achieving national innovation-driven development and a significant part of the regional industry chain, the optimization of science and technology resource allocation will exert great influence on regional economic transformation, upgrading, and innovation-driven development. However, unbalanced distribution, poor sharing of resources, and the existence of isolated information islands have led to differences in innovation capability, vitality, and efficiency within the region, which have impeded the innovation and growth of the whole region. Against this background, integrating and vitalizing regional science and technology resources, and then establishing a high-end, fast-responding, and precise innovation service system based on regional resources, would be of great significance for the integrated development of the Beijing-Tianjin-Hebei region, and even for addressing the problem of unbalanced and insufficient development in China. This research uses literature review and field investigation, and applies related theories prevailing at home and abroad, centering on the service path of science and technology resources for innovation. Based on the status quo and problems of the regional development of Beijing-Tianjin-Hebei, the authors propose, theoretically, to combine regional economics and new economic geography to explore a solution to the problem of low resource allocation efficiency. Further, the authors propose applying digital maps to resource management and building a platform for information co-building and sharing. Finally, the authors present the idea of establishing a specific service mode of 'science and technology + digital map + intelligence research + platform service' and suggestions on a co-building and sharing mechanism of '3 (Beijing, Tianjin, and Hebei) + 11 (major cities in Hebei Province)'.

Keywords: Beijing-Tianjin-Hebei, science and technology resources, innovation service, digital platform

398 The Processing of Implicit Stereotypes in Contexts of Reading, Using Eye-Tracking and Self-Paced Reading Tasks

Authors: Magali Mari, Misha Muller

Abstract:

The present study’s objectives were to determine how diverse implicit stereotypes affect the processing of written information and linguistic inferential processes, such as presupposition accommodation. When reading a text, one constructs a representation of the described situation, which is then updated according to new input and based on stereotypes inscribed within society. If the new input contradicts stereotypical expectations, the representation must be corrected, resulting in longer reading times. A similar process occurs in cases of linguistic inferential processes like presupposition accommodation. Presupposition accommodation is traditionally regarded as fast, automatic processing of background information (e.g., ‘Mary stopped eating meat’ is quickly processed as implying that Mary used to eat meat). However, very few accounts have investigated whether this process is influenced by domains of social cognition, such as implicit stereotypes. To study the effects of implicit stereotypes on presupposition accommodation, adults were recorded while they read sentences in French, combining two methods: an eye-tracking task and a classic self-paced reading task (where participants read sentence segments at their own pace by pressing a computer key). In one condition, presuppositions were activated with the French definite articles ‘le/la/les,’ whereas in the other condition the French indefinite articles ‘un/une/des’ were used, triggering no presupposition. Using a definite article presupposes that the object has already been mentioned and is thus part of the background information, whereas using an indefinite article is understood as the introduction of new information. Two types of stereotypes were examined in order to enlarge the scope of stereotypes traditionally analyzed. Study 1 investigated gender stereotypes linked to professional occupations to replicate previous findings. Study 2 focused on nationality-related stereotypes (e.g., ‘the French are seducers’ versus ‘the Japanese are seducers’) to determine whether the effects of implicit stereotypes on reading generalize to other types of implicit stereotypes. The results show that reading is influenced by both types of implicit stereotypes; in the two studies, the reading pace slowed down when a counter-stereotype was presented. However, presupposition accommodation did not affect participants’ processing of the information. Altogether, these results show (a) that implicit stereotypes affect the processing of written information, regardless of the type of stereotype presented, and (b) that implicit stereotypes prevail over the superficial linguistic treatment of presuppositions, which suggests faster processing of social information compared to linguistic information.

Keywords: eye-tracking, implicit stereotypes, reading, social cognition

397 New Teaching Tools for a Modern Representation of Chemical Bond in the Course of Food Science

Authors: Nicola G. G. Cecca

Abstract:

In Italian IPSSEOAs, high schools that give a vocational education to students who will work in the field of enogastronomy and hotel management, the course of Food Science allows students to begin to see food as a mixture of substances that they will transform during their profession. These substances are characterized not only by a chemical composition but also by a molecular structure that makes them nutritionally active. However, the increasing number of new products proposed by the food industry, the modern techniques of production and transformation, and the innovative preparations required by customers have made much of the information reported in the most widespread Food Science textbooks out of date, or too poor for people who will work in the catering sector. Authors often present information dating back to Bohr’s atomic model and to the ‘octet rule’ proposed by G. N. Lewis to describe the chemical bond, without any reference to newer models such as the atomic orbital model and molecular orbital theory which, in the meantime, have started to age themselves. Furthermore, this antiquated information precludes an easy understanding of a wide range of properties of nutritive substances and of many reactions in which food constituents are involved. In this paper, our attention is directed to using GEOMAG™ to represent the dynamics with which the chemical bond is formed during the synthesis of molecules. GEOMAG™ is a toy, produced by the Swiss company Geomagword S.A., intended to stimulate the imagination and manual dexterity of children aged 6-10 years; it consists of metallic spheres and magnetic bars coated with coloured plastic. The simulation carried out with GEOMAG™ is based on the similarity between the Coulomb force and the magnetic attraction force, and in particular between the formulae with which they are calculated. The electrostatic force Fc [in newtons] that allows the formation of the chemical bond can be calculated by means of Fc = kc·q1·q2/d², where q1 and q2 are the charges of the particles [in coulombs], d is the distance between the particles [in metres], and kc is the Coulomb constant. It is striking that the attraction force Fm acting between the magnetic ends of the GEOMAG™ pieces used to simulate the chemical bond can be calculated in the same way, by the formula Fm = km·m1·m2/d², where m1 and m2 represent the strengths of the poles [A·m], d is the distance between the poles [m], and km = μ/4π, in which μ is the magnetic permeability of the medium [N·A⁻²]. The magnetic attraction can be tested by students by trying to keep the magnetic elements of GEOMAG™ apart by hand, or it can be measured with an appropriate dynamometric system. Furthermore, by using a dynamometric system to measure the magnetic attraction between the GEOMAG™ elements, it is possible to draw a graph F = f(d) and verify that the curve obtained in the simulation is very similar to the one hypothesized around the 1920s by Linus Pauling to describe the formation of H₂⁺ in accordance with molecular orbital theory.
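The analogy the abstract draws rests on both forces sharing the same inverse-square form. A minimal sketch of that shared form follows; the charge and pole-strength values are illustrative, not classroom measurements:

```python
# Hedged sketch of the GEOMAG(TM) analogy: Coulomb's law and the
# magnetic-pole law share the form F = k * s1 * s2 / d^2, so the
# magnetic pull can stand in for the bonding force in the classroom.
# All numeric values below are illustrative.

def inverse_square_force(k: float, s1: float, s2: float, d: float) -> float:
    """F = k * s1 * s2 / d^2 — the shared form of Coulomb's law
    (k = k_c, s = charges in C) and the magnetic-pole law
    (k = mu/4pi, s = pole strengths in A*m)."""
    return k * s1 * s2 / d ** 2

if __name__ == "__main__":
    K_COULOMB = 8.9875e9      # N*m^2/C^2
    MU0_OVER_4PI = 1e-7       # N/A^2 (vacuum)
    # Doubling the separation quarters either force:
    f_near = inverse_square_force(K_COULOMB, 1e-9, 1e-9, 0.01)
    f_far = inverse_square_force(K_COULOMB, 1e-9, 1e-9, 0.02)
    print(f_near / f_far)  # ratio close to 4
```

Plotting this function against d reproduces the F = f(d) curve the students obtain with the dynamometric system.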

Keywords: chemical bond, molecular orbital theory, magnetic attraction force, GEOMAG™

396 Pooled Analysis of Three School-Based Obesity Interventions in a Metropolitan Area of Brazil

Authors: Rosely Sichieri, Bruna K. Hassan, Michele Sgambato, Barbara S. N. Souza, Rosangela A. Pereira, Edna M. Yokoo, Diana B. Cunha

Abstract:

Obesity is increasing at a fast rate in low- and middle-income countries, where few school-based obesity interventions have been conducted. Results of obesity prevention studies are still inconclusive, mainly due to underestimation of sample size in cluster-randomized trials and overestimation of changes in body mass index (BMI). The pooled analysis in the present study overcomes these design problems by analyzing 4,448 students (mean age 11.7 years) from three randomized behavioral school-based interventions conducted in public schools of the metropolitan area of Rio de Janeiro, Brazil. The three studies focused on encouraging students to change their drinking and eating habits over one school year, with monthly 1-h sessions in the classroom. Folders explaining the intervention program and suggesting the participation of the family, for instance by reducing the purchase of sodas, were sent home. Classroom activities were delivered by research assistants in the first two interventions and by the regular teachers in the third one, except for the culinary classes aimed at developing cooking skills to increase healthy eating choices. The first intervention was conducted in 2005 with 1,140 fourth graders from 22 public schools; the second, with 644 fifth graders from 20 public schools in 2010; and the last one, with 2,743 fifth and sixth graders from 18 public schools in 2016. The result was a non-significant change in BMI after one school year, despite positive changes in dietary behaviors associated with obesity. Pooled intention-to-treat analysis using linear mixed models was used for the overall and subgroup analyses by BMI status, sex, and race. The estimated mean BMI changes were from 18.93 to 19.22 in the control group and from 18.89 to 19.19 in the intervention group, with a p-value for change over time of 0.94. Control and intervention groups were balanced at baseline. Subgroup analyses were statistically and clinically non-significant, except for the non-overweight/obese group, with a 0.05 kg/m² greater reduction in BMI in the intervention group compared with control. In conclusion, this large pooled analysis showed a very small effect on BMI, and only in the normal-weight students. The results are in line with many school-based initiatives that have been promising in modifying behaviors associated with obesity but have had no impact on excessive weight gain. Changes in BMI may require large changes in energy balance that are hard to achieve in primary prevention at the school level.
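The headline null result can be seen directly in the reported group means. The sketch below computes the one-year change in each arm and the difference between arms from the figures in the abstract; the paper itself fits linear mixed models on individual-level data, which this arithmetic only approximates:

```python
# Hedged sketch: the intention-to-treat contrast implied by the
# reported group-mean BMIs. Inputs are the means from the abstract;
# the actual analysis used linear mixed models on individual data.

def change(before: float, after: float) -> float:
    """One-year change in mean BMI (kg/m^2)."""
    return after - before

control_change = change(18.93, 19.22)       # about +0.29
intervention_change = change(18.89, 19.19)  # about +0.30
diff_in_change = intervention_change - control_change

print(f"control {control_change:+.2f}, intervention {intervention_change:+.2f}, "
      f"difference {diff_in_change:+.2f}")
```

The near-zero difference in change between arms is the quantitative core of the paper's conclusion.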

Keywords: adolescents, obesity prevention, randomized controlled trials, school-based study

395 Friction and Wear Characteristics of Diamond Nanoparticles Mixed with Copper Oxide in Poly Alpha Olefin

Authors: Ankush Raina, Ankush Anand

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed; it is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower- and upper-body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteer athletes were apparently healthy, physically active, and free of any lower- and upper-extremity bone injuries for the past year, and they had no medical or orthopedic conditions that might affect their participation. Ten subjects were purposively assigned to each of three groups: lower-body plyometric training (LBPT), upper-body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower-body (ankle hops, squat jumps, tuck jumps) and upper-body (push-ups, medicine-ball chest throws and side throws), performed at moderate intensity. The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while the paired-samples t-test was used to test the hypotheses. The results revealed that athletes trained using LBPT showed greater reductions in ECG parameters than those in the control group. The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences from the control group in ECG parameters following ten weeks of plyometric training, except in the Q wave, R wave, and S wave (QRS) complex. Based on the findings, it was recommended, among other things, that coaches include both LBPT and UBPT as part of athletes' overall training programmes, from primary to tertiary institutions, to optimise performance as well as to reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: boundary lubrication, copper oxide, friction, nano diamond

394 The Effective Use of the Network in the Distributed Storage

Authors: Mamouni Mohammed Dhiya Eddine

Abstract:

This work aims at studying the exploitation of high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has been essentially developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we are addressing is: do high-speed networks of clusters fit the requirements of a transparent, efficient and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed networks of clusters were designed to optimize communications between different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study based on the usage of the GM programming interface of MYRINET high-speed networks for distributed storage raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, client-server models that are used for distributed storage have specific requirements on message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required. 
Data transfer issues need an adaptation of the operating system. We detail several propositions for network programming interfaces which make their utilization easier in the context of distributed storage. The integration of a flexible processing of data transfer in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.
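One of the memory-model mismatches the abstract alludes to is the cost of pinning and unpinning buffers for network DMA on Myrinet-class NICs. A standard workaround is a registration cache that pins a buffer on first use and reuses the handle afterwards. The toy model below simulates that idea; the "registration" is a placeholder, whereas a real implementation would call the NIC driver (e.g. GM or MX) to pin pages:

```python
# Toy model of a memory-registration cache with LRU eviction.
# The pin/unpin operations are simulated; this is a sketch of the
# technique, not the GM/MX API itself.

class RegistrationCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._cache = {}          # buffer id -> registration handle
        self._order = []          # LRU order, oldest first
        self.registrations = 0    # how many (simulated) pin calls were made

    def register(self, buf_id: int) -> str:
        """Return a registration handle, pinning only on a cache miss."""
        if buf_id in self._cache:
            self._order.remove(buf_id)       # refresh LRU position
            self._order.append(buf_id)
            return self._cache[buf_id]
        if len(self._cache) >= self.capacity:
            victim = self._order.pop(0)      # evict and (simulated) unpin
            del self._cache[victim]
        self.registrations += 1              # simulated expensive pin call
        handle = f"reg-{buf_id}"
        self._cache[buf_id] = handle
        self._order.append(buf_id)
        return handle

if __name__ == "__main__":
    cache = RegistrationCache(capacity=2)
    for buf in [1, 2, 1, 1, 2, 3, 1]:        # reuse-heavy access pattern
        cache.register(buf)
    print(f"7 accesses, {cache.registrations} pin operations")
```

For storage workloads, whose buffer reuse patterns differ from those of parallel computation, the hit rate of such a cache is precisely what the memory-model mismatch puts at risk.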

Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface

393 Features of Formation and Development of Possessory Risk Management Systems of Organization in the Russian Economy

Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Maria Nikishova

Abstract:

The study investigates the impact of the ongoing financial crisis, which started in the second half of 2014, on the marketing budgets of fast-moving consumer goods (FMCG) companies. In these conditions, special importance is given to efficient possessory risk management systems. The main objective in establishing and developing possessory risk management systems for FMCG companies in a crisis is to analyze data on the external environment and consumer behavior during the crisis. Another important objective of possessory risk management systems in FMCG companies is to develop measures and mechanisms to maintain and stimulate sales. In this regard, analysis of the risks and threats that consumers identify as the main factors affecting their level of consumption becomes important. It is obvious that, under crisis conditions, effective risk management systems responsible for developing and implementing strategies to stimulate consumer demand, as well as for the identification, analysis, assessment, and management of other risks to economic security, will be key to the sustainability of a company. In times of financial and economic crisis, the problem of forming and developing possessory risk management systems becomes critical not only in the context of the management models of FMCG companies, but for all companies operating in other sectors of the Russian economy. This study attempts to analyze the specifics of the formation and development of company possessory risk management systems. In the modern economy, the risk of declining consumer activity is of special importance among all types of owners' risks, and it is not confined to the consumer goods trade. The study of declining consumer activity is especially important for Russia, because the domestic consumer goods market is still at the development stage despite its significant growth. In this regard, it is especially important to form and develop possessory risk management systems for FMCG companies. The authors offer their own interpretation of the process of forming and developing possessory risk management systems within the owners' management models of FMCG companies, as well as in the Russian economy in general. The proposed methods and mechanisms for analyzing the formation and development of possessory risk management systems in FMCG companies, and the results obtained, can be helpful for researchers interested in the development of the consumer goods market in Russia and overseas.

Keywords: FMCG companies, marketing budget, risk management, owner, Russian economy, organization, formation, development, system

392 Phytochemical Screening, Proximate Analysis, Lethality Studies and Anti-Tumor Potential of Annona muricata L. (Soursop) Fruit Extract in Rattus novergicus

Authors: O. C. Abbah, O. Obidoa, J. Omale

Abstract:

Prostate tumors are fast becoming a leading cause of morbidity and mortality in adult men, with 50 percent of men aged 50 years and above having histological evidence of benign tumors. This study set out to undertake phytochemical screening and proximate analysis of the pulp of the A. muricata fruit (soursop), and to determine the acute toxicity of the fruit pulp extract and its effect on male albino Wistar rats with concurrent induction of experimental benign prostatic hyperplasia (BPH). Eighteen rats (average weight 100 g) were used for the lethality studies and were orally administered graded doses of aqueous extract of the fruit pulp up to 5000 mg/kg body weight. Twenty-five rats weighing 150-200 g were divided into five groups of five rats each for the tumor studies. The groups included four controls – hormone control, HC, which received testosterone, T, and estradiol, E2, only, in olive oil as vehicle; vehicle control, VC; soursop control, SC, which received the extract only; VS, vehicle and soursop – and the test group, TG (500 mg/kg b.w.). All rats were dosed orally. Tumors were induced with exogenous testosterone propionate and estradiol valerate at 300 µg and 80 µg/kg b.w., respectively, in olive oil, administered subcutaneously in the inguinal region of the rats on alternate days for 21 days. Administration of the fruit pulp at graded doses up to 5000 mg/kg resulted in no lethality, even after 72 hours. Results from the tumor studies revealed that administration of the fruit extract significantly (p < 0.05) reduced the relative prostate weight of TG compared with HC, with values of 0.006±0.001 and 0.010±0.003, respectively. Treatment with vehicle, soursop, and vehicle with soursop caused no significant (p > 0.05) change in prostate size, with respective relative prostate weights of 0.002±0.001, 0.004±0.002, and 0.002±0.001 compared with TG. Also, treatment with A. muricata fruit extract significantly decreased (p < 0.05) serum prostate-specific antigen, PSA, in TG compared with HC, with values of 0.055±0.017 and 0.194±0.068 ng/ml, respectively. Furthermore, A. muricata administration displayed testosterone-boosting and estradiol-lowering potential, consequently increasing the testosterone:estradiol ratio at the end of the 21 days. The preventive property of soursop against experimental BPH was corroborated by histological evidence in this study. The study concludes that the A. muricata fruit holds great potential for the prevention and, possibly, the management of benign prostate tumors.

Keywords: annona muricata, benign prostate tumor, hormone, preventive potential, soursop

391 Effects of Evening vs. Morning Training on Motor Skill Consolidation in Morning-Oriented Elderly

Authors: Maria Korman, Carmit Gal, Ella Gabitov, Avi Karni

Abstract:

The main question addressed in this study was whether the time of day at which training is afforded is a significant factor for motor skill ('how-to', procedural knowledge) acquisition and consolidation into long-term memory in the healthy elderly population. Twenty-nine older adults (60-75 years) practiced an explicitly instructed 5-element key-press sequence by repeatedly generating the sequence ‘as fast and accurately as possible’. The contribution of three parameters to acquisition, 24h post-training consolidation, and 1-week retention gains in motor sequence speed was assessed: (a) time of training (morning vs. evening group), (b) sleep quality (actigraphy), and (c) chronotype. All study participants were moderately morning-type, according to the Morningness-Eveningness Questionnaire score. All participants had sleep patterns typical of their age, with an average sleep efficiency of ~82% and approximately 6 hours of sleep. Speed of motor sequence performance in both groups improved to a similar extent during the training session. Nevertheless, the evening group expressed small but significant overnight consolidation-phase gains, while the morning group showed only maintenance of the performance level attained at the end of training. By the 1-week retention test, both groups showed similar performance levels with no significant gains or losses with respect to the 24h test. Changes in the tapping patterns at 24h and 1-week post-training were assessed based on normalized Pearson correlation coefficients using the Fisher’s z-transformation, in reference to the tapping pattern attained at the end of the training. Significant differences between the groups were found: the evening group showed larger changes in tapping patterns across the consolidation and retention windows. Our results show that morning-oriented older adults effectively acquired, consolidated, and maintained a new sequence of finger movements following both morning and evening practice sessions. 
However, time-of-training affected the time course of skill evolution in terms of performance speed, as well as the re-organization of tapping patterns during the consolidation period. These results are in line with the notion that motor training preceding a sleep interval may be beneficial for long-term memory in the elderly. Evening training should be considered an appropriate time window for motor skill learning in older adults, even in individuals with a morning chronotype.
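The pattern-change measure described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the study's code) of Fisher's z-transformation applied to Pearson correlation coefficients; the transform makes correlations approximately normally distributed so that differences between time points can be compared on a common scale:

```python
import math

def fisher_z(r):
    """Fisher's z-transformation of a Pearson correlation coefficient.

    Maps r in (-1, 1) to the real line, where the sampling distribution
    is approximately normal and correlations can be averaged or compared.
    """
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a z value to a correlation coefficient."""
    return math.tanh(z)

# Hypothetical values: a participant's tapping-pattern correlation with
# the end-of-training reference at 24 h and at 1 week.
r_24h, r_week = 0.92, 0.78
z_diff = fisher_z(r_24h) - fisher_z(r_week)
print(round(z_diff, 3))  # larger |z_diff| -> larger pattern change
```

Note that `fisher_z(r)` is simply `atanh(r)`; group comparisons are then made on the z values rather than on the raw correlations.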

Keywords: time-of-day, elderly, motor learning, memory consolidation, chronotype

Procedia PDF Downloads 112
390 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are amongst the detection methods and techniques that are easiest to use. A micro-analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro-analyzer based on the concept of “Lab on a Chip” presents advantages over non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by at least two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, where possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than critical have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. 
Computational simulations were carried out using a finite element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were obtained. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
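As an illustration of the stretching measure described above, the following minimal Python sketch tracks pairs of initially close massless tracers and reports the ratio of final to initial separation. The cellular velocity field used here is a stand-in assumption for illustration only, not the electrokinetic field simulated in the study:

```python
import math

def advect(pos, velocity, dt, steps):
    """Advance a massless tracer with explicit Euler time steps."""
    x, y = pos
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + dt * u, y + dt * v
    return x, y

def stretching(p0, velocity, dt=1e-3, steps=1000, eps=1e-6):
    """Finite-time stretching of a material line element: the ratio of
    final to initial separation of two initially nearby tracers."""
    a = advect(p0, velocity, dt, steps)
    b = advect((p0[0] + eps, p0[1]), velocity, dt, steps)
    return math.hypot(a[0] - b[0], a[1] - b[1]) / eps

# Illustrative stand-in field: a steady incompressible cellular flow,
# which already exhibits strong stretching near its hyperbolic points.
def cellular(x, y):
    return (-math.sin(math.pi * x) * math.cos(math.pi * y),
            math.cos(math.pi * x) * math.sin(math.pi * y))

print(stretching((0.3, 0.2), cellular))
```

Averaging this quantity over many seeded particle pairs gives a scalar measure of mixing enhancement comparable across stable and unstable flows.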

Keywords: microchannel, stretching and folding, electro kinetic flow mixing, micro-analyzer

Procedia PDF Downloads 102
389 Further Development of Offshore Floating Solar and Its Design Requirements

Authors: Madjid Karimirad

Abstract:

Floating solar was not very well-known in the renewable energy field a decade ago; however, there has been tremendous growth internationally, with a Compound Annual Growth Rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emissions by 2050, all renewable energy sources, including solar, should be used. Considering that 40% of the world’s population lives within 100 kilometres of the coasts, floating solar in coastal waters is an obvious energy solution. However, this requires more robust floating solar solutions. This paper aims to clarify the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at the dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are alternative and very attractive green energy installations with (a) less strain on land usage for densely populated areas; (b) a natural cooling effect with an efficiency gain; and (c) increased irradiance from the reflectivity of water. Also, floating solar in conjunction with hydroelectric plants can optimise energy efficiency and improve system reliability. Co-locating floating solar units with other types such as offshore wind, wave energy, and tidal turbines, as well as aquaculture (fish farming), can result in better ocean space usage and increased synergies. Floating solar technology has seen considerable development in installed capacities in the past decade. Development of design standards and codes of practice for floating solar technologies deployed on both inland water bodies and offshore is required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. Floating solar is projected to account for 17% of all PV energy produced worldwide by 2030. 
To enhance the development, further research in this area is needed. This paper aims to discuss the main critical design aspects in light of the load and load effects that the floating solar platforms are subjected to. The key considerations in hydrodynamics, aerodynamics and simultaneous effects from the wind and wave load actions will be discussed. The link of dynamic nonlinear loading, limit states and design space considering the environmental conditions is set to enable a better understanding of the design requirements of fast-evolving floating solar technology.

Keywords: floating solar, offshore renewable energy, wind and wave loading, design space

Procedia PDF Downloads 44
388 Boko Haram Insurrection and Religious Revolt in Nigeria: An Impact Assessment-{2009-2015}

Authors: Edwin Dankano

Abstract:

Incessant and sporadic attacks on Nigerians pose a serious threat to the unity of Nigeria. The single biggest security nightmare to confront Nigeria since the amalgamation of the Southern and Northern protectorates by the British colonialists in 1914 is “Boko Haram”, a terrorist organization also known as “Jama’atul Ahli Sunnah Lidda’wati wal Jihad”, or “people committed to the propagation of the Prophet’s teachings and jihad”. The sect also upholds an ideology translated as “Western education is forbidden”, a rejection of Western civilization and institutions. By some estimates, more than 5,500 people were killed in Boko Haram attacks in 2014, and Boko Haram attacks claimed hundreds more lives and territories (caliphates) in early 2015. In total, the group may have killed more than 10,000 people since its emergence in the early 2000s. More than 1 million Nigerians have been displaced internally by the violence, and Nigerian refugee figures in neighboring countries continue to rise. This paper is predicated on secondary sources of data and anchored on Huntington’s theory of the clash of civilizations. As such, the paper argues that the rise of Boko Haram, with its violent disposition against Western values, is a counter-response to a Western civilization that is fast eclipsing other civilizations. The paper posits that the Boko Haram insurrection, going by its teachings and destruction of churches, validates its characterization as a religious revolt, one which has resulted in a dire humanitarian situation in Adamawa, Borno, Yobe, Bauchi, and Gombe states, all in north-eastern Nigeria, as evident in human casualties, human rights abuses, population displacement, the refugee debacle, livelihood crises, and public insecurity. 
The paper submits that the Nigerian state should muster the needed political will in terms of a viable anti-terrorism measures and build strong legitimate institutions that can adequately curb the menace of corruption that has engulfed the military hierarchy, respond proactively to the challenge of terrorism in Nigeria and should embrace a strategic paradigm shift from anti-terrorism to counter-terrorism as a strategy for containing the crisis that today threatens the secular status of Nigeria.

Keywords: Boko Haram, civilization, fundamentalism, Islam, religion revolt, terror

Procedia PDF Downloads 377
387 Mobile Learning and Student Engagement in English Language Teaching: The Case of First-Year Undergraduate Students at Ecole Normal Superieur, Algeria

Authors: I. Tiahi

Abstract:

The aim of the current paper is to explore educational practices in contemporary Algeria. Research suggests that such practices follow a traditional approach and overlook modern teaching methods such as mobile learning. Student engagement with respect to mobile learning was therefore examined through the following objectives: (1) to evaluate the current practice of English language teaching within Algerian higher education institutions, (2) to explore how social constructivism theory and m-learning help students’ engagement in the classroom, and (3) to explore the feasibility and acceptability of m-learning amongst institutional leaders. The methodology underpins a case study and action research. For the case study, the researcher engaged with 6 teachers, 4 institutional leaders, and 30 students, who took part in semi-structured interviews and classroom observations, to explore the current teaching methods for English as a foreign language. For the action research, the researcher applied an intervention course to investigate the possibility and implications of future implementation of mobile learning in higher education institutions. The results were analysed using thematic analysis. The research outcome showed that the disengagement of students in English language learning has many aspects. From the teacher interviews, the researcher found that teachers do not have enough resources, apart from PowerPoint slides used by some teachers. According to them, the teaching methods they use are mostly communicative and competency-based approaches. Teachers reported that students are disengaged because of psychological barriers. In the classroom setting, students are conscious of social approval from their peers; fear of committing mistakes thus acts as a preventive mechanism, since negative reinforcement would damage their image. This was also reflected in the findings. 
Many other arguments can be given for this claim; however, in the Algerian setting, it is common practice for teachers not to provide the positive reinforcement that would open students up to learning. Thus, in order to overcome such a psychological barrier, proper measures can be taken. On a conclusive note, it is evident that teachers, students, and institutional leaders provided positive feedback on using mobile learning. It is not only motivating but also engages students in the learning process. Apps such as Kahoot, Padlet and Slido were well received and can thus be taken further to examine their wider impact in the Algerian context. In the future, it will be important to implement m-learning effectively in higher education to transform the current traditional practices into modern, innovative and active learning. Persuading stakeholders of this change may be challenging; however, its long-term benefits are evident from the current research.

Keywords: Algerian context, mobile learning, social constructivism, student engagement

Procedia PDF Downloads 114
386 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era

Authors: Cagri Baris Kasap

Abstract:

In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing constitute production and are directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, even make this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within Web 2.0 services of multinational Internet technologies and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon’s CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global-market monopolies, it has become increasingly difficult to get insight into how one’s writing and collaboration is used, captured, and capitalized as a user of Facebook or Google. 
Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered, are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.

Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking

Procedia PDF Downloads 116
385 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman and Pauli X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
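For intuition about the circuit sizing described above, the following Python sketch (an illustration, not the authors' Q# code) computes the N log₂ K qubit count of the edge encoding, and the optimal number of Grover iterations, which scales as (π/4)√(N/M) for M marked states in a search space of size N:

```python
import math

def qubits_for_tsp(n_nodes, k_degree):
    """Qubit count for the encoding described above: each of the
    N nodes indexes one of its K candidate edges with log2(K) qubits."""
    return n_nodes * math.ceil(math.log2(k_degree))

def grover_iterations(num_solutions, search_space):
    """Near-optimal Grover iteration count, round(pi/4 * sqrt(N/M))."""
    return max(1, round(math.pi / 4 * math.sqrt(search_space / num_solutions)))

n, k = 8, 4                    # hypothetical 8-node, degree-4 instance
qubits = qubits_for_tsp(n, k)  # 8 nodes * 2 qubits each = 16 qubits
space = 2 ** qubits            # 65536 basis states in superposition
print(qubits, grover_iterations(1, space))
```

In the cost-threshold loop the abstract describes, M (the number of cycles cheaper than the current threshold) shrinks each round, so the iteration count grows as the search converges on the minimum-cost cycle.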

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 157
384 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. Both models were evaluated on an external dataset for validation, and their accuracy, precision, recall, F1-score, IoU, and loss were calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. 
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
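The evaluation metrics reported above can be computed as in this minimal sketch (the toy masks below are hypothetical, not the study's data), showing how IoU, precision, recall, and F1 are derived from binary predictions:

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks (flat 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def precision_recall_f1(pred, truth, positive=1):
    """Precision, recall and F1 for one class of a label vector."""
    tp = sum(1 for p, t in zip(pred, truth) if p == positive and t == positive)
    fp = sum(1 for p, t in zip(pred, truth) if p == positive and t != positive)
    fn = sum(1 for p, t in zip(pred, truth) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred  = [1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical predicted mask pixels
truth = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical ground-truth pixels
print(iou(pred, truth), precision_recall_f1(pred, truth))
```

For the multi-class classification task, the same per-class counts are computed with each of the three labels in turn taken as the positive class.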

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 31
383 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported when integrating the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, a popular means of improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding type-checking support in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which are used for populating the content within the HTML elements of a web component. Although JSON values can be of type string, number, boolean, object, array, or null, STML supports only string, number and boolean. This is because both objects and arrays are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number and boolean respectively. 
All these STML annotations are used by the developer writing a web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers who use STML-based web components. One utility performs automated type checking during the development phase: it uses the browser console to show an error description if the integrated web service does not return a response with the expected data type. The other is a Gulp-based command-line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended into a complete service testing suite based on HTML only, transforming STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
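A minimal sketch of the kind of check STML performs might look like the following Python fragment. The attribute names st-string, st-number and st-boolean come from the abstract; the validator itself is a hypothetical illustration (the actual STML checker runs in the browser against DOM attributes):

```python
import json

# STML annotation -> Python type expected in the JSON response.
STML_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def check_binding(stml_attr, json_value):
    """Return True if a JSON value matches an element's STML annotation."""
    # bool is a subclass of int in Python, so exclude it for st-number
    if stml_attr == "st-number" and isinstance(json_value, bool):
        return False
    return isinstance(json_value, STML_TYPES[stml_attr])

def type_check(bindings, response_text):
    """Check a JSON REST response against {field: stml_attr} bindings.
    Returns a list of error descriptions; an empty list means well-typed."""
    data = json.loads(response_text)
    errors = []
    for field, attr in bindings.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not check_binding(attr, data[field]):
            errors.append(f"{field}: expected {attr}, got {type(data[field]).__name__}")
    return errors

resp = '{"name": "widget", "price": 9.5, "in_stock": true}'
print(type_check({"name": "st-string", "price": "st-number",
                  "in_stock": "st-boolean"}, resp))
```

In the development-phase utility, failures of this kind are what would be reported to the browser console.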

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 227
382 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Since the orifice plate is relatively inexpensive, requires very little maintenance and is calibrated only during plant turnarounds, it has come into widespread use in the gas industry. Inaccuracy of measurement in the fiscal metering stations is likely the most significant factor behind mischarges in the natural gas industry in Libya. Even a trivial error in measurement can add up to a fast-escalating financial burden in custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated in the range of multi-million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Probing the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are the greatest strengths of CFD. In this paper, the flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with discharge coefficients estimated by ISO 5167. 
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity and buckling of the orifice plate were all duly investigated. A case of an orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to variations in plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of the vena contracta tappings, which are considered the ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
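The relation between measured flow and discharge coefficient used in such studies can be sketched as follows. The input numbers below are hypothetical values for a 2 in pipe with a beta ratio of 0.5 (not results from the paper), and the expansibility factor is taken as 1 for simplicity:

```python
import math

def discharge_coefficient(q_m, d_orifice, d_pipe, dp, rho):
    """Back out the discharge coefficient C from a measured mass flow,
    using the incompressible orifice-meter equation
        q_m = C / sqrt(1 - beta^4) * (pi/4) * d^2 * sqrt(2 * dp * rho)
    with the expansibility factor set to 1."""
    beta = d_orifice / d_pipe
    q_ideal = (math.pi / 4) * d_orifice**2 * math.sqrt(2 * dp * rho)
    return q_m * math.sqrt(1 - beta**4) / q_ideal

# Hypothetical inputs: 2 in (0.0508 m) pipe, beta = 0.5, air at moderate
# pressure (rho = 6 kg/m^3), differential pressure 5 kPa.
c = discharge_coefficient(q_m=0.0769, d_orifice=0.0254, d_pipe=0.0508,
                          dp=5000.0, rho=6.0)
print(round(c, 3))
```

In a CFD study, `q_m` comes from the simulated flow field and `dp` from the pressure difference between the chosen tapping locations, which is why the computed C varies with the tapping arrangement.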

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 99
381 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when an incoming photon or electron is incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the production of neutrons around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside of the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Other measurements were performed at the door and the treatment console to address the potential radiation safety concerns of the therapists, who must walk in and out of the room for the treatments. Exposures were made with Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for 200 MUs at a dose rate of 600 MU per minute. 
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings: as the jaws are made of high-atomic-number material, they provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared to the dose at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall) and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console it is 2 μSv.h−1 to 0 μSv.h−1 for 10 and 18 MV respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 430
380 The Determination of Pb and Zn Phytoremediation Potential and Effect of Interaction between Cadmium and Zinc on Metabolism of Buckwheat (Fagopyrum Esculentum)

Authors: Nurdan Olguncelik Kaplan, Aysen Akay

Abstract:

Nowadays, soil pollution has become a global problem. Pollutants added to the soil destroy and change its structure, and the resulting problems become more complex; in this sense, correcting them becomes harder and more costly. Cadmium is highly mobile in the soil-plant system, so it can enter the human and animal food chain very easily, which makes it particularly dangerous. Cadmium absorbed and stored by plants causes many metabolic changes, affecting protein synthesis, nitrogen and carbohydrate metabolism, enzyme (nitrate reductase) activation, and photosynthesis and chlorophyll synthesis. Cadmium has no known biological function in plants and is not an essential element. Plants generally take up cadmium in small amounts, and this element competes with zinc; cadmium also causes root damage. Buckwheat (Fagopyrum esculentum) is an important nutraceutical because of its high content of flavonoids, minerals, and vitamins, and its nutritionally balanced amino-acid composition. Buckwheat has relatively high biomass productivity, is adapted to many areas of the world, and can flourish in sterile fields; therefore, buckwheat plants are widely used in phytoremediation. The aim of this study was to evaluate the phytoremediation capacity of the high-yielding plant buckwheat (Fagopyrum esculentum) in soils contaminated with Cd and Zn. The soils were treated with different doses of Cd (0, 12.5, 25, 50, and 100 mg Cd kg⁻¹ soil in the form of 3CdSO₄·8H₂O) and Zn (0, 10, and 30 mg Zn kg⁻¹ soil in the form of ZnSO₄·7H₂O) and incubated for about 60 days. Buckwheat seeds were then sown, and the plants were grown for three months under greenhouse conditions. The test plants were irrigated with pure water after planting. Buckwheat seeds (Gunes and Aktas varieties) were obtained from the Bahri Dagdas International Agricultural Research Institute.
After harvest, the Cd and Zn concentrations of plant biomass and grain, the yield, and the translocation factors (TFs) for Cd and Zn were determined. Cadmium accumulation in biomass and grain increased significantly in a dose-dependent manner. Long-term field trials are required to further investigate the potential of buckwheat to reclaim soil, but this could be undertaken in conjunction with actual remediation schemes. However, the differences in element accumulation among the genotypes were affected more by the properties of the genotypes than by the soil properties. The Gunes genotype accumulated more lead than the Aktas genotype.
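The translocation factor mentioned above is conventionally computed as the ratio of the metal concentration in above-ground tissue to that in the roots. A minimal sketch under that standard definition (the concentration values below are illustrative placeholders, not data from this study):

```python
def translocation_factor(shoot_conc, root_conc):
    """TF = metal concentration in shoots / concentration in roots.
    TF > 1 suggests efficient root-to-shoot transfer, which is
    desirable for phytoextraction."""
    if root_conc <= 0:
        raise ValueError("root concentration must be positive")
    return shoot_conc / root_conc

# Illustrative Cd concentrations (mg kg^-1 dry weight), not measured values:
tf = translocation_factor(shoot_conc=12.0, root_conc=30.0)
print(f"TF = {tf:.2f}")  # TF below 1 means most Cd is retained in the roots
```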

Keywords: buckwheat, cadmium, phytoremediation, zinc

Procedia PDF Downloads 396
379 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Because of increasingly busy lifestyles, the consumption of fast food is rising, and therefore the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each colon cancer patient, the oncologist needs to know the stage of the tumor. The most common staging method is the TNM system, in which M indicates the presence of metastasis, N the extent of spread to the lymph nodes, and T the size of the tumor. Determining all three of these parameters clearly requires imaging, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays entails a cancer risk and a high absorbed dose for the patient, while access to PET/CT is limited due to its high cost. Therefore, this study aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal. In the first (pre-processing) step, histogram equalization was applied to improve image quality, and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor region. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of these tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification.
To validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data), and the remainder for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among the study's advantages.
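The per-fold accuracy, sensitivity, and specificity reported above are simple functions of confusion-matrix counts, averaged over the repetitions. A minimal sketch of that bookkeeping (the confusion counts below are illustrative placeholders, not the study's data):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Illustrative confusion counts, one (tp, tn, fp, fn) tuple per fold:
folds = [(45, 40, 2, 3)] * 10          # placeholder values for 10 folds
per_fold = [metrics(*f) for f in folds]
avg = [sum(m[i] for m in per_fold) / len(per_fold) for i in range(3)]
print("mean accuracy, sensitivity, specificity:",
      [round(v, 4) for v in avg])
```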

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 38
378 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades

Authors: Mojtaba Eshraghi Arani

Abstract:

Freight forwarders, known as the architects of transportation, play a vital role in supply chain management. The package of various services they provide has made their legal nature very controversial, so that they might be qualified on one occasion as principal or carrier and, on another, as agent of the shipper, as the case may be. They may even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. Courts in all countries have long had trouble distinguishing the "forwarder as agent" from the "forwarder as principal" (as is evident in the prominent case of Vastfame Camera Ltd v Birkart Globistics Ltd and Others, Hong Kong, 2005). In a claim against a forwarder, it is not fully predictable which particular test the judge will apply among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal criteria for qualifying freight forwarders, completely different from those of other countries, as is the case in France in comparison with Germany and England. The unpredictability of court decisions in this regard has given freight forwarders the opportunity to impose limitations or exceptions of liability while pretending to play the role of a principal, consequently causing cargo interests to incur ever-increasing damage. The transportation industry needs to remove this uncertainty by unifying the national laws governing freight forwarders' liability. Long ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the "Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods" (hereinafter the "UNIDROIT draft convention").
The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it was never adopted as a convention and was eventually consigned to oblivion. Today, nearly six decades later, the necessity of such a convention is readily apparent. One might argue that the same obstacles, in particular the resistance of the forwarders' association, FIATA, still exist, and thus that it is not logical to revive a forgotten draft convention after such a long period of time. This article argues that the main reason for resisting the UNIDROIT draft convention in the past was the then-pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention never entered into force, with no new accessions since 1996; as a result, the UNIDROIT draft convention should be revived and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of collected data has been used in this manuscript; the sources of the data are international conventions and case law.

Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention

Procedia PDF Downloads 54
377 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study dataset includes six distinct phospholipids and a set of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This reflects our finding that all lipids in the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
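The binary classification step described above can be illustrated, in highly simplified form, with a single-neuron perceptron on toy feature vectors. This is only a sketch of the idea: the two features and their values below are placeholders (the actual study uses ten geometric features and a multi-layer ANN):

```python
# Perceptron separating two toy classes (stand-ins for "charge donor"
# vs "charge acceptor") described by simple numeric feature vectors.

def predict(w, b, x):
    """1 if the weighted feature sum exceeds the threshold, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, labels, epochs=50, lr=0.1):
    """Classic perceptron rule: nudge weights on each misclassification."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(w, b, x)
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Toy, linearly separable feature vectors (illustrative values only):
X = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3], [0.9, 0.8], [0.8, 1.0], [1.0, 0.7]]
y = [0, 0, 0, 1, 1, 1]          # 0 = donor-like, 1 = acceptor-like
w, b = train(X, y)
print([predict(w, b, x) for x in X])   # recovers the training labels
```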

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 40
376 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools

Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus

Abstract:

Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, have recently presented machine tools combining milling and laser welding, allowing machine tools to achieve a higher degree of flexibility and a shorter production time. Still, there are challenges to be accounted for in maintaining the necessary machining accuracy, especially the thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information helps in designing a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to reduce the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g., laser power, angle of the laser beam, reflection coefficients, and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The two constituent aspects of the method are describing the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process.
Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative counterpart to the described experiments is presented. In conclusion, it is shown that the method, together with a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, and of their combination, helps designers minimize the loss of precision in the early stages of the design phase.
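Before a matched FEM model, the thermal loading of a structural component by absorbed laser power can be first-order approximated with a lumped-capacitance balance between absorbed power and convective loss. A minimal sketch with illustrative parameters (the absorptivity, convection coefficient, mass, and area below are placeholders, not values from the paper):

```python
# Lumped-capacitance heating of a machine component absorbing laser power:
#   m * c * dT/dt = alpha * P - h * A * (T - T_amb)
# integrated with explicit Euler; steady state is T_amb + alpha*P/(h*A).

P = 1000.0           # laser power, W                     (illustrative)
alpha = 0.3          # absorptivity of the surface        (illustrative)
h = 15.0             # convection coefficient, W/(m^2 K)  (illustrative)
A = 0.5              # exposed surface area, m^2          (illustrative)
m, c = 20.0, 460.0   # mass (kg) and specific heat (J/(kg K)) of a steel part
T_amb = 20.0         # ambient temperature, deg C

def simulate(t_end, dt=1.0):
    """Component temperature after t_end seconds of continuous exposure."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        T += dt * (alpha * P - h * A * (T - T_amb)) / (m * c)
    return T

T_steady = T_amb + alpha * P / (h * A)   # analytic steady state
print(f"after 2 h: {simulate(7200):.1f} C (steady state {T_steady:.1f} C)")
```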

Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects

Procedia PDF Downloads 243
375 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection

Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng

Abstract:

Nitrite (NO2-) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based limit may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC, and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring nitrite concentration in situ, rapidly, and without reagents, sample pretreatment, or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of NO2- ion. Raoultella planticola, a bacterium expressing NAD(P)H nitrite reductase (NiR), was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. As the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is observed as a result of a deprotonation reaction that increases the local pH in the microsphere membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion at 0.1 ppm and had a linear response up to 400 ppm.
Owing to the large surface-area-to-mass ratio of the acrylic microspheres, efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase is possible, and a steady-state response is achieved in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence, it is suitable for measurements in contaminated samples.
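Quantitation over the reported 0.1-400 ppm linear range amounts to a least-squares calibration of the reflectance response against NO2- concentration, then inverting the fit for unknown samples. A minimal sketch with made-up calibration points (the response values below are illustrative only, not measured data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical calibration standards within the 0.1-400 ppm linear range:
conc = [0.1, 50, 100, 200, 400]            # ppm NO2-
resp = [0.001, 0.50, 1.00, 2.00, 4.00]     # relative reflectance change
m, c = linear_fit(conc, resp)

def concentration(response):
    """Invert the calibration line to read out an unknown sample."""
    return (response - c) / m

print(f"sample at response 1.5 -> {concentration(1.5):.1f} ppm")
```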

Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric

Procedia PDF Downloads 414
374 Giant Cancer Cell Formation: A Link between Cell Survival and Morphological Changes in Cancer Cells

Authors: Rostyslav Horbay, Nick Korolis, Vahid Anvari, Rostyslav Stoika

Abstract:

Introduction: Giant cancer cells (GCC) are common in all types of cancer, especially after unsuccessful therapy. Specific features of such cells include roughly 10-fold enlargement, drug resistance, and the ability to propagate similar daughter cells. We used murine NK/Ly lymphoma, an aggressive and fast-growing lymphoma model in which GCC have already shown drastic changes compared to parental cells (chromatin condensation, nuclear fragmentation, tighter OXPHOS/cellular respiration coupling, multidrug resistance). Materials and methods: In this study, we compared the morpho-functional changes of GCC after treatment with drugs that act predominantly either cytostatically or cytotoxically. We studied the effect of combined cytostatic/cytotoxic drug treatment to determine the correlation between drug efficiency and GCC formation. The G2/M-specific drugs paclitaxel/PTX (50 mg/mouse) and vinblastine/VBL (50 mg/mouse), and the DNA-targeting agents doxorubicin/DOX (125 ng/mouse) and cisplatin/CP (225 ng/mouse), were administered to C57 black mice. Several staining tests were chosen to estimate the morphological and physiological state (propidium iodide, Rhodamine-123, DAPI, JC-1, Janus Green, Giemsa, and others), covering cell integrity, nuclear fragmentation and chromatin condensation, mitochondrial activity, and more. Single- and two-factor ANOVA analyses were performed to determine the correlation between the criteria of the applied drugs and the cytomorphological changes. Results: In all treatments, several morphological changes were observed (intracellular vacuolization, membrane blebbing, and an interconnected mitochondrial network). A lower gain in ascites (49.97% compared to the control group) and the longest lifespan after tumor injection (22±9 days) were obtained with single VBL and single DOX injections.
Such ascites contained the highest proportion of GCC (83.7±9.2%), the lowest cell count (72.7±31.0 million/ml), and showed a strong correlation between increased mitochondrial activity and the percentage of giant NK/Ly cells. A high proportion of viable GCC (82.1±9.2%) was observed compared to the parental forms (15.4±11.9%), indicating that GCC are more drug resistant than the parental cells. All this indicates that giant cell formation and its role in acquired drug resistance is an expanding field in cancer research.
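The single-factor ANOVA used above to test for treatment effects reduces to comparing between-group and within-group mean squares. A minimal sketch on toy measurements (the values are illustrative, not the study's data):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way (single-factor) ANOVA."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    # Between-group sum of squares and degrees of freedom
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    dfb = len(groups) - 1
    # Within-group sum of squares and degrees of freedom
    ssw = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    dfw = len(all_vals) - len(groups)
    return (ssb / dfb) / (ssw / dfw)

# Toy "percentage of giant cells" measurements, one list per treatment group:
F = one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
print(f"F = {F:.2f}")
```

The F value is then compared against the F distribution with (dfb, dfw) degrees of freedom to judge significance.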

Keywords: ANOVA, cisplatin, doxorubicin, drug resistance, giant cancer cells, NK/Ly lymphoma, paclitaxel, vinblastine

Procedia PDF Downloads 196
373 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices

Authors: S. Srinivasan, E. Cretu

Abstract:

The information-flow (e.g., block-diagram or signal-flow-graph) paradigm for the design and simulation of microelectromechanical systems (MEMS) allows MEMS devices to be modeled easily using causal transfer functions and interfaced with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal-flow modeling. Moreover, models of fundamental components acting as building blocks (e.g., gap-varying MEMS capacitor structures) depend not only on the component but also on the specific excitation mode (e.g., voltage or charge actuation). In contrast, the energy-flow modeling paradigm, in terms of generalized across-through variables, offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation, and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometric nonlinearities and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which offer the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy-flow system models.
Test bench simulations of the library models compare favorably with both analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
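For the idealized one-degree-of-freedom case of a parallel-plate, gap-varying capacitor suspended by a linear spring, the pull-in voltage computed numerically above has a well-known closed form, which is useful as a sanity check for library models. A minimal numeric sketch (the spring constant, gap, and electrode area are illustrative values, not parameters from the library):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Pull-in voltage of a 1-DOF parallel-plate electrostatic actuator.
    Stability is lost once the movable plate has travelled g0/3:
        V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))
    """
    return math.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * area))

# Illustrative MEMS proof-mass suspension:
k = 1.0        # spring constant, N/m
g0 = 2e-6      # initial gap, m
area = 1e-8    # electrode area, m^2 (100 um x 100 um)
print(f"V_pi = {pull_in_voltage(k, g0, area):.2f} V")
```

A large-signal Simscape sweep of the same structure should lose its stable equilibrium near this voltage, which makes the formula a quick cross-check for the numerical pull-in computation.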

Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape

Procedia PDF Downloads 114