Search results for: linear complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4848

1098 Exploring the Psychosocial Brain: A Retrospective Analysis of Personality, Social Networks, and Dementia Outcomes

Authors: Felicia N. Obialo, Aliza Wingo, Thomas Wingo

Abstract:

Psychosocial factors such as personality traits and social networks influence cognitive aging and dementia outcomes both positively and negatively. The inherent complexity of these factors makes defining the underlying mechanisms of their influence difficult; however, exploring their interactions affords promise in the field of cognitive aging. The objective of this study was to elucidate some of these interactions by determining the relationship between social network size and dementia outcomes and by determining whether personality traits mediate this relationship. The longitudinal Alzheimer’s Disease (AD) database provided by Rush University’s Religious Orders Study/Memory and Aging Project was utilized to perform retrospective regression and mediation analyses on 3,591 participants. Participants who were cognitively impaired at baseline were excluded, and analyses were adjusted for age, sex, common chronic diseases, and vascular risk factors. Dementia outcome measures included cognitive trajectory, clinical dementia diagnosis, and postmortem beta-amyloid plaque (AB) and neurofibrillary tangle (NT) accumulation. Personality traits included agreeableness (A), conscientiousness (C), extraversion (E), neuroticism (N), and openness (O). The results show a positive correlation between social network size and cognitive trajectory (p = 0.004) and a negative relationship between social network size and the odds of dementia diagnosis (p = 0.024/ Odds Ratio (OR) = 0.974). Only neuroticism mediates the positive relationship between social network size and cognitive trajectory (p < 2e-16). Agreeableness, extraversion, and neuroticism all mediate the negative relationship between social network size and dementia diagnosis (p = 0.098, p = 0.054, and p < 2e-16, respectively).
All personality traits are independently associated with dementia diagnosis (A: p = 0.016/ OR = 0.959; C: p = 0.000007/ OR = 0.945; E: p = 0.028/ OR = 0.961; N: p = 0.000019/ OR = 1.036; O: p = 0.027/ OR = 0.972). Only conscientiousness and neuroticism are associated with postmortem AD pathologies; specifically, conscientiousness is negatively associated (AB: p = 0.001, NT: p = 0.025) and neuroticism is positively associated with pathologies (AB: p = 0.002, NT: p = 0.002). These results support the study’s objectives, demonstrating that social network size and personality traits are strongly associated with dementia outcomes, particularly the odds of receiving a clinical diagnosis of dementia. Personality traits interact significantly and beneficially with social network size to influence the cognitive trajectory and future dementia diagnosis. These results reinforce previous literature linking social network size to dementia risk and provide novel insight into the differential roles of individual personality traits in cognitive protection.
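The mediation structure described in the abstract (a path from predictor to mediator, a path from mediator to outcome adjusting for the predictor, and their product as the indirect effect) can be sketched on simulated data. All variable names, directions, and effect sizes below are illustrative assumptions, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000

# Hypothetical standardized variables: social network size (x),
# neuroticism as mediator (m), cognitive trajectory as outcome (y).
x = rng.normal(size=n)
m = -0.5 * x + rng.normal(size=n)            # assumed: larger networks, lower neuroticism
y = 0.3 * x - 0.4 * m + rng.normal(size=n)   # assumed: neuroticism harms cognition

def ols(X, y):
    """Least-squares coefficients, intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x.reshape(-1, 1), m)[1]          # path a: predictor -> mediator
b = ols(np.column_stack([x, m]), y)[2]   # path b: mediator -> outcome, adjusting for x
indirect = a * b                         # indirect (mediated) effect
print(a, b, indirect)
```

In practice, the significance of `indirect` would be assessed with bootstrap or Sobel-type tests rather than read off directly.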

Keywords: Alzheimer’s disease, cognitive trajectory, personality traits, social network size

Procedia PDF Downloads 121
1097 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model for a response variable in the form of count data that follows a Poisson distribution. A pair of count variables that show high correlation can be modeled with bivariate Poisson regression; the numbers of infant deaths and maternal deaths are count data of this kind. Poisson regression assumes equidispersion, i.e., that the mean and variance are equal. In practice, however, count data often have a variance greater than the mean (overdispersion) or less than the mean (underdispersion). Violations of this assumption can be overcome by applying generalized Poisson regression. In addition, the characteristics of each regency can affect the number of cases that occur; this is addressed by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications that are handled, households with clean and healthy behavior, and married women whose first marriage occurred under the age of 18.
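The equidispersion assumption can be illustrated numerically: the variance-to-mean ratio of counts is close to 1 under a Poisson model and well above 1 under overdispersion. The counts below are simulated (a negative binomial stands in for overdispersed mortality counts); they are not the East Java data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Poisson counts: mean equals variance (equidispersion).
poisson_counts = rng.poisson(lam=4.0, size=n)

# Negative-binomial counts with mean 4 but variance 12 (overdispersed).
overdispersed = rng.negative_binomial(n=2, p=1/3, size=n)

def dispersion_ratio(counts):
    """Variance-to-mean ratio: ~1 if equidispersed, >1 if overdispersed."""
    return counts.var(ddof=1) / counts.mean()

print(dispersion_ratio(poisson_counts))
print(dispersion_ratio(overdispersed))
```

A ratio far from 1 signals that plain Poisson regression is inappropriate and a generalized Poisson (or negative binomial) model should be used instead.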

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 154
1096 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes rarely match water demands, one of the best ways to manage water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In optimal reservoir operation, several important objectives must be considered simultaneously. To study this approach, 46 years of statistical data (1955 until 2001) for the Bakhtiari and Roudbar dams are used. First, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. The rule-curve operation policy was then compared with the standard operation policy. The proposed method distributed the shortage across the whole year, so the least damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was smaller with the proposed algorithm than with the other two methods. The results show that moderate values of the F and Cr coefficients provide the optimum situation and prevent the DE algorithm from being trapped in local optima; the best values found are 0.6 for F and 0.5 for Cr. After finding the best combination of F and Cr values, the effect of population size was examined: populations of 4, 25, 50, 100, 500, and 1000 members were studied at two generation counts (G = 50 and 100). The results indicate that a generation count of 200 is suitable for optimization. Runtime increases almost linearly with population size, which shows the effect of population size on the algorithm's runtime; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicted severe vulnerability on the system. In years of low rainfall, the proposed method gave very good results compared with the other comparative methods.
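As a sketch of the optimization step, scipy's differential evolution implementation exposes the F (mutation) and Cr (recombination) coefficients discussed above. The objective below is a toy squared-shortfall function with invented demand values, not the study's reservoir model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a rule-curve objective: penalize the squared shortfall
# between fixed "monthly demands" (invented values) and the releases.
demand = np.array([5.0, 6.0, 7.0, 6.0])

def shortfall(releases):
    return np.sum((demand - releases) ** 2)

result = differential_evolution(
    shortfall,
    bounds=[(0.0, 10.0)] * 4,
    mutation=0.6,        # F coefficient from the abstract
    recombination=0.5,   # Cr coefficient from the abstract
    seed=1,
)
print(result.x, result.fun)
```

For this smooth toy objective the optimizer recovers the demands exactly; the real problem's value lies in handling non-smooth, multi-modal reservoir objectives.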

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 71
1095 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics

Authors: Said Belaaouad

Abstract:

This study aimed to explore the quantitative structure-activity relationship (QSAR) of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents against breast cancer. The electronic descriptors were obtained using the Density Functional Theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R2=0.849, R2adj=0.656, MSE=0.056, R2test=0.710, and Q2cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Furthermore, leveraging the validated QSAR model, new derivatives of 1,2,4-Triazine-3(2H)-one were designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The tubulin-colchicine binding site, which plays a crucial role in cancer treatment, was chosen as the target protein. Over a simulation trajectory spanning 100 ns, the binding affinity was calculated using the MMPBSA script. As a result, fourteen novel tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the tubulin-colchicine binding site.
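The reported R2 and cross-validated Q2 statistics can be reproduced in miniature: fit a multiple linear regression on descriptor data and compute the ordinary R2 together with a leave-one-out Q2. The descriptor matrix below is random, standing in for DFT-derived descriptors, and the activity values are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 30, 5   # 30 hypothetical derivatives, 5 descriptors (e.g. χ, TE, NHD, LogS, I)
X = rng.normal(size=(n, p))
true_beta = np.array([0.8, -0.5, 0.3, 0.6, -0.2])   # invented coefficients
y = X @ true_beta + 0.3 * rng.normal(size=n)        # pIC50-like activity

def fit(X, y):
    Xd = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

beta = fit(X, y)
r2 = 1 - np.sum((y - predict(beta, X)) ** 2) / np.sum((y - y.mean()) ** 2)

# Leave-one-out cross-validated Q2: refit with each compound held out.
loo_pred = np.array([
    predict(fit(np.delete(X, i, 0), np.delete(y, i)), X[i:i + 1])[0]
    for i in range(n)
])
q2 = 1 - np.sum((y - loo_pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2, q2)
```

Q2 is computed on held-out predictions, so it is normally lower than R2; a large gap between the two flags an overfitted QSAR model.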

Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamics simulations, MMPBSA calculation

Procedia PDF Downloads 82
1094 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces or of the hole, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as of the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires so few resources that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders, and the simulation result appears in an adjacent window, changing in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine; time-intensive experiments can be reduced and set-up processes completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets over the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependencies between parameter pairs.
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software’s potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
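The metamodeling idea, replacing an expensive simulation with a cheap surrogate fitted once over the parameter space, can be sketched as follows. The "expensive" ablation model here is an invented analytic function, used only to show the workflow.

```python
import numpy as np

# Stand-in for an expensive ablation simulation: depth as a nonlinear
# function of laser power P and feed speed S (purely illustrative physics).
def expensive_sim(P, S):
    return P ** 2 / (1.0 + S)

rng = np.random.default_rng(3)
P = rng.uniform(1.0, 5.0, 300)
S = rng.uniform(0.1, 2.0, 300)
depth = expensive_sim(P, S)

# Cheap polynomial metamodel fitted once by least squares; afterwards each
# evaluation is a dot product, fast enough for interactive sliders.
def features(P, S):
    P, S = np.atleast_1d(P), np.atleast_1d(S)
    return np.column_stack([np.ones_like(P), P, S, P * S, P ** 2,
                            S ** 2, P ** 2 * S, P ** 2 * S ** 2])

coef, *_ = np.linalg.lstsq(features(P, S), depth, rcond=None)

def metamodel(P, S):
    return features(P, S) @ coef
```

Once fitted, `metamodel` can be swept over dense parameter grids to build the process maps described above, at a tiny fraction of the cost of re-running the simulation.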

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 208
1093 Functional Performance of Unpaved Roads Reinforced with Treated Coir Geotextiles

Authors: Priya Jaswal, Vivek, S. K. Sinha

Abstract:

One of the most important and complicated factors influencing the functional performance of unpaved roads is traffic loading. The complexity of traffic loading arises from the variable magnitude and frequency of the load, which causes unpaved roads to fail prematurely. Unpaved roads are low-volume roads, and as peri-urbanization increases, they act as a means to boost the rural economy. This has also increased traffic on unpaved roads, intensifying the issues of settlement, rutting, and fatigue failure. This is a major concern for unpaved roads built on poor subgrade soil, as excessive rutting under heavy loads can lead to driver discomfort, vehicle damage, and increased maintenance costs. Some researchers have found that the rate of deformation of unpaved roads increases when a consistent static load is exerted, as opposed to a rapidly changing load. Previously, some of the most common methods for overcoming rutting and fatigue failure included chemical stabilisation, fibre reinforcement, and so on. However, due to their high cost, engineers' attention has shifted to geotextiles, which are used as reinforcement in unpaved roads. Geotextiles perform the functions of filtration, lateral confinement of the base material, vertical restraint of the subgrade soil, and the tension-membrane effect. The use of geotextiles increases the strength of unpaved roads and is economically viable because it reduces the required aggregate thickness, which in turn requires less earthwork; it is thus recommended for unpaved road applications. The majority of geotextiles used previously were polymeric, but with growing awareness of sustainable development to preserve the environment, researchers' focus has shifted to natural fibres. Coir is one such natural fibre, with the advantages of a higher tensile strength than other bast fibres, eco-friendliness, low cost, and biodegradability.
However, various researchers have discovered that the surface of coir fibre is covered with various impurities, voids, and cracks, which act as a plane of weakness and limit the potential application of coir geotextiles. To overcome this limitation, chemical surface modification of coir geotextiles is widely accepted by researchers because it improves the mechanical properties of coir geotextiles. The current paper reviews the effect of using treated coir geotextiles as reinforcement on the load-deformation behaviour of a two-layered unpaved road model.

Keywords: coir, geotextile, treated, unpaved

Procedia PDF Downloads 88
1092 Clay Hydrogel Nanocomposite for Controlled Small Molecule Release

Authors: Xiaolin Li, Terence Turney, John Forsythe, Bryce Feltis, Paul Wright, Vinh Truong, Will Gates

Abstract:

Clay-hydrogel nanocomposites have attracted great attention recently, mainly because of their enhanced mechanical properties and ease of fabrication. Moreover, the unique platelet structure of clay nanoparticles enables the incorporation of bioactive molecules, such as proteins or drugs, through ion exchange, adsorption or intercalation. This study seeks to improve the mechanical and rheological properties of a novel hydrogel system, copolymerized from a tetrapodal polyethylene glycol (PEG) thiol and a linear, triblock PEG-PPG-PEG (PPG: polypropylene glycol) α,ω-bispropynoate polymer, with the simultaneous incorporation of various amounts of Na-saturated montmorillonite clay (MMT) platelets (av. lateral dimension = 200 nm), to form a bioactive three-dimensional network. Although the parent hydrogel has controlled swelling ability and its PEG groups have good affinity for the clay platelets, it suffers from poor mechanical stability and is currently unsuitable for potential applications. Nanocomposite hydrogels containing 4 wt% MMT showed a twelve-fold enhancement in compressive strength, reaching 0.75 MPa, and also a three-fold acceleration in gelation time, when compared with the parent hydrogel. Interestingly, clay nanoplatelet incorporation into the hydrogel slowed down the rate of its dehydration in air. Preliminary results showed that protein binding by the MMT varied with the nature of the protein, as horseradish peroxidase (HRP) was more strongly bound than bovine serum albumin. The HRP was no longer active when bound, presumably as a result of extensive structural refolding. Further work is being undertaken to assess protein binding behaviour within the nanocomposite hydrogel for potential diabetic wound healing applications.

Keywords: hydrogel, nanocomposite, small molecule, wound healing

Procedia PDF Downloads 257
1091 Development and Evaluation of New Complementary Food from Maize, Soya Bean and Moringa for Young Children

Authors: Berhan Fikru

Abstract:

The objective of this study was to develop a new complementary food from maize, soybean, and moringa for young children. The complementary foods were formulated with linear programming (LP Nutri-survey software), with Faffa (corn soya blend) used as the control. Analyses were made of the formulated blends and compared with the control and the recommended daily intake (RDI). Three complementary foods were composed of maize, soya bean, moringa, and sugar in ratios of 65:20:15:0, 55:25:15:5, and 65:20:10:5 for blends 1, 2, and 3, respectively. The blends were formulated based on the protein, energy, mineral (iron, zinc, and calcium), and vitamin (vitamin A and C) content of the foods. The overall results indicated that the nutrient content of Faffa (control) was 16.32% protein, 422.31 kcal energy, 64.47 mg calcium, 3.8 mg iron, 1.87 mg zinc, 0.19 mg vitamin A, and 1.19 mg vitamin C; blend 1 had 17.16% protein, 429.84 kcal energy, 330.40 mg calcium, 6.19 mg iron, 1.62 mg zinc, 6.33 mg vitamin A, and 4.05 mg vitamin C; blend 2 had 20.26% protein, 418.79 kcal energy, 417.44 mg calcium, 9.26 mg iron, 2.16 mg zinc, 8.43 mg vitamin A, and 4.19 mg vitamin C; whereas blend 3 exhibited 16.44% protein, 417.42 kcal energy, 242.4 mg calcium, 7.09 mg iron, 2.22 mg zinc, 3.69 mg vitamin A, and 4.72 mg vitamin C. The differences between all means were statistically significant (p < 0.05). Sensory evaluation showed that the Faffa control and blend 3 were preferred by semi-trained panelists. Blend 3 was better in terms of mineral and vitamin content than the FAFFA corn soya blend, was comparable with the WFP proprietary products CSB+ and CSB++, and fulfills the WHO recommendations for protein, energy, and calcium. The suggested formulation with moringa powder can therefore be used as a complementary food to improve nutritional status and help solve problems associated with protein-energy and micronutrient malnutrition among young children in developing countries, particularly in Ethiopia.
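Blend formulation by linear programming, as done with the LP Nutri-survey software, amounts to minimizing cost subject to nutrient constraints. The sketch below uses scipy's linprog with illustrative per-100 g nutrient values and costs; these numbers are assumptions for the example, not the measured values reported above.

```python
import numpy as np
from scipy.optimize import linprog

# Ingredients: maize, soya bean, moringa (illustrative per-100 g values).
protein = np.array([9.0, 36.0, 27.0])      # g protein / 100 g
energy = np.array([365.0, 446.0, 205.0])   # kcal / 100 g
cost = np.array([0.5, 1.2, 2.0])           # arbitrary cost units / 100 g

# Minimize cost subject to: fractions sum to 1, protein >= 16 g,
# energy >= 380 kcal (inequality rows are negated for the <= form).
res = linprog(
    c=cost,
    A_ub=-np.array([protein, energy]),
    b_ub=-np.array([16.0, 380.0]),
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[1.0],
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)
```

The solver returns the cheapest ingredient fractions that satisfy every nutrient floor; adding iron, zinc, calcium, and vitamin rows to `A_ub` extends the sketch toward the full formulation problem.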

Keywords: corn soya blend, proximate composition, micronutrient, mineral chelating agents, complementary foods

Procedia PDF Downloads 288
1090 Studying Second Language Development from a Complex Dynamic Systems Perspective

Authors: L. Freeborn

Abstract:

This paper discusses the application of complex dynamic systems theory (DST) to the study of individual differences in second language development. This transdisciplinary framework allows researchers to view the trajectory of language development as a dynamic, non-linear process. A DST approach views language as multi-componential, consisting of multiple complex systems and nested layers. These multiple components and systems continuously interact and influence each other at both the macro- and micro-level. Dynamic systems theory aims to explain and describe the development of the language system, rather than make predictions about its trajectory. Such a holistic and ecological approach to second language development allows researchers to include various research methods from neurological, cognitive, and social perspectives. A DST perspective would involve in-depth analyses as well as mixed methods research. To illustrate, a neurobiological approach to second language development could include non-invasive neuroimaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to investigate areas of brain activation during language-related tasks. A cognitive framework would further include behavioural research methods to assess the influence of intelligence and personality traits, as well as individual differences in foreign language aptitude, such as phonetic coding ability and working memory capacity. Exploring second language development from a DST approach would also benefit from including perspectives from the field of applied linguistics, regarding the teaching context, second language input, and the role of affective factors such as motivation. In this way, applying mixed research methods from neurobiological, cognitive, and social approaches would enable researchers to have a more holistic view of the dynamic and complex processes of second language development.

Keywords: dynamic systems theory, mixed methods, research design, second language development

Procedia PDF Downloads 124
1089 Simultaneous Determination of Bisphenol A, Phthalates and Their Metabolites in Human Urine by Tandem SPE Coupled to GC-MS

Authors: L. Correia-Sá, S. Norberto, Conceição Calhau, C. Delerue-Matos, V. F. Domingues

Abstract:

Endocrine disrupting chemicals (EDCs) are synthetic compounds that, although initially designed for a specific function, are now being linked to a wide range of side effects. The list of possible EDCs is growing and includes phthalates and bisphenol A (BPA). Phthalates are among the most widely used plasticizers for improving the extensibility, elasticity, and workability of polyvinyl chloride (PVC), polyvinyl acetates, etc. Considered non-toxic and harmless additives for polymers, they were used unrestrainedly all over the world for several decades. However, recent studies have indicated that some phthalates and their metabolic products are reproductive and developmental toxicants in animals and suspected endocrine disruptors in humans. BPA (2,2-bis(4-hydroxyphenyl)propane) is a high-production-volume chemical mainly used in the production of polycarbonate plastics and epoxy resins. Although BPA was initially considered a weak environmental estrogen, it is now known that this compound can stimulate several cellular responses at very low concentrations. The aim of this study was to develop a method based on tandem SPE to evaluate the presence of phthalates, their metabolites, and BPA in human urine samples. The analyzed compounds included dibutyl phthalate (DBP), di-2-ethylhexyl phthalate (DEHP), BPA, mono-isobutyl phthalate (MiBP), monobutyl phthalate (MBP), and mono-(2-ethyl-5-oxohexyl) phthalate (MEOHP). Two SPE cartridges, both from Phenomenex, were applied: the Strata-X polymeric reversed phase and the Strata-X-A (strong anion exchange). Chromatographic analyses were carried out on a Thermo GC ULTRA GC-MS/MS. Good recoveries and linear calibration curves were obtained. After validation, the methodology was applied to human urine samples for the evaluation of phthalates, their metabolites, and BPA.

Keywords: bisphenol A (BPA), gas chromatography, metabolites, phthalates, SPE, tandem mode

Procedia PDF Downloads 281
1088 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions

Authors: Shivam Patel, Abdullah Y. Usmani

Abstract:

Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate unsteady three-dimensional flow through the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, nature of flow patterns, wall pressure, and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy of surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium, where the extended Darcy’s law, including Forchheimer and Brinkman terms, is applicable. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that the fluid impingement on the outer wall of the arterial bend leads to abnormality in the distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure, and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
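The Carreau-Yasuda model mentioned above gives blood viscosity as a function of shear rate. The parameter values below are commonly cited literature fits for blood, assumed here for illustration rather than taken from this paper.

```python
import numpy as np

# Carreau-Yasuda shear-thinning viscosity model:
# mu(g) = mu_inf + (mu_0 - mu_inf) * [1 + (lam*g)^a]^((n-1)/a)
# Commonly cited blood parameters (assumed, not from this study):
mu_0, mu_inf = 0.056, 0.00345   # zero- and infinite-shear viscosity [Pa s]
lam, n, a = 3.313, 0.3568, 2.0  # relaxation time [s], power index, Yasuda exponent

def viscosity(shear_rate):
    return mu_inf + (mu_0 - mu_inf) * (1 + (lam * shear_rate) ** a) ** ((n - 1) / a)

rates = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])   # shear rates [1/s]
print(viscosity(rates))
```

Viscosity falls monotonically with shear rate, from mu_0 at rest toward mu_inf at high shear, which is why Newtonian models overestimate viscosity in the fast-flowing regions of the aneurysm.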

Keywords: 3D FVM, cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid

Procedia PDF Downloads 223
1087 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used for identifying technological trends and formulating technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques, which can extract keywords that indicate the important contents of patent documents. However, this approach uses only simple keyword-frequency counting, so it cannot capture the semantic relationships between keywords or semantic information such as how technologies are used in their technology area and how they affect other technologies. To automatically analyze the unstructured technological information in patents and extract its semantic content, the text should be transformed into an abstracted form that captures the key technological concepts. The sentence structure 'SAO' (subject, action, object) has recently emerged as a representation of such key concepts and can be extracted with natural language processing (NLP). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that extracts SAO structures through technical-element extraction rules. Although sentences in patent texts have a unique format, prior studies have depended on general NLP tools built for common documents such as newspapers, research papers, and Twitter mentions, and therefore cannot account for the specific sentence structures of patent documents.
To overcome this limitation, we identified the unique forms of patent sentences and defined the SAO structures in patent text data. Four types of technical elements are considered: technology adoption purpose, application area, tool for technology, and technical components. Each of these four sentence-structure types has its own specific word structure, determined by the location or sequence of the parts of speech in the sentence. Finally, we developed algorithms for extracting SAOs; this result offers insight into the technology innovation process by providing different perspectives on technology.
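A deliberately minimal sketch of SAO extraction: real systems use a full NLP parser, but a small action-verb lexicon and a naive subject/verb/object split already show the target data structure. The verb list and example sentence below are invented for illustration.

```python
# Minimal rule-based SAO (subject-action-object) extraction sketch.
# A tiny hypothetical action-verb lexicon; a real pipeline would use
# POS tagging and dependency parsing instead of exact word matching.
ACTIONS = {"reduces", "improves", "filters", "controls", "measures"}

def extract_sao(sentence):
    """Return (subject, action, object) on the first action verb, else None."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w.lower() in ACTIONS:
            return (" ".join(words[:i]), w, " ".join(words[i + 1:]))
    return None

claim = "The cooling channel reduces thermal deformation."
print(extract_sao(claim))
# -> ('The cooling channel', 'reduces', 'thermal deformation')
```

Here the AO pair ("reduces", "thermal deformation") states the problem addressed and the subject ("The cooling channel") is the solution, matching the problem-solution reading described above.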

Keywords: NLP, patent analysis, SAO, semantic analysis

Procedia PDF Downloads 258
1086 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, so that objects in the same group have higher similarity and objects in different groups have lower similarity. Thus, clustering can be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can yield better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, researchers have investigated evolutionary multi-objective optimization algorithms for optimizing multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. First, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended to a multi-objective form. Second, two learning strategies based on the two learning archives are proposed to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index; on the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in the two learning archives.
When the objects in these two archives do not change for two consecutive iterations, randomly re-initializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate its performance, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on the evaluation indexes of several datasets. To further verify the effectiveness and feasibility of the designed strategies, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies contribute positively to the performance improvement over the original BFO.
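The two competing objectives, intra-cluster compactness and inter-cluster separation, can be computed directly; a multi-objective optimizer such as the one proposed would search over label assignments trading them off. The toy two-cluster data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two well-separated toy clusters in 2D.
pts = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

def objectives(points, labels):
    """Two clustering objectives: intra-cluster compactness (minimize)
    and inter-cluster centroid separation (maximize)."""
    centroids = np.array([points[labels == k].mean(axis=0) for k in (0, 1)])
    compact = np.mean(np.linalg.norm(points - centroids[labels], axis=1))
    separation = np.linalg.norm(centroids[0] - centroids[1])
    return compact, separation

good = objectives(pts, labels)                    # true partition
bad = objectives(pts, rng.permutation(labels))    # shuffled partition
print(good, bad)
```

The true partition scores better on both objectives at once; on harder, overlapping datasets the two objectives conflict, which is exactly where a multi-objective search pays off.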

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 128
1085 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the winter olive harvest, and the production of olive oil (better known to professionals as the crushing operation) have long interested institutional traders such as olive-oil offices, private food-industry companies refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. Contrary to what might be expected, the major problem facing olive-oil producers each winter campaign is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive-oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the scale of the financial challenge involved in building private information channels globally to gain an advantage. The methodology is based on two stages: in the first stage, we study econometrically the formation mechanisms of the olive-oil price in order to understand market participants' behavior, implementing ARMA, SARMA, and GARCH models and stochastic diffusion processes; the second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive-oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates volatility clustering, non-linear dependence, and cyclicity phenomena.
By imitating each other during certain periods of the campaign, the different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is a back-propagation artificial neural network with input information based on wavelet decomposition and recent price history.
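
The volatility-clustering stylized fact mentioned above can be reproduced with a minimal GARCH(1,1) simulation. The parameter values below are illustrative assumptions, not estimates fitted to olive oil data:

```python
import numpy as np

# Illustrative GARCH(1,1) simulation (assumed parameters, not fitted values):
#   r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85        # alpha + beta < 1 => stationary
n = 2000
r = np.zeros(n)                              # simulated "returns"
sigma2 = np.full(n, omega / (1 - alpha - beta))  # start at unconditional variance
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def autocorr(x, lag):
    """Lag-k sample autocorrelation."""
    x = x - x.mean()
    return float(x[:-lag] @ x[lag:] / (x @ x))

# Returns are serially uncorrelated, but squared returns are not: that
# persistence of variance is the clustering the abstract describes.
print("acf(r, 1)   =", round(autocorr(r, 1), 3))
print("acf(r^2, 1) =", round(autocorr(r ** 2, 1), 3))
```

The positive lag-1 autocorrelation of squared returns, alongside near-zero autocorrelation of the raw returns, is the signature of volatility clustering that a GARCH fit captures.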

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 330
1084 Engineering a Band Gap Opening in Dirac Cones on Graphene/Tellurium Heterostructures

Authors: Beatriz Muñiz Cano, J. Ripoll Sau, D. Pacile, P. M. Sheverdyaeva, P. Moras, J. Camarero, R. Miranda, M. Garnica, M. A. Valbuena

Abstract:

Graphene, in its pristine state, is a zero-gap semiconductor with massless Dirac fermion carriers that conducts electrons like a metal. Nevertheless, the absence of a band gap makes it impossible to control the material's electrons, which is essential for performing on-off switching operations in transistors. It is therefore necessary to open a finite gap in the energy dispersion at the Dirac point. Intense research has gone into engineering band gaps while preserving the exceptional properties of graphene, and different strategies have been proposed, among them quantum confinement in 1D nanoribbons and the introduction of superperiodic potentials in graphene. Moreover, in the context of developing new 2D materials and van der Waals heterostructures with exciting emergent properties, such as 2D transition-metal chalcogenide monolayers, it is fundamental to know any possible interaction between chalcogen atoms and graphene-supporting substrates. In this work, we report a combined Scanning Tunneling Microscopy (STM), Low-Energy Electron Diffraction (LEED), and Angle-Resolved Photoemission Spectroscopy (ARPES) study of a new superstructure formed when Te is evaporated onto (and intercalated into) graphene on Ir(111). This superstructure leads to electronic doping of the Dirac cone while the linear dispersion of the massless Dirac fermions is preserved. Very interestingly, our ARPES measurements evidence a large band gap (~400 meV) at the Dirac point of the graphene Dirac cones, below but close to the Fermi level. We have also observed signatures of the Dirac-point binding energy being tuned (upwards or downwards) as a function of Te coverage.

Keywords: angle resolved photoemission spectroscopy, ARPES, graphene, spintronics, spin-orbitronics, 2D materials, transition metal dichalcogenides, TMDCs, TMDs, LEED, STM, quantum materials

Procedia PDF Downloads 65
1083 On Crack Tip Stress Field in Pseudo-Elastic Shape Memory Alloys

Authors: Gulcan Ozerim, Gunay Anlas

Abstract:

In shape memory alloys, upon loading, the stress increases around the crack tip and a martensitic phase transformation occurs in the early stages. In many studies the stress distribution in the vicinity of the crack tip is represented using linear elastic fracture mechanics (LEFM), although the pseudo-elastic behavior results in a nonlinear stress-strain relation. In this study, the HRR singularity (Hutchinson, Rice and Rosengren), which uses Rice's path-independent J-integral, is applied to formulate the stress distribution around the crack tip. In the HRR approach, the Ramberg-Osgood model for the stress-strain relation of power-law hardening materials is used to represent the elastic-plastic behavior. Although it is recoverable, the inelastic portion of the deformation in the martensitic transformation (up to the end of transformation) resembles that of plastic deformation. To determine the constants of the Ramberg-Osgood equation, the material's response is simulated in ABAQUS using a UMAT based on the ZM (Zaki-Moumni) thermo-mechanically coupled model, and the stress-strain curve of the material is plotted. An edge-cracked shape memory alloy (Nitinol) plate is loaded quasi-statically under mode I and modeled in ABAQUS, and the opening stress values ahead of the crack tip are calculated. The stresses are also evaluated using the asymptotic equations of both LEFM and the HRR field. The results show that in the transformation zone around the crack tip, the stress values are represented much better by the HRR singularity, although the J-integral does not show path-independent behavior. For nodes very close to the crack tip, the HRR singularity is not valid, due to the non-proportional loading effect and high stress values that exceed the transformation finish stress.
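
For reference, the one-dimensional Ramberg-Osgood relation used in the HRR approach has the form eps/eps0 = sigma/sigma0 + alpha*(sigma/sigma0)^n. A minimal sketch follows; all constants are illustrative assumptions, not the fitted Nitinol values obtained from the ZM-model simulation:

```python
# Ramberg-Osgood stress-strain relation used in the HRR approach:
#   eps/eps0 = sigma/sigma0 + alpha * (sigma/sigma0)**n,  eps0 = sigma0/E
# All constants below are assumed for illustration only.
E = 70e9           # Young's modulus, Pa (assumed)
sigma0 = 400e6     # reference (yield-like) stress, Pa (assumed)
alpha, n = 0.5, 5.0
eps0 = sigma0 / E

def ramberg_osgood_strain(sigma):
    """Total strain at a given stress: elastic part plus power-law part."""
    s = sigma / sigma0
    return eps0 * (s + alpha * s ** n)

for s in (100e6, 400e6, 600e6):
    print(f"{s / 1e6:5.0f} MPa -> strain {ramberg_osgood_strain(s):.5f}")
```

The power-law term dominates above sigma0, which is what makes the HRR field a better description than LEFM inside the transformation zone.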

Keywords: crack, HRR singularity, shape memory alloys, stress distribution

Procedia PDF Downloads 319
1082 Investigating Students' Understanding about Mathematical Concept through Concept Map

Authors: Rizky Oktaviana

Abstract:

The main purpose of learning lies in improving students' understanding. Teachers usually use written tests to measure students' understanding of learning material, especially in mathematics. This common method has a shortcoming: in mathematics, a written test only shows the procedural steps used to solve problems. Teachers are therefore unable to see whether students actually understand mathematical concepts and the relations between them. One of the best tools for observing students' understanding of mathematical concepts is the concept map. The goal of this research is to describe junior high school students' understanding of mathematical concepts through concept maps, based on differences in mathematical ability. There were three steps in this research. The first step was choosing the research subjects by giving students a mathematical ability test; the subjects are three students with different mathematical abilities: high, intermediate and low. The second step was giving concept-mapping training to the chosen subjects. The last step was giving the subjects a concept-mapping task about functions. Nodes representing concepts of function were provided in the task, and the subjects had to use them in their concept maps. Based on the data analysis, the results show that the subject with high mathematical ability has formal understanding: that subject could see the connections between function concepts and arranged them into a concept map with a valid hierarchy. The subject with intermediate mathematical ability has relational understanding: that subject could arrange all the given concepts and label the links between them appropriately, though without yet representing the connections specifically.
The subject with low mathematical ability has poor understanding of functions, as seen from a concept map using only a few of the given concepts, because that subject could not see the connections between them. All subjects have instrumental understanding of the relations between the linear function concept, the quadratic function concept, and domain, codomain and range.

Keywords: concept map, concept mapping, mathematical concepts, understanding

Procedia PDF Downloads 265
1081 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can further be associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay-race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the educated assumption that individual leg times follow log-normal distributions. Our key idea is to use Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the famous German tank problem. The resulting place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. The model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. On real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the training set is only 5% of the whole data set. Numerical results also show that the model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
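
The Fenton-Wilkinson step can be sketched as moment matching: the sum of independent log-normal leg times is approximated by a single log-normal with the same mean and variance. The leg parameters below are illustrative assumptions, not Jukola 2019 estimates:

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """(mu, sigma) of the log-normal matching the mean and variance of a
    sum of independent log-normals with parameters (mus[i], sigmas[i])."""
    mus = np.asarray(mus, float)
    sigmas = np.asarray(sigmas, float)
    mean = np.exp(mus + sigmas ** 2 / 2).sum()                 # E[sum]
    var = ((np.exp(sigmas ** 2) - 1)
           * np.exp(2 * mus + sigmas ** 2)).sum()              # Var[sum], independence
    sigma2 = np.log(1 + var / mean ** 2)
    mu = np.log(mean) - sigma2 / 2
    return mu, np.sqrt(sigma2)

# Illustrative 3-leg relay; leg times log-normal in log-minutes (assumed):
legs = [(4.0, 0.15), (4.2, 0.20), (4.1, 0.18)]
mu, sigma = fenton_wilkinson(*zip(*legs))

# Sanity check against Monte Carlo changeover times:
rng = np.random.default_rng(1)
total = sum(rng.lognormal(m, s, 200_000) for m, s in legs)
print("FW mean:", np.exp(mu + sigma ** 2 / 2), "MC mean:", total.mean())
```

The matched mean is exact by construction; the log-normal shape itself is the approximation, which the paper then combines with order statistics to map changeover times to places.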

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 115
1080 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model directs attention to green businesses that comply with Earth's systems. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication within systems ecology, ecosystems are postulated to balance the basic level of spatial sustainable outcomes (i.e., project compatibility success). These instrumentalities in turn affect various aspects of the second level of spatial sustainable outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network-analysis model that calculates the prosperity of panel databases for efficiency value from 2005 to 2025. The database models a spatial structure to represent the value-orientation impact and the corresponding complexity of sustainability issues (e.g., building a consistent database for approaching the spatial structure; constructing the spatial-economic-ecological model; developing a set of sustainability indicators associated with the model; quantifying social, economic and environmental impact; using value-orientation as a set of important sustainability policy measures) and to demonstrate the reliability of the spatial structure. The structure of the spatial-ecological model is established for management schemes, from the perspective of pollutants from multiple sources, through input-output criteria. These criteria evaluate the spillover effect by conducting Monte Carlo simulations and sensitivity analysis within a unique spatial structure. The balance within "equilibrium patterns," such as collective biosphere features, has a composite index of many distributed feedback flows.
These have a dynamic structure, related to physical and chemical properties, that gradually extends into incremental patterns. While these spatial structures argue for ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational-database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 52
1079 Ionic Liquid and Chemical Denaturants Effects on the Fluorescence Properties of the Laccase

Authors: Othman Saoudi

Abstract:

In this work, we investigate the effects of chemical denaturants and synthesized ionic liquids on the fluorescence properties of laccase from Trametes versicolor. The fluorescence of laccase arises from the presence of tryptophan, whose aromatic core is responsible for absorption in the ultraviolet domain and for the emission of fluorescence photons. The effects of the ionic liquids pyrrolidinium formate ([pyrr][F]) and morpholinium formate ([morph][F]) on laccase behavior are studied at various volumetric fractions. We show that the fluorescence spectrum in [pyrr][F] presents a single band with a maximum around 340 nm and a secondary peak at 361 nm at a volumetric fraction of 20% v/v. For concentrations above 40%, the fluorescence intensity decreases and the peaks shift toward higher wavelengths. For [morph][F], the fluorescence spectrum shows a single band around 340 nm, whose principal peak decreases in intensity for concentrations above 20% v/v. From the plot of λₘₐₓ versus volumetric concentration, we determined the half-transition concentrations C1/2, equal to 42.62% and 40.91% v/v in the presence of [pyrr][F] and [morph][F], respectively. For chemical denaturation, we show that the fluorescence intensity decreases with increasing denaturant concentration, while the emission maximum shifts toward higher wavelengths. From the spectra for urea and GdmCl, we also determined the unfolding energy, ∆GD. The results show that the unfolding energy varies linearly with denaturant concentration. The half-transitions C1/2 occur at urea and GdmCl concentrations of about 3.06 and 3.17 M, respectively.
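
The linear variation of the unfolding energy with denaturant concentration is the standard linear-extrapolation model, ∆G(C) = ∆G_H2O − m·C, with C1/2 the concentration where ∆G crosses zero. A minimal sketch with illustrative data points (not the study's measurements):

```python
import numpy as np

# Linear extrapolation model assumed by the analysis above:
#   dG(C) = dG_H2O - m * C,  C1/2 is where dG(C) = 0.
# The (concentration, dG) pairs below are illustrative, not the study's data.
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # denaturant, M
dG = np.array([8.1, 4.2, 0.3, -3.9, -7.8])      # unfolding energy, kJ/mol

slope, dG_h2o = np.polyfit(conc, dG, 1)         # least-squares line
m = -slope                                      # m-value, kJ/mol/M
c_half = dG_h2o / m                             # zero crossing of dG(C)
print(f"dG_H2O = {dG_h2o:.2f} kJ/mol, m = {m:.2f} kJ/mol/M, C1/2 = {c_half:.2f} M")
```

With these assumed points the fitted C1/2 comes out near 3 M, the same order as the urea and GdmCl values reported above.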

Keywords: laccase, fluorescence, ionic liquids, chemical denaturants

Procedia PDF Downloads 499
1078 Microbial Resource Research Infrastructure: A Large-Scale Research Infrastructure for Microbiological Services

Authors: R. Hurtado-Ortiz, D. Clermont, M. Schüngel, C. Bizet, D. Smith, E. Stackebrandt

Abstract:

Microbiological resources and their derivatives are the essential raw material for the advancement of human health, agro-food, food security, biotechnology, and research and development in all life sciences. Microbial resources and their genetic and metabolic products are utilised in many areas, such as the production of healthy and functional food, the identification of new antimicrobials against emerging and resistant pathogens, fighting agricultural disease, identifying novel energy sources on the basis of microbial biomass, and screening for new active molecules for the bio-industries. The complexity of public collections, the distribution and use of living biological material (not only living material but also DNA, services, training, consultation, etc.), and the breadth of the service offer demand the coordination and sharing of policies, processes and procedures. The Microbial Resource Research Infrastructure (MIRRI), an initiative within the European Strategy Forum on Research Infrastructures (ESFRI), brings together 16 partners, including 13 European public microbial culture collections and biological resource centres (BRCs), supported by several European and non-European associated partners. The objective of MIRRI is to support innovation in microbiology by providing a one-stop shop for well-characterized microbial resources and high-quality services, on a not-for-profit basis, for biotechnology in support of microbiological research. In addition, MIRRI contributes to structuring microbial resource capacity at both the national and European levels. This will facilitate access to microorganisms for biotechnology and enhance the bio-economy in Europe. MIRRI will overcome the fragmentation of access to current resources and services, develop harmonised strategies for the delivery of associated information, ensure bio-security and other regulatory conditions for access, and promote the uptake of these resources into European research.
Data mining of the current information landscape is needed to discover potential, drive innovation, and ensure the uptake of high-quality microbial resources into research. MIRRI is in its Preparatory Phase, focusing on governance and structure, including technical, legal, governance and financial issues. MIRRI will help the Biological Resource Centres work more closely with policy makers, stakeholders, funders and researchers to deliver the resources and services needed for innovation.

Keywords: culture collections, microbiology, infrastructure, microbial resources, biotechnology

Procedia PDF Downloads 436
1077 The Changing Role of Technology-Enhanced University Library Reform in Improving College Student Learning Experience and Career Readiness – A Qualitative Comparative Analysis (QCA)

Authors: Xiaohong Li, Wenfan Yan

Abstract:

Background: While the university library is widely considered to play a critical role in fulfilling the institution's mission and in shaping students' learning experience beyond the classroom, how technology-enhanced library reform has changed college students' learning experience has not been thoroughly investigated. The purpose of this study is to explore how technology-enhanced library reform affects students' learning experience and career readiness, and further to identify the factors and conditions that enable quality learning outcomes for Chinese college students. Methodologies: This study selected the qualitative comparative analysis (QCA) method to explore the effects of technology-enhanced university library reform on college students' learning experience and career readiness. QCA is unique in explaining complex relationships among multiple factors from a holistic perspective. Compared with traditional quantitative and qualitative analysis, QCA adds quantitative logic while inheriting qualitative research's focus on the heterogeneity and complexity of samples. Shenyang Normal University (SNU) was selected as a typical comprehensive university in China, one that focuses on students' learning and application of professional knowledge and trains professionals to different levels of expertise. A total of 22 current university students and 30 graduates who joined the Library Readers Association of SNU between 2011 and 2019 were selected for semi-structured interviews. Based on the data collected from these participants, qualitative comparative analysis, including univariate necessity analysis and multi-configuration analysis, was conducted. Findings and Discussion: The QCA results indicate that the influence of technology-enhanced university library restructuring and reorganization on student learning experience and career readiness is the result of multiple factors.
Technology-enhanced library equipment and other restructured hardware meet college students' learning needs and have played an important role in improving the student learning experience and learning persistence. More importantly, the soft characteristics of technology-enhanced library reform, such as library service innovation spaces and cultural spaces, have a positive impact on students' career readiness and development. Technology-enhanced university library reform is a change not only in a building's appearance and facilities but also in library service quality and capability. The study also provides suggestions for policy, practice, and future research.

Keywords: career readiness, college student learning experience, qualitative comparative analysis (QCA), technology-enhanced library reform

Procedia PDF Downloads 71
1076 Non Enzymatic Electrochemical Sensing of Glucose Using Manganese Doped Nickel Oxide Nanoparticles Decorated Carbon Nanotubes

Authors: Anju Joshi, C. N. Tharamani

Abstract:

Diabetes is one of the leading causes of death at present and remains an important concern, as the prevalence of the disease is increasing at an alarming rate. It is therefore crucial to measure glucose levels accurately in order to develop efficient therapeutics for diabetes. Thanks to convenient and compact self-testing, continuous monitoring of glucose is now feasible. Enzyme-based electrochemical sensing of glucose is popular because of its high selectivity, but it suffers from drawbacks such as complicated purification and immobilization procedures, denaturation, high cost, and low sensitivity due to indirect electron transfer. Hence, designing a robust enzyme-free platform using transition-metal oxides remains crucial for the efficient and sensitive determination of glucose. In the present work, manganese-doped nickel oxide nanoparticles (Mn-NiO) have been synthesized onto the surface of multiwalled carbon nanotubes using a simple microwave-assisted approach for the non-enzymatic electrochemical sensing of glucose. The morphology and structure of the synthesized nanostructures were characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD). We demonstrate that the synthesized nanostructures show enormous potential for the electrocatalytic oxidation of glucose with high sensitivity and selectivity. Cyclic voltammetry and square-wave voltammetry studies indicate the superior sensitivity and selectivity of Mn-NiO decorated carbon nanotubes for the non-enzymatic determination of glucose. A linear response between peak current and glucose concentration was found over the range 0.01 μM-10000 μM, which suggests the potential efficacy of Mn-NiO decorated carbon nanotubes for the sensitive determination of glucose.
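
The linear peak-current response is what makes a simple calibration curve possible for such a sensor. A minimal sketch follows; the current values and the calibration law are entirely assumed for illustration (the abstract reports only the linear range, 0.01-10000 μM):

```python
import numpy as np

# Hypothetical calibration points within the reported linear range.
# Both the sensitivity (0.042 uA per uM) and the background current
# (0.31 uA) are assumed values, not measured data.
conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0])  # glucose, uM
peak_i = 0.042 * conc + 0.31                               # peak current, uA

slope, intercept = np.polyfit(conc, peak_i, 1)             # calibration fit

def glucose_from_current(i_measured):
    """Invert the calibration line to read an unknown sample."""
    return (i_measured - intercept) / slope

# Reading back a known point recovers its concentration (~500 uM here):
print(glucose_from_current(0.042 * 500 + 0.31))
```

In practice the fit would use replicate measurements with noise, and the sensitivity (slope per unit electrode area) is the figure of merit such papers report.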

Keywords: diabetes, glucose, Mn-NiO decorated carbon nanotubes, non-enzymatic

Procedia PDF Downloads 225
1075 Dividend Policy in Family Controlling Firms from a Governance Perspective: Empirical Evidence in Thailand

Authors: Tanapond S.

Abstract:

Typically, most controlling firms are family firms, which are widespread and important for economic growth, particularly in the Asia-Pacific region. The unique characteristics of controlling families tend to play an important role in determining corporate policies such as dividend policy. Given the complexity of the family-business phenomenon, the empirical evidence has been unclear on how the families behind business groups influence dividend policy in Asian markets, where cross-shareholdings and pyramidal structures are prevalent. Dividend policy, as an important determinant of firm value, can also be used to examine the effect of controlling families behind business groups on strategic decision-making from a governance and agency-problem perspective. The purpose of this paper is to investigate the impact of ownership structure and concentration, which are influential internal corporate governance mechanisms in family firms, on dividend decision-making. Using panel data and constructing a unique dataset of family ownership and control from hand-collected information on the non-financial companies listed on the Stock Exchange of Thailand (SET) between 2000 and 2015, the study finds that family firms with large stakes distribute higher dividends than family firms with small stakes. Family ownership can mitigate agency problems and the expropriation of minority investors in family firms. To provide insight into the distinction between ownership rights and control rights, this study examines specific firm characteristics, including the degree of concentration of controlling shareholders, by classifying family ownership into different categories. The results show that controlling families with a large deviation between voting rights and cash-flow rights have more power and are associated with lower dividend payments. These situations worsen when the second blockholders are also families.
To the best of the researcher's knowledge, this study is the first to examine the association between family firms' characteristics and dividend policy from a corporate governance perspective in Thailand, an environment with weak investor protection and high ownership concentration. The research also underscores the importance of family control, especially in a context in which family business groups and pyramidal structures are prevalent. As a result, academics and policy makers can develop markets and corporate policies that mitigate agency problems.

Keywords: agency theory, dividend policy, family control, Thailand

Procedia PDF Downloads 273
1074 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of these products. Today's HTS materials are mature and commercially promising but require manufacturing attention. In particular, because of the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. The damage potential is highest during peak operation, when winding stress magnifies operational stress. Another challenge is that operational parameters, such as magnetic field alignment, affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Because of winding limitations, current HTS magnets are restricted to simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed-power cables. Robotic production is required for transposition, the periodic swapping of cable conductors into precise positions, which provides the minimized reactance that power utilities require. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to zero resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and the absence of long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries whose electrical geometry is equivalent to that of commercially available conventional-conductor devices.

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 133
1073 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading

Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth

Abstract:

The ultimate load analysis of RC pile groups has assumed great significance under liquefying soil conditions, especially after post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design levels of pile axial loading. Soil liquefaction is considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation, in turn, is related to the liquefaction potential of the site and the magnitude of the seismic shaking. Since the piles in the group can reach extreme deflections and rotations at increased levels of lateral loading, the inelastic behavior of the pile cross-section is modeled precisely, considering the complete stress-strain behavior of concrete, with and without confinement, and of the reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of individual piles is considered in the overall collapse modes. The model is analysed using the Riks method in finite element software to capture the post-buckling behavior and plastic collapse of the piles. The results confirm the failure modes predicted by centrifuge tests on pile groups reported by other researchers, although the pile material used differs significantly from that of the simulation model. Extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.

Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group

Procedia PDF Downloads 149
1072 Synthesis and Characterization of CNPs Coated Carbon Nanorods for Cd2+ Ion Adsorption from Industrial Waste Water and Reusable for Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga

Abstract:

This study reports a new approach for preparing a carbon nanoparticle-coated cerium oxide nanorod (CNPs/CeONRs) nanocomposite and for reusing the spent Cd2+-CNPs/CeONRs adsorbent for latent fingerprint (LFP) detection after removing Cd2+ ions from aqueous solution. The CNPs/CeONRs nanocomposite was prepared from CNPs and CeONRs by an adsorption process and then characterized using UV-visible spectroscopy, Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), zeta-potential measurements, and X-ray photoelectron spectroscopy (XPS). The average size of the CNPs was 7.84 nm. The synthesized CNPs/CeONRs nanocomposite proved to be a good adsorbent for Cd2+ removal from water, with an optimum pH of 8 and a dosage of 0.5 g/L. The results were best described by the Langmuir model, which indicated a linear fit (R² = 0.8539-0.9969). The adsorption capacity of the CNPs/CeONRs nanocomposite showed the best removal of Cd2+ ions, with qm = 32.28-59.92 mg/g, when compared with previous reports. The adsorption followed pseudo-second-order kinetics and an intraparticle diffusion process. The ∆G and ∆H values indicated spontaneity at high temperature (40 °C) and the endothermic nature of the adsorption process. The CNPs/CeONRs nanocomposite therefore showed potential as an effective adsorbent. Furthermore, the metal-loaded adsorbent Cd2+-CNPs/CeONRs proved sensitive and selective for LFP detection on various porous substrates. Hence, the Cd2+-CNPs/CeONRs nanocomposite can be reused as a good fingerprint-labelling agent in LFP detection, avoiding the secondary environmental pollution that disposal of the spent adsorbent would cause.
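
The Langmuir analysis behind the reported qm values can be sketched as follows. The isotherm is qe = qm·KL·Ce/(1 + KL·Ce), commonly fitted via the linearization Ce/qe = Ce/qm + 1/(KL·qm); the equilibrium data below are generated from assumed constants, not the study's measurements:

```python
import numpy as np

# Langmuir isotherm fit via the linearized form Ce/qe = Ce/qm + 1/(KL*qm).
# Data are synthesized from assumed constants (qm = 45 mg/g, KL = 0.05 L/mg),
# so the fit recovers them exactly; real data would scatter around the line.
qm_true, KL_true = 45.0, 0.05
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # equilibrium conc, mg/L
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)      # adsorbed amount, mg/g

slope, intercept = np.polyfit(Ce, Ce / qe, 1)         # linear Langmuir plot
qm_fit = 1.0 / slope                                  # slope = 1/qm
KL_fit = slope / intercept                            # intercept = 1/(KL*qm)
print(f"qm = {qm_fit:.2f} mg/g, KL = {KL_fit:.4f} L/mg")
```

The fitted qm is the monolayer capacity figure that the abstract compares across adsorbents (32.28-59.92 mg/g here).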

Keywords: Cd2+-CNPs/CeONRs nanocomposite, cadmium adsorption, isotherm, kinetics, thermodynamics, reusable for latent fingerprint detection

Procedia PDF Downloads 108
1071 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to help organizations with large application portfolios and limited security testing capacity prioritize their security activities. ML-based approaches offer benefits such as confidence scores, tuning of false-positive and false-negative rates, and automated feedback. An initial approach that used natural language processing (NLP) techniques to extract features achieved 86% accuracy during training but overfit and performed poorly on unseen datasets during testing. To address this, the study proposes using the abstract syntax tree (AST) of Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets, which then train a machine learning classifier for vulnerability prediction. The study evaluates the proposed methodology on two datasets and compares the results with existing approaches. On the Devign dataset, the method reached 60% accuracy in predicting vulnerable code snippets while resisting overfitting; on the Juliet Test Suite, it predicted specific vulnerability classes such as OS-Command Injection, Cryptographic weaknesses, and Cross-Site Scripting. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code are useful for vulnerability prediction, and that the approach has potential for automated intelligent analysis of source code, including vulnerability prediction on unseen code. By contrast, state-of-the-art models using NLP techniques and CNN models with ensemble modelling did not generalize well to unseen data and suffered from overfitting. Predicting vulnerabilities in source code with machine learning still poses challenges, however, including the high dimensionality and complexity of source code, imbalanced datasets, and the identification of specific vulnerability types; future work will address these challenges and expand the scope of the research.
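The path-context idea behind the Code2Vec representation described above can be sketched in a few lines. The abstract works with Java and C++ ASTs; the illustrative sketch below instead uses Python's built-in `ast` module and node type names in place of token values, purely to show how leaf-to-leaf path-contexts are extracted.

```python
import ast
import itertools

def leaf_paths(tree):
    """Collect every root-to-leaf path of AST node type names."""
    paths = []
    def walk(node, prefix):
        prefix = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix)
        for child in children:
            walk(child, prefix)
    walk(tree, [])
    return paths

def path_contexts(source):
    """Yield (leaf_a, path, leaf_b) triples: for each pair of AST leaves,
    the path runs up from one leaf to their lowest common ancestor and
    down to the other, as in Code2Vec-style path-contexts."""
    paths = leaf_paths(ast.parse(source))
    contexts = []
    for a, b in itertools.combinations(paths, 2):
        i = 0  # length of the shared prefix; common ancestor sits at i - 1
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        path = list(reversed(a[i - 1:])) + b[i:]
        contexts.append((a[-1], "|".join(path), b[-1]))
    return contexts
```

The full Code2Vec model embeds each such triple and aggregates the embeddings with attention into a single code vector; the triples extracted here would form its input vocabulary.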

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 99
1070 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test

Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati

Abstract:

Rifampicin (RIF) is a semisynthetic antibiotic derived from rifamycin B, produced by Streptomyces mediterranei, and has been used worldwide as a first-line drug throughout tuberculosis therapy. This study aimed to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma, and to apply it in a bioequivalence study. Chromatographic separation was achieved on an RP-C18 column (LaChrom Hitachi, 250 x 4.6 mm, 5 μm) using a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a UV spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. Specificity was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard regression curve was linear (r > 0.999) over a concentration range of 20.0–100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed good reproducibility (R.S.D. 2.98% and 1.13%, respectively). The method is therefore suitable for routine analysis of rifampicin in human plasma and for bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of rifampicin tablets in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24), and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 h, 29.16 ± 4.05 µg·h/mL, and 29.41 ± 4.07 µg·h/mL, respectively. For the reference formulation, the corresponding values were 5.04 ± 0.54 µg/mL, 1.31 h, 27.20 ± 3.98 µg·h/mL, and 27.49 ± 4.01 µg·h/mL.
In the bioequivalence study, the 90% confidence intervals for the test/reference formulation ratios of the log-transformed Cmax and AUC(0-24) were 97.96–129.48% and 99.13–120.02%, respectively. According to the bioequivalence test guidelines of the European Commission and the European Medicines Agency, it was concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
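The reported 90% confidence intervals follow the standard procedure of comparing log-transformed PK metrics and back-transforming the bounds. As a minimal sketch (with made-up Cmax values and a caller-supplied t critical value, not the study's data), assuming a paired design in which each subject receives both formulations:

```python
import math
from statistics import mean, stdev

def ratio_90ci(test, ref, t_crit):
    """90% CI (as percentages) for the test/reference geometric-mean ratio
    of a PK metric such as Cmax, computed on paired log-transformed values."""
    d = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    m = mean(d)                              # mean log-ratio
    se = stdev(d) / math.sqrt(len(d))        # standard error of the mean
    return (math.exp(m - t_crit * se) * 100,
            math.exp(m + t_crit * se) * 100)

# made-up Cmax values (ug/mL) for four subjects; t_crit from a t-table (df = 3)
lo, hi = ratio_90ci([5.2, 6.1, 4.8, 5.5], [5.0, 5.8, 5.1, 5.3], t_crit=2.353)
```

Under the EMA guideline, bioequivalence is accepted when both bounds of the 90% CI fall within 80.00–125.00%.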

Keywords: validation, HPLC, plasma, bioequivalence

Procedia PDF Downloads 284
1069 The Moderating Roles of Bedtime Activities and Anxiety and Depression in the Relationship between Attention-Deficit/Hyperactivity Disorder and Sleep Problems in Children

Authors: Lian Tong, Yan Ye, Qiong Yan

Abstract:

Background: Children with attention-deficit/hyperactivity disorder (ADHD) often experience sleep problems, but the comorbidity mechanism has not been sufficiently studied. This study aimed to determine the comorbidity of ADHD and sleep problems as well as the moderating effects of bedtime activities and depression/anxiety symptoms on the relationship between ADHD and sleep problems. Methods: We recruited 934 primary students from third to fifth grade and their parents by stratified random sampling from three primary schools in Shanghai, China. This study used parent-reported versions of the ADHD Rating Scale-IV, Children’s Sleep Habits Questionnaire, and Achenbach Child Behavior Checklist. We used hierarchical linear regression analysis to clarify the moderating effects of bedtime activities and depression/anxiety symptoms. Results: We found that children with more ADHD symptoms had shorter sleep durations and more sleep problems on weekdays. Screen time before bedtime strengthened the relationship between ADHD and sleep-disordered breathing. Children with more screen time were more likely to have sleep onset delay, while those with less screen time had more sleep onset problems with increasing ADHD symptoms. The high bedtime eating group experienced more night waking with increasing ADHD symptoms compared with the low bedtime eating group. Anxiety/depression exacerbated total sleep problems and further interacted with ADHD symptoms to predict sleep length and sleep duration problems. Conclusions: Bedtime activities and emotional problems had important moderating effects on the relationship between ADHD and sleep problems. These findings indicate that appropriate bedtime management and emotional management may reduce sleep problems and improve sleep duration for children with ADHD symptoms.
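The moderating effects described above are typically tested as interaction terms in regression, or equivalently as simple slopes compared across moderator levels. A minimal illustration on synthetic data (not the study's dataset, which used hierarchical linear regression on CSHQ scores): if screen time amplifies the ADHD-sleep-problem association, the ADHD slope estimated within a high-screen-time group exceeds the slope within a low-screen-time group.

```python
import random
from statistics import mean

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(0)
n = 2000
adhd = [random.gauss(0, 1) for _ in range(n)]    # ADHD symptom score
screen = [random.gauss(0, 1) for _ in range(n)]  # bedtime screen time
# synthetic sleep-problem score with a moderation (interaction) effect:
# screen time strengthens the ADHD -> sleep-problem association
sleep = [0.4 * a + 0.3 * a * s + random.gauss(0, 0.1)
         for a, s in zip(adhd, screen)]

hi_group = [(a, y) for a, s, y in zip(adhd, screen, sleep) if s > 0]
lo_group = [(a, y) for a, s, y in zip(adhd, screen, sleep) if s <= 0]
slope_hi = slope(*zip(*hi_group))   # steeper: screen time amplifies the effect
slope_lo = slope(*zip(*lo_group))
```

A full moderation analysis would instead fit sleep = b0 + b1*ADHD + b2*screen + b3*(ADHD x screen) and test b3; the group-wise slopes above are the simple-slopes view of the same interaction.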

Keywords: ADHD, sleep problems, anxiety/depression, bedtime activities, children

Procedia PDF Downloads 196