Search results for: final assembly
1763 QSAR and Anti-Depressant Studies of Some Novel Phenothiazine Derivatives
Authors: D. L. Tambe, S. Dighe Nachiket
Abstract:
Objective: Depression is a common but serious illness, and phenothiazine derivatives show a prominent effect against it; hence this work was undertaken to validate this use scientifically. Material and Methods: Phenothiazine derivatives were synthesized by the substitution of various groups, but the basic synthesis scheme starts with the synthesis of 4-(cyclohexylidene)benzoic acid from PABA. A further six-step synthesis then yields 3-(10H-phenothiazin-2-yl)-N,5-diphenyl-4H-1,2,4-triazol-4-amine, the final product. The antidepressant activity of all synthesized compounds was evaluated by the despair swim test using Sprague Dawley rats, with the standard drug imipramine as the control. In the despair swim test, all the synthesized derivatives showed antidepressant activity. Results: Among all the phenothiazine derivatives, four compounds, characterized by their ¹H NMR signals (6.6-7.2 (14H, phenyl), 9.43 (1H, OH), 8.50 (1H, NH phenothiazine); 6.85-8.21 (14H, phenyl), 8.50 (1H, NH phenothiazine), 11.82 (1H, OH); 6.6-7.2 (8H, phenyl), 9.43 (1H, OH), 8.50 (1H, NH phenothiazine), 4.2 (1H, NH); and 6.85-8.21 (8H, phenyl), 8.50 (1H, NH phenothiazine), 3.9 (1H, NH), 11.82 (1H, OH)), showed significant antidepressant activity compared with the control drug imipramine. Conclusion: Several novel phenothiazine derivatives show potent antidepressant activity and could play a beneficial role in human health in the treatment of depression.
Keywords: antidepressant activities, despair swim test, phenothiazine, Sprague Dawley rats
Procedia PDF Downloads 382
1762 Non-Standard Monetary Policy Measures and Their Consequences
Authors: Aleksandra Nocoń (Szunke)
Abstract:
The study is a review of the literature concerning the consequences of the non-standard monetary policy measures that central banks use during unconventional periods threatening the stability of the banking sector. Particular attention is paid to the effects of non-standard monetary policy tools on financial markets, although the empirical evidence about their effects and real consequences for financial markets is still inconclusive. The main aim of the study is to survey the consequences of standard and non-standard monetary policy instruments implemented during the global financial crisis in the United States, the United Kingdom and the euro area, with particular attention to the results for the stabilization of global financial markets. The study analyses the consequences for short- and long-term market interest rates, interbank interest rates and the LIBOR-OIS spread. The study consists mainly of an empirical review indicating the impact of the implementation of these tools on financial markets. The following research methods were used in the study: literature studies, including domestic and foreign literature, cause-and-effect analysis and statistical analysis.
Keywords: asset purchase facility, consequences of monetary policy instruments, non-standard monetary policy, quantitative easing
Procedia PDF Downloads 331
1761 Evaluation of Bearing Capacity of Vertically Loaded Strip Piled-Raft Embedded in Soft Clay
Authors: Seyed Abolhasan Naeini, Mohammad Hosseinzade
Abstract:
Settlement and bearing capacity of a piled raft are two important issues for the foundations of structures built in coastal areas from the geotechnical engineering point of view. A strip piled raft, as a load-carrying system, can be used to reduce possibly extensive consolidation settlements and improve the bearing capacity of structures on soft ground. The aim of this research was to evaluate the efficiency of a strip piled raft embedded in soft clay. The bearing capacity of the strip piled raft foundation is evaluated numerically in two cases: in the first case, the cap is placed directly on the ground surface, and in the second, the cap is placed above the ground. Since the geotechnical parameters of the soft clay are considered at a low level, low bearing capacity is expected. The length, diameter and centre-to-centre spacing of the piles are the parameters varied in this research to find out how they affect the bearing capacity. Results indicate that increasing the length and the diameter of the piles increases the bearing capacity. Complementary results will be presented in the final version of the paper.
Keywords: soft clay, strip piled raft, bearing capacity, settlement
Procedia PDF Downloads 307
1760 Analyzing Irbid’s Food Waste as Feedstock for Anaerobic Digestion
Authors: Assal E. Haddad
Abstract:
Food waste samples from Irbid were collected from 5 different sources for 12 weeks to characterize their composition in terms of four food categories: rice, meat, fruits and vegetables, and bread. The average composition was 39% rice, 6% meat, 34% fruits and vegetables, and 23% bread. Methane yield was also measured for all food types and was found to be 362, 499, 352, and 375 mL/g VS for rice, meat, fruits and vegetables, and bread, respectively. A representative food waste sample was created to test the actual methane yield and compare it to the calculated one. The actual methane yield (414 mL/g VS) was greater than the value (377 mL/g VS) calculated from the food type proportions and their specific methane yields. This study emphasizes the effect of the types of food and their proportions in food waste on the final biogas production. The findings provide representative methane emission factors for Irbid's food waste, which represents as much as 68% of total Municipal Solid Waste (MSW) in Irbid, and also indicate the energy and economic value within the solid waste stream in Irbid.
Keywords: food waste, solid waste management, anaerobic digestion, methane yield
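The calculated yield quoted above is a composition-weighted average of the per-category yields. A minimal sketch reproducing that figure from the reported values:

```python
# Weighted-average methane yield of a mixed food waste stream.
# Fractions and per-category yields (mL CH4 per g VS) are the values
# reported in the abstract above.

composition = {"rice": 0.39, "meat": 0.06, "fruits_vegetables": 0.34, "bread": 0.23}
yields = {"rice": 362, "meat": 499, "fruits_vegetables": 352, "bread": 375}

calculated_yield = sum(composition[k] * yields[k] for k in composition)
print(round(calculated_yield))  # ~377 mL CH4 per g VS, matching the reported calculated value
```

The gap between this 377 mL/g VS estimate and the measured 414 mL/g VS is exactly the discrepancy the study highlights.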
Procedia PDF Downloads 204
1759 An Evolutionary Multi-Objective Optimization for Airport Gate Assignment Problem
Authors: Seyedmirsajad Mokhtarimousavi, Danial Talebi, Hamidreza Asgari
Abstract:
The Gate Assignment Problem (GAP) is one of the most substantial issues in airport operation. In principle, GAP intends to maintain the maximum capacity of the airport through the best possible allocation of the resources (gates) in order to reach the optimum outcome. The problem involves a wide range of dependent and independent resources and their limitations, which add to the complexity of GAP from both theoretical and practical perspectives. In this study, GAP was mathematically formulated as a three-objective problem. The preliminary goal of the multi-objective formulation was to address a higher number of objectives that can be simultaneously optimized and therefore increase the practical efficiency of the final solution. The problem is solved by applying the second version of the Non-dominated Sorting Genetic Algorithm (NSGA-II). Results showed that the proposed mathematical model could address most of the major criteria in the decision-making process in airport management by minimizing both airport/airline cost and passenger walking distance time. Moreover, the proposed approach could properly find acceptable possible answers.
Keywords: airport management, gate assignment problem, mathematical modeling, genetic algorithm, NSGA-II
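NSGA-II's defining step is non-dominated sorting, which groups candidate gate assignments into Pareto fronts across the competing objectives. A library-free sketch of that step (the objective values are toy numbers, not from the study):

```python
def dominates(a, b):
    """True if solution a dominates b (minimization: no worse in every
    objective and strictly better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group solutions into successive Pareto fronts, the sorting step
    at the heart of NSGA-II."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy gate-assignment objectives: (airport/airline cost, walking time)
objs = [(1, 5), (2, 3), (3, 1), (4, 4), (2, 2)]
print(non_dominated_sort(objs))  # [[0, 2, 4], [1], [3]]
```

The first front is the Pareto-optimal set; NSGA-II then uses crowding distance within fronts to preserve diversity, a step omitted here.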
Procedia PDF Downloads 299
1758 Human Mesenchymal Stem Cells as a Potential Source for Cell Therapy in Liver Disorders
Authors: Laila Montaser, Hala Gabr, Maha El-Bassuony, Gehan Tawfeek
Abstract:
Orthotopic liver transplantation (OLT) is the final procedure for both end-stage and metabolic liver diseases. Hepatocyte transplantation is an alternative to OLT, but the sources of hepatocytes are limited. Bone marrow mesenchymal stem cells (BM-MSCs) can differentiate into hepatocyte-like cells and are a potential alternative source of hepatocytes. MSCs from bone marrow are a promising target population, as they are capable of differentiating along multiple lineages and, at least in vitro, have significant expansion capability. MSCs from bone marrow may have the potential to differentiate in vitro and in vivo into hepatocytes. Our study examined whether mesenchymal stem cells (MSCs), which are stem cells originating from human bone marrow, are able to differentiate into functional hepatocyte-like cells in vitro. Our aim was to investigate the differentiation potential of BM-MSCs into hepatocyte-like cells. Adult stem cell therapy could address degenerative disorders, including liver disease.
Keywords: bone marrow, differentiation, hepatocyte, stem cells
Procedia PDF Downloads 520
1757 Achieving Environmentally Sustainable Supply Chain in Textile and Apparel Industries
Authors: Faisal Bin Alam
Abstract:
Most manufacturing entities leave a negative footprint on nature that demands due attention. Textile industries have one of the longest supply chains and bear the liability of a significant environmental impact on our planet. Issues of environmental safety, scarcity of energy and resources, and demand for eco-friendly products have driven research to search for safe and suitable alternatives in apparel processing. Consumer awareness, increased pressure from fashion brands and actions from local legislative authorities have somewhat improved the practices. The objective of this paper is to reveal the best selection of raw materials and methods of production, taking environmental sustainability into account. The methodology used in this study is exploratory in nature, based on personal experience, field visits to factories in Bangladesh and secondary sources. Findings are limited to exploring better alternatives to the conventional operations of readymade garment manufacturing, from fibre selection to final product delivery, thereby showing some ways of achieving a greener environment in the supply chain of a clothing industry.
Keywords: textile and apparel, environmental sustainability, supply chain, production, clothing
Procedia PDF Downloads 137
1756 School Autonomy in the United Kingdom: A Correlational Study Applied to English Principals
Authors: Pablo Javier Ortega-Rodriguez, Francisco Jose Pozuelos-Estrada
Abstract:
Recently, there has been a renewed interest in school autonomy in the United Kingdom and its impact on students' outcomes. English principals have a pivotal role in decision-making. The aim of this paper is to explore the correlation between the type of school (public or private) and the responsibilities of the English principals who participated in PISA 2015. The final sample consisted of 419 principals. Descriptive data (percentages and means) were generated for the variables related to professional autonomy. Pearson's chi-square test was used to determine whether there is an association between the type of school and principals' responsibilities for relevant tasks. Statistical analysis was performed using SPSS software, version 22. Findings suggest a significant correlation between the type of school and principals' responsibility for firing teachers and formulating the school budget. This study confirms that the type of school is not associated with principals' responsibility for choosing which textbooks are used at school. The present study establishes a quantitative framework for defining four models of professional autonomy and some proposals to improve school autonomy in the United Kingdom.
Keywords: decision making, principals, professional autonomy, school autonomy
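Pearson's chi-square test on a 2x2 table (school type vs. whether the principal holds a given responsibility) can be computed directly from the cell counts. A sketch with hypothetical counts: the marginals sum to the reported 419 principals, but the split itself is invented for illustration, not taken from PISA 2015:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson's chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] via the shortcut formula
    n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = public/private school,
# columns = responsible / not responsible for formulating the budget
stat = chi_square_2x2(120, 80, 150, 69)
print(round(stat, 2))  # compare against the chi-square critical value at df=1
```

With one degree of freedom, a statistic above 3.84 would indicate significance at the 5% level; in practice SPSS (as used in the study) reports the exact p-value.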
Procedia PDF Downloads 797
1755 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine
Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour
Abstract:
Intrusion detection systems (IDSs) are the main components of network security. These systems analyze network events for intrusion detection. An IDS is designed by training on normal traffic data or attack data, and machine learning methods are among the best ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained by the least squares support vector machine algorithm (LS-SVM). The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated in order. The features remaining from this stage, which yield the highest accuracy in the LS-SVM, are selected as the final features. Compared with similar articles that have examined selected features in a least squares support vector machine model, the features obtained are better in terms of accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset.
Keywords: decision tree, feature selection, intrusion detection system, support vector machine
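The selection loop described above — rank features by predictor importance, drop the least important one by one, and keep the subset that scores best — can be sketched as follows. The feature names, importance scores, and the evaluation function are hypothetical stand-ins for the C5.0 importances and the LS-SVM accuracy used in the study:

```python
# Hypothetical predictor-importance scores for a few UNSW-NB15-style features
importances = {"dur": 0.31, "sbytes": 0.24, "dbytes": 0.18, "ttl": 0.05, "state": 0.22}

def evaluate(features):
    """Placeholder for training an LS-SVM on the feature subset and
    returning held-out accuracy; here a toy proxy (sum of importances)."""
    return sum(importances[f] for f in features)

# Rank features from most to least important
ranked = sorted(importances, key=importances.get, reverse=True)

# Drop the least important features one at a time, keep the best-scoring subset
best_subset, best_score = None, -1.0
for k in range(len(ranked), 0, -1):
    subset = ranked[:k]
    score = evaluate(subset)
    if score > best_score:
        best_subset, best_score = subset, score
print(best_subset)
```

In the real pipeline, `evaluate` would retrain the LS-SVM per subset, so a subset smaller than the full feature set can win when the pruned features were adding noise.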
Procedia PDF Downloads 265
1754 Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object
Authors: Yadpiroon Onmek, Jean Triboulet, Sebastien Druon, Bruno Jouvencel
Abstract:
The objective of this paper is to develop a 3D underwater reconstruction of an archaeological object based on the fusion between a sonar system and a stereo camera system. The underwater images are obtained from a calibrated camera system. Multiple image pairs are used as input, and we first address the image processing problem by applying well-known filters to improve the quality of the underwater images. Features of interest between image pairs are selected by well-known methods: a FAST detector and FLANN-based descriptor matching. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are matched by triangulation to produce local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The structure-from-motion (SfM) technique is used to build the global sparse point clouds. Finally, the ICP method is used to fuse the sonar information with the stereo model. The final 3D models were validated by comparing measurements with the real object.
Keywords: 3D reconstruction, archaeology, fusion, stereo system, sonar system, underwater
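The triangulation step that turns matched inlier pixels into sparse 3D points can be sketched with the standard linear (DLT) method, assuming known 3x4 projection matrices from the calibrated stereo rig. The camera poses and test point below are invented for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a matched pixel
    pair (normalized image coordinates) and two 3x4 projection matrices:
    build A from x * P[2] - P[row] rows and take the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # homogeneous solution (smallest singular value)
    return X[:3] / X[3]      # back to Euclidean coordinates

# Toy stereo rig: first camera at the origin, second translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]   # project into camera 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]   # project into camera 2
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

Repeating this over every RANSAC inlier pair yields the local sparse point cloud described in the abstract; real pipelines typically use a library routine such as OpenCV's triangulation for this.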
Procedia PDF Downloads 299
1753 Project Paulina: A Human-Machine Interface for Individuals with Limited Mobility and Conclusions from Research and Development
Authors: Radoslaw Nagay
Abstract:
The Paulina Project aims to address the challenges faced by immobilized individuals, such as those with multiple sclerosis, muscle dystrophy, or spinal cord injuries, by developing a flexible hardware and software solution. This paper presents the research and development efforts of our team, which commenced in 2019 and is now in its final stage. Recognizing the diverse needs and limitations of individuals with limited mobility, we conducted in-depth testing with a group of 30 participants. The insights gained from these tests led to the complete redesign of the system. Our presentation covers the initial project ideas, observations from in-situ tests, and the newly developed system that is currently under construction. Moreover, in response to the financial constraints faced by many disabled individuals, we propose an affordable business model for the future commercialization of our invention. Through the Paulina Project, we strive to empower immobilized individuals, providing them with greater independence and improved quality of life.
Keywords: UI, human-machine interface, social inclusion, multiple sclerosis, muscular dystrophy, spinal cord injury, quadriplegic
Procedia PDF Downloads 70
1752 Developing an Accurate AI Algorithm for Histopathologic Cancer Detection
Authors: Leah Ning
Abstract:
This paper discusses the development of a machine learning algorithm that accurately detects metastatic breast cancer (cancer that has spread from its site of origin) in selected images that come from pathology scans of lymph node sections. Developing an accurate artificial intelligence (AI) algorithm would help significantly in breast cancer diagnosis, since manual examination of lymph node scans is both tedious and often highly subjective. The use of AI in the diagnosis process provides a much more straightforward, reliable, and efficient method for medical professionals and would enable faster diagnosis and, therefore, more immediate treatment. The overall approach was to train a convolutional neural network (CNN) on a set of pathology scan data and use the trained model to classify a new scan as benign or malignant, outputting a 0 or a 1, respectively. The final model's prediction accuracy is very high, with 100% on the training set and over 70% on the test set. Achieving such high accuracy with an AI model is significant for medical pathology and cancer detection. Having AI as a new tool capable of quick detection will greatly help medical professionals and patients suffering from cancer.
Keywords: breast cancer detection, AI, machine learning, algorithm
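A CNN's forward pass for this binary task reduces to convolution, a nonlinearity, pooling, and a sigmoid threshold at 0.5. A deliberately tiny NumPy sketch of that pipeline — the kernel weights are hand-picked rather than trained, and the "scan" is a synthetic patch, so this only illustrates the mechanics, not the study's model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation: the basic building block of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict(img, kernel, bias=0.0):
    """Convolve, ReLU, global-average-pool, sigmoid, then threshold at 0.5
    to produce the 0 (benign) / 1 (malignant) label described above."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # ReLU feature map
    score = feat.mean() + bias                    # global average pooling
    prob = 1.0 / (1.0 + np.exp(-score))           # sigmoid probability
    return int(prob >= 0.5)

# Synthetic 'scan': a vertical-edge kernel responds to a bright patch
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
print(predict(img, kernel, bias=-0.01))
```

A real histopathology CNN stacks many such layers with learned weights and is trained by backpropagation on labeled scans; the thresholded sigmoid output is what yields the 0/1 labels the abstract describes.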
Procedia PDF Downloads 91
1751 Influence of Cure Degree in GO and CNT-Epoxy Nanocomposites
Authors: Marina Borgert Moraes, Wesley Francisco, Filipe Vargas, Gilmar Patrocínio Thim
Abstract:
In recent years, carbon nanotubes (CNTs) and graphene oxide (GO), especially functionalized ones, have been added to epoxy resin in order to increase the mechanical, electrical and thermal properties of nanocomposites. However, it is still unknown how the presence of these nanoparticles influences the curing process and the final mechanical properties. In this work, the kinetic and mechanical properties of the nanocomposites were analyzed, where the kinetic process was followed by DSC and the mechanical properties by DMA. Initially, CNTs were annealed at high temperature (1800 °C) under a vacuum atmosphere, followed by a chemical treatment using acids and ethylenediamine. GO was synthesized through a chemical route, washed clean, dried and ground to #200. The presence of functional groups on the CNT and GO surfaces was confirmed by XPS spectra and FT-IR. Then, epoxy resin, nanoparticles and acetone were mixed by sonication in order to obtain the composites. DSC analyses were performed on samples with different curing cycles (1 h at 80 °C + 2 h at 120 °C; 3 h at 80 °C + 2 h at 120 °C; 5 h at 80 °C) and samples with different times at constant temperature (120 °C). Results showed that the kinetic process and the mechanical strength are strongly dependent on the presence of graphene and functionalized CNTs in the nanocomposites.
Keywords: carbon nanotube, epoxy resin, graphene oxide, nanocomposite
Procedia PDF Downloads 318
1750 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning
Authors: Yanwen Li, Shuguo Xie
Abstract:
In electromagnetic imaging, because the system is diffraction-limited, pixel values change slowly near the edges of image targets and also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. First, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is checked before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, and the final segmentation results obtained are very close to the gradient target in the original image. Both the experimental results and the simulated results show that the segmentation results are very accurate: the Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
Keywords: gradient image, segmentation and extraction, mean-shift algorithm, dictionary learning
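The Dice coefficient used to score the results measures the overlap between a binary segmentation mask and the ground truth: 2|A∩B| / (|A| + |B|). A minimal sketch with synthetic masks:

```python
import numpy as np

def dice(seg, truth):
    """Dice coefficient between a binary segmentation mask and ground
    truth: 2 * |intersection| / (|seg| + |truth|), in [0, 1]."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# Synthetic example: a 16-pixel square target, 12 of which are segmented
truth = np.zeros((6, 6), dtype=int); truth[1:5, 1:5] = 1
seg = np.zeros((6, 6), dtype=int);   seg[2:5, 1:5] = 1
print(round(dice(seg, truth), 3))  # 2*12 / (12 + 16) = 0.857
```

A perfect segmentation scores 1.0, so a 70-80% Dice improvement over the Mean-Shift-only baseline is a substantial gain in overlap quality.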
Procedia PDF Downloads 267
1749 Inadequate Requirements Engineering Process: A Key Factor for Poor Software Development in Developing Nations: A Case Study
Authors: K. Adu Michael, K. Alese Boniface
Abstract:
Developing reliable and sustainable software products is today a big challenge among up-and-coming software developers in Nigeria. The ability to develop the comprehensive problem statement needed to execute a proper requirements engineering process is missing. Describing the 'what' of a system in one document, written in a natural language, is a major step in the overall process of software engineering. Requirements engineering is a process used to discover, analyze and validate system requirements, and it is needed to reduce software errors at the early stages of software development. The importance of each of the steps in requirements engineering is clearly explained in the context of using a detailed problem statement from the client/customer to get an overview of an existing system along with expectations from the new system. This paper identifies an inadequate requirements engineering process as the major cause of poor software development in developing nations, using a case study of final-year computer science students at a tertiary-education institution in Nigeria.
Keywords: client/customer, problem statement, requirements engineering, software developers
Procedia PDF Downloads 407
1748 Three-Stage Mining Metals Supply Chain Coordination and Product Quality Improvement with Revenue Sharing Contract
Authors: Hamed Homaei, Iraj Mahdavi, Ali Tajdin
Abstract:
One of the main concerns of miners is to increase the quality level of their products, because the price of mining metals depends on their quality level; however, increasing the quality level of these products has different costs at different levels of the supply chain, and these costs usually increase after the extractor level. This paper studies the coordination of a decentralized three-level supply chain with one supplier (extractor), one mineral processor and one manufacturer, in which the cost of increasing the product quality level is higher at the processor level than at the supplier level, and higher at the manufacturer level than at the processor level. We identify the optimal product quality level for each supply chain member by designing a revenue sharing contract. Finally, numerical examples show that the designed contract not only increases the final product quality level but also provides a win-win condition for all supply chain members and increases the whole supply chain profit.
Keywords: three-stage supply chain, product quality improvement, channel coordination, revenue sharing
Procedia PDF Downloads 183
1747 Effect of Fly Ash Fineness on Sorption Properties of Geopolymers Based on Liquid Glass
Authors: Miroslava Zelinkova, Marcela Ondova
Abstract:
Fly ash (FA), thanks to the significant presence of SiO2 and Al2O3 as its main components, is a potential raw material for geopolymer production. Mechanical activation is a method for improving FA reactivity and also the porosity of the final mixture; those parameters can be analysed through sorption properties, which have a direct impact on the durability of fly ash based geopolymer mortars. In the paper, the effect of FA fineness on the sorption properties of geopolymers based on sodium silicate, as well as the relationship between fly ash fineness and the apparent density, compressive and flexural strength of geopolymers, are presented. The best results in the evaluated area were reached by sample H1, which contains the highest portion of particles under 20 μm (100% of GFA). The interdependence of the individual tested properties was confirmed for geopolymer mixtures, corresponding to that in cement based mixtures: the higher the portion of fine particles < 20 μm, the higher the strength and density, and the lower the sorption properties. The compressive strength as well as the sorption parameters of the geopolymer can be reasonably controlled by the grinding process and ensured by a higher share of fine particles (under 20 μm) in the total mass of the material.
Keywords: alkali activation, geopolymers, fly ash, particle fineness
Procedia PDF Downloads 221
1746 A Strategic Partner Evaluation Model for the Project Based Enterprises
Authors: Woosik Jang, Seung H. Han
Abstract:
Optimal partner selection is one of the most important factors in pursuing a project's success. In practice, however, there are gaps in the perception of success depending on the role each enterprise plays in the project, which frequently weakens the relationship between partner evaluation results and final project performance. To meet this challenge, this study proposes a strategic partner evaluation model that considers the perception gaps between enterprises. Surveys were performed in three rounds: factor selection, perception gap analysis, and case application. Eight factors were then extracted through independent-sample t-tests and the Borich model to set up the evaluation model. Finally, through the case applications, only 16 enterprises were re-evaluated to the 'Good' grade among the 22 'Good' grades from the existing model; conversely, 12 enterprises were re-evaluated to the 'Good' grade among the 19 'Bad' grades from the existing model. Consequently, the perception-gap-based evaluation model is expected to improve decision-making quality and also enhance the probability of project success.
Keywords: partner evaluation model, project based enterprise, decision making, perception gap, project performance
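The Borich model mentioned above ranks candidate factors by the gap between required and perceived levels, weighted by the mean required level. A sketch with hypothetical survey ratings — the factor names and scores are invented for illustration, not the study's data:

```python
def borich(required, perceived):
    """Borich needs-assessment score for one factor:
    mean(required - perceived) * mean(required).
    Larger scores flag larger, more important perception gaps."""
    n = len(required)
    mean_req = sum(required) / n
    mean_gap = sum(r - p for r, p in zip(required, perceived)) / n
    return mean_gap * mean_req

# Hypothetical 1-5 ratings from four respondents per factor
factors = {
    "financial stability": borich([5, 4, 5, 4], [3, 3, 4, 3]),
    "schedule compliance": borich([4, 4, 3, 4], [4, 3, 3, 4]),
}

# Rank factors by need: higher score = larger gap to close
print(sorted(factors, key=factors.get, reverse=True))
```

Factors whose Borich scores (combined with the t-test results) stand out are the ones retained for the evaluation model, mirroring the eight-factor extraction described in the abstract.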
Procedia PDF Downloads 157
1745 Viscoelastic Separation and Concentration of Candida Using a Low Aspect Ratio Microchannel
Authors: Seonggil Kim, Jeonghun Nam, Chae Seung Lim
Abstract:
Rapid diagnosis of fungal infections is critical for rapid antifungal therapy. However, it is difficult to detect fungi at extremely low concentrations in a blood sample. To address this limitation, separation and concentration of fungi in the blood sample are required to enhance the sensitivity of PCR analysis. In this study, we demonstrated sheathless separation and concentration of fungal (Candida) cells using a viscoelastic fluid. To validate the performance of the device, a microparticle mixture (2 and 13 μm) was used, and those particles were successfully separated based on their size difference at a high flow rate of 100 μl/min. For the final application, successful separation of the Candida cells from white blood cells (WBCs) was achieved. Based on viscoelastic lateral migration toward the equilibrium position, Candida cells were separated and concentrated by center focusing, while WBCs were removed by patterning into two streams between the channel center and the sidewalls. By flow cytometric analysis, the separation efficiency and the purity were evaluated as ~99% and ~97%, respectively. These results show that the device can be a powerful tool for detecting extremely rare disease-related cells.
Keywords: Candida cells, concentration, separation, viscoelastic fluid
Procedia PDF Downloads 198
1744 Unzipping the Stress Response Genes in Moringa oleifera Lam. through Transcriptomics
Authors: Vivian A. Panes, Raymond John S. Rebong, Miel Q. Diaz
Abstract:
Moringa oleifera Lam. is known mainly for its high nutritional value and medicinal properties, contributing to its popular reputation as a 'miracle plant' in the tropical climates where it usually grows. The main objective of this study is to discover the genes and gene products involved in abiotic stress-induced activity that may affect mature M. oleifera Lam. seeds, as well as their corresponding functions. In this study, RNA sequencing and de novo transcriptome assembly were performed using two assemblers, Trinity and Oases, which produced 177,417 and 120,818 contigs, respectively. These transcripts were then subjected to various bioinformatics tools such as Blast2GO, UniProt, KEGG, and COG for gene annotation and the analysis of relevant metabolic pathways. Furthermore, FPKM analysis was performed to identify gene expression levels. The sequences were filtered according to the 'response to stress' GO term, since this study dealt with stress response. Clusters of Orthologous Groups (COG) analysis showed that the highest frequencies of stress response gene functions were those of the cytoskeleton, which make up approximately 14% and 23% of stress-related sequences under Trinity and Oases, respectively; recombination, repair and replication at 11% and 14%; carbohydrate transport and metabolism at 23% and 9%; and defense mechanisms at 16% and 12%. KEGG pathway analysis found the most abundant stress response genes in phenylpropanoid biosynthesis, at counts of 187 and 166 pathways for Oases and Trinity, respectively; purine metabolism at 123 and 230; and biosynthesis of antibiotics at 105 and 102. Unique and cumulative GO term counts revealed that the majority of the stress response genes belonged to the category of cellular response to stress, at cumulative counts of 1,487 and 2,187 for Oases and Trinity, respectively; defense response at 754 and 1,255; response to heat at 213 and 208; response to water deprivation at 229 and 228; and oxidative stress at 508 and 488. Lastly, FPKM was used to determine the expression level of each stress response gene. The most upregulated gene encodes a thiamine thiazole synthase chloroplastic-like enzyme, which plays a significant role in DNA damage tolerance. Data analysis implies that M. oleifera stress response genes are directed more toward the effects of climate change than other stresses, indicating the potential of M. oleifera for cultivation in harsh environments because it is resistant to climate change, pathogens, and foreign invaders.
Keywords: stress response, genes, Moringa oleifera, transcriptomics
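The FPKM values used to rank expression normalize raw fragment counts by transcript length and sequencing depth. A minimal sketch of the formula with hypothetical counts:

```python
def fpkm(fragments, gene_length_bp, total_fragments):
    """Fragments Per Kilobase of transcript per Million mapped reads:
    fragments * 1e9 / (transcript length in bp * total mapped fragments)."""
    return fragments * 1e9 / (gene_length_bp * total_fragments)

# Hypothetical counts: 500 fragments on a 2 kb transcript,
# 20 million mapped fragments in the library
print(fpkm(500, 2000, 20_000_000))  # 12.5
```

Because both length and depth are divided out, FPKM lets expression be compared across genes within a sample, which is what allows the most upregulated stress response gene to be identified.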
Procedia PDF Downloads 147
1743 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers
Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker
Abstract:
The Advanced Heavy Water Reactor (AHWR) is a vertical pressure tube type, heavy water moderated and boiling light water cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18 and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during startup and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of the coolant amongst these subchannels; thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, the computational fluid dynamics (CFD) methodology can be useful for investigating thermal-hydraulic phenomena in a nuclear fuel assembly. The present report covers the results of simulations of pressure drop, velocity variation and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15, using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water at a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. In the present investigation, a benchmark k-ε model has been used as the turbulence model, and symmetry conditions are set as the boundary conditions. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The size of the rods and the pitch in the test are the same as those of the actual rod bundle in the prototype. Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal-hydraulic
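The quoted Reynolds number follows from the hydraulic diameter and mean velocity via Re = V·Dh/ν. A sketch assuming the kinematic viscosity of water at room temperature (~1e-6 m²/s), which is not stated in the paper:

```python
def reynolds(velocity_m_s, hydraulic_diameter_m, kinematic_viscosity_m2_s=1.0e-6):
    """Reynolds number Re = V * Dh / nu. The default nu is roughly that of
    water near room temperature (an assumption, not from the paper)."""
    return velocity_m_s * hydraulic_diameter_m / kinematic_viscosity_m2_s

# With the reported hydraulic diameter of 8.105 mm, a mean velocity near
# the top of the 0-1.2 m/s range reproduces the order of Re = 9861.83
print(round(reynolds(1.217, 8.105e-3)))
```

This cross-check confirms the quoted Re is consistent with the stated geometry and velocity range under the assumed water properties.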
Procedia PDF Downloads 320
1742 Improved Intracellular Protein Degradation System for Rapid Screening and Quantitative Study of Essential Fungal Proteins in Biopharmaceutical Development
Authors: Patarasuda Chaisupa, R. Clay Wright
Abstract:
The selection of appropriate biomolecular targets is a crucial aspect of biopharmaceutical development. Auxin-inducible degron (AID) technology has demonstrated remarkable potential for efficiently and rapidly degrading target proteins, thereby enabling the identification and acquisition of drug targets. The AID system also offers a viable method to deplete specific proteins, particularly in cases where the degradation pathway has not been exploited, or where adaptation of proteins or the cellular environment compensates for a mutation or gene knockout. In this study, we have engineered an improved AID system tailored to deplete proteins of interest. The AID construct combines the auxin-responsive E3 ubiquitin ligase binding domain, AFB2, with the substrate degron, IAA17, fused to the target genes. Essential genes of fungi with the lowest percent amino acid similarity to human and plant orthologs, according to the Basic Local Alignment Search Tool (BLAST), were cloned into the AID construct in S. cerevisiae (AID-tagged strains) using a modular yeast cloning toolkit for multipart assembly and direct genetic modification. Each E3 ubiquitin ligase and IAA17 degron was fused to a fluorescent protein, allowing for real-time monitoring of protein levels in response to different auxin doses via cytometry. Our AID system exhibited high sensitivity, with an EC50 value of 0.040 µM (SE = 0.016) for AFB2, enabling the specific promotion of IAA17::target protein degradation. Furthermore, we demonstrate how this improved AID system enhances quantitative functional studies of various proteins in fungi.
The advancements in auxin-inducible protein degradation made in this study offer a powerful approach to investigating the viability of critical target proteins in fungi, screening protein targets for drugs, and regulating intracellular protein abundance, revolutionizing the study of protein function underlying a diverse range of biological processes.
Keywords: synthetic biology, bioengineering, molecular biology, biotechnology
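The reported EC50 describes the midpoint of a sigmoidal dose-response relationship between auxin concentration and degradation. A minimal sketch of how such a curve is typically modeled is below; the Hill equation and the unit Hill coefficient are standard modeling assumptions, not details taken from the study:

```python
def hill_response(dose_uM, ec50_uM=0.040, hill_n=1.0):
    """Fractional response of a sigmoidal (Hill) dose-response curve.

    ec50_uM is the value reported in the abstract; the Hill
    coefficient of 1.0 is an illustrative assumption.
    """
    return dose_uM**hill_n / (ec50_uM**hill_n + dose_uM**hill_n)

# At the EC50 the response is, by definition, half-maximal:
print(hill_response(0.040))  # -> 0.5
```

Fitting this curve to cytometry-measured fluorescence at several auxin doses is one common way an EC50 like the one above is estimated.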
Procedia PDF Downloads 92
1741 Novel Routes to the Synthesis and Functionalization of Metallic and Semiconductor Thin Film and Nanoparticles
Authors: Hanan Al Chaghouri, Mohammad Azad Malik, P. John Thomas, Paul O’Brien
Abstract:
The process of assembling metal nanoparticles at the interface of two liquids has received a great deal of attention over the past few years due to a wide range of important applications and the unusual properties of such particles compared to bulk materials. We present a simple, low-cost synthesis of metal nanoparticles, core/shell structures and semiconductors, followed by assembly of these particles between immiscible liquids. The aim of this talk is divided into three parts. Firstly, to describe the achievement of closed-loop recycling for producing cadmium sulfide as powders and/or nanostructured thin films for solar cells or other optoelectronic device applications, using dithiocarbamato complexes of commercially available secondary amines of different chain lengths. The approach can be extended to other metal sulfides, such as those of Zn, Pb, Cu or Fe, and to many transition metals and oxides. Secondly, to synthesize significantly cheaper magnetic particles suited for the mass market. Ni/NiO nanoparticles with ferromagnetic properties at room temperature, among the smallest and strongest magnets (5 nm), were made in solution. One application of this work is to produce viable storage devices; another possibility is to disperse these nanocrystals in solution and use them to make ferrofluids, which have a number of mature applications. The third part is about preparing and assembling submicron silver, cobalt and nickel particles using polyol methods and the liquid/liquid interface, respectively. Coinage metals like gold, copper and silver are suitable for plasmonic thin-film solar cells because of their low resistivity and strong interactions with visible light. Silver is the best choice for solar cell applications since it has low absorption losses and high radiative efficiency compared to gold and copper.
Assembled cobalt and nickel films are promising for spintronic, magnetic, magneto-electronic and biomedical applications.
Keywords: metal nanoparticles, core/shell structures and semiconductors, ferromagnetic properties, closed loop recycling, liquid/liquid interface
Procedia PDF Downloads 459
1740 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study
Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane
Abstract:
Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world’s science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students’ self-efficacy and academic performance in university STEM courses. Self-efficacy is an internal psychosocial construct, relating to the social, emotional, and psychological aspects of student motivation and performance, that strongly correlates with academic success. A compelling body of research demonstrates that university students’ self-efficacy beliefs are strongly related to their selection of STEM as a major, their aspirations for STEM-related careers, and their persistence in science. The development of an intervention to increase students’ self-efficacy is motivated by research showing that short social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong but previously separate lines of research into attitudinal/affective variables that influence student success. The first is ‘attributional retraining’, in which students learn to attribute their successes and failures to internal rather than external factors. The second is ‘mindset’ about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom.
We developed a 34-item questionnaire (Likert scale) to measure STEM Self-Efficacy, Perceived Academic Control, and Growth Mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time-series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on Self-Efficacy, Growth Mindset, Perceived Academic Control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on Self-Efficacy, Growth Mindset, and Perceived Academic Control. Analyses are ongoing and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and ownership of that success, helping students continue into STEM careers. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be influenced to improve students’ academic performance.
Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms
Procedia PDF Downloads 127
1739 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars
Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic
Abstract:
Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced powertrain wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Battery packs designed to be modular and intermountable enable the replacement of defective or outdated battery cells, allow additional cost savings, and prolong the product's lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potentials with elaborate designs. Based on a brief overview of remanufacturing structures implemented in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated from a broad database drawn from different research projects focusing on the recycling, second use and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in resource consumption and global warming potential.
Suitable design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces and disassembly, are used as examples to illustrate the findings. For one guideline, potential cost improvements were calculated, and upcoming challenges are pointed out.
Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing
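The cost-benefit comparison described above reduces to summing per-unit cost components for a new versus a remanufactured pack. The sketch below shows the structure of such a comparison only; every figure in it is an illustrative placeholder, not data from the study:

```python
# Minimal per-unit cost-benefit sketch: new production vs. remanufacturing.
# ALL numbers are hypothetical placeholders, NOT figures from the paper.
def unit_cost(material, labor, capital_per_unit):
    """Total per-unit cost = material + labor + allocated capital cost."""
    return material + labor + capital_per_unit

# Remanufacturing typically trades lower material cost for higher
# labor and equipment cost (illustrative values, arbitrary currency):
new_battery = unit_cost(material=5000.0, labor=800.0, capital_per_unit=400.0)
reman_battery = unit_cost(material=1200.0, labor=1500.0, capital_per_unit=600.0)

savings = new_battery - reman_battery
print(f"savings per remanufactured pack: {savings:.0f} currency units")
```

In the paper's models this comparison is of course far richer (factory facilities, volumes, logistics), but the sign of `savings` is the quantity the cost-benefit analysis ultimately reports.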
Procedia PDF Downloads 358
1738 Effect of Some Metal Ions on the Activity of Lipase Produced by Aspergillus Niger Cultured on Vitellaria Paradoxa Shells
Authors: Abdulhakeem Sulyman, Olukotun Zainab, Hammed Abdulquadri
Abstract:
Lipases (triacylglycerol acyl hydrolases, EC 3.1.1.3) are a class of enzymes that catalyse the hydrolysis of triglycerides to glycerol and free fatty acids. They account for up to 10% of the enzyme market and have a wide range of applications in biofuel production, detergent formulation, leather processing, and the food and feed processing industry. This research was conducted to study the effect of some metal ions on the activity of purified lipase produced by Aspergillus niger cultured on Vitellaria paradoxa shells. Purified lipase in 12.5 mM p-NPL was incubated with different metal ions (Zn²⁺, Ca²⁺, Mn²⁺, Fe²⁺, Na⁺, K⁺ and Mg²⁺). The final concentrations of metal ions investigated were 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0 mM. The results obtained from the study showed that Zn²⁺, Ca²⁺, Mn²⁺ and Fe²⁺ ions increased the activity of lipase by up to 3.0, 3.0, 1.0 and 26.0 fold, respectively. Lipase activity was partially inhibited by Na⁺ and Mg²⁺, with up to 88.5% and 83.7% loss of activity, respectively. Lipase activity was also inhibited by K⁺, with up to 56.7% loss of activity compared to the absence of metal ions. The study concluded that lipase produced by Aspergillus niger cultured on Vitellaria paradoxa shells can be activated by the presence of Zn²⁺, Ca²⁺, Mn²⁺ and Fe²⁺ and inhibited by Na⁺, K⁺ and Mg²⁺.
Keywords: Aspergillus niger, Vitellaria paradoxa, lipase, metal ions
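The fold-activation and percent-loss figures above are simple ratios against a metal-free control. A small sketch of those two conversions, with the Na⁺ number used as a worked example (the raw activity values are assumed for illustration, only the 88.5% loss is from the abstract):

```python
def fold_activation(activity, control_activity):
    """Fold increase in activity relative to the metal-free control."""
    return activity / control_activity

def percent_inhibition(activity, control_activity):
    """Percentage loss of activity relative to the metal-free control."""
    return 100.0 * (1.0 - activity / control_activity)

# Hypothetical raw values chosen to reproduce the reported Na+ result:
print(percent_inhibition(activity=11.5, control_activity=100.0))  # -> 88.5
```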
Procedia PDF Downloads 152
1737 A Comparison of Kinetic and Mechanical Properties between Graphene Oxide (GO) and Carbon Nanotubes (CNT)-Epoxy Nanocomposites
Authors: Marina Borgert Moraes, Gilmar Patrocinio Thim
Abstract:
It is still unknown how the presence of nanoparticles such as graphene oxide (GO) or carbon nanotubes (CNT) influences the curing process and the final mechanical properties. In this work, the kinetic and mechanical properties of the nanocomposites were analyzed: the kinetic process was followed by DSC, and the mechanical properties by DMA as well as mechanical tests. Initially, CNTs were annealed at high temperature (1800 °C) under a vacuum atmosphere, followed by a chemical treatment using acids and ethylenediamine. GO was synthesized through a chemical route, washed clean, dried and ground to #200. The presence of functional groups on the CNT and GO surfaces was confirmed by XPS spectra and FT-IR. The nanoparticles and acetone were then mixed by sonication in order to obtain the composites. DSC analyses were performed on samples with different curing cycles (1 h at 80 °C + 2 h at 120 °C; 3 h at 80 °C + 2 h at 120 °C; 5 h at 80 °C) and on samples held for different times at a constant temperature (120 °C). Mechanical tests were performed according to ASTM D638 and D790. Results showed that the kinetic process and the mechanical strength depend strongly on the presence of graphene and functionalized CNTs in the nanocomposites, with the GO-reinforced samples showing a slightly greater improvement than those with functionalized CNTs.
Keywords: carbon nanotube, epoxy resin, graphene oxide, nanocomposite
Procedia PDF Downloads 262
1736 Multi-Level Attentional Network for Aspect-Based Sentiment Analysis
Authors: Xinyuan Liu, Xiaojun Jing, Yuan He, Junsheng Mu
Abstract:
Aspect-based Sentiment Analysis (ABSA) has attracted much attention due to its capacity to determine the sentiment polarity of a given aspect in a sentence. Previous works on ABSA have exhibited the great significance of the interaction between aspect and sentence. Consequently, a Multi-Level Attentional Network (MLAN) is proposed. MLAN consists of four parts: an Embedding Layer, an Encoding Layer, Multi-Level Attentional (MLA) Layers and a Final Prediction Layer. Among these parts, the MLA Layers, comprising an Aspect-Level Attentional (ALA) Layer and an Interactive Attentional (ILA) Layer, are the innovation of MLAN; their function is to focus on the important information and obtain multiple levels of attention-weighted representations of aspect and sentence. In the experiments, MLAN is compared with the classical TD-LSTM, MemNet, RAM, ATAE-LSTM, IAN, AOA, LCR-Rot and AEN-GloVe models on the SemEval 2014 dataset. The experimental results show that MLAN substantially outperforms these state-of-the-art models. In a case study, the ALA Layer and ILA Layer are shown to be effective and interpretable.
Keywords: deep learning, aspect-based sentiment analysis, attention, natural language processing
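The core mechanism such attentional layers build on is attention between an aspect representation (as the query) and the sentence tokens (as keys and values). The sketch below is a generic scaled dot-product attention illustration, not the authors' MLAN implementation; the token count, embedding size and aspect pooling are all made up for the example:

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: softmax-weighted sum of values."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ values, weights

rng = np.random.default_rng(0)
sentence = rng.normal(size=(6, 8))              # 6 tokens, 8-dim embeddings
aspect = sentence[2:3].mean(0, keepdims=True)   # pooled aspect representation

context, w = attention(aspect, sentence, sentence)
print(w.shape)  # one attention weight per sentence token
```

Stacking such layers so the aspect attends over the sentence and vice versa is what an "interactive" attentional layer refers to in this family of models.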
Procedia PDF Downloads 138
1735 Optimal Beam for Accelerator Driven Systems
Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov
Abstract:
The concept of an energy amplifier or accelerator-driven system (ADS) involves a particle accelerator coupled with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum realized ensures better incineration of the actinides. The almost generalized opinion is that the optimal beam for an ADS is protons with an energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and ion beams from deuterons to neon with energies between 0.25 and 2 AGeV. The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rod length is 150 cm. A beryllium converter of length 110 cm is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam; these values are planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from data on the accelerator efficiency for the reference particle (proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV. At this energy, G reaches a plateau around a value of 8 with a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams have a higher G than 1.5 GeV protons.
A beam of 0.25 AGeV (gigaelectronvolt per nucleon) ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for an ADS. Beams of ¹⁶O or ²⁰Ne with an energy of 0.75 AGeV, accelerated in an accelerator of the same length as that for 1.5 GeV protons, produce approximately 900 MW of net power, with a gain of 23-25. The study of the evolution of the isotope composition during irradiation shows that increasing the power production diminishes the period between refuelings. For a net power of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
Keywords: accelerator driven system, ion beam, electrical power, energy gain
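The stated beam parameters fix the beam power directly (particle rate times kinetic energy per particle), and the gain definition above then relates target power to the wall-plug power through the accelerator efficiency. A minimal sketch of that arithmetic, using only numbers quoted in the abstract; the target power is left as an input since the abstract reports G rather than raw Geant4 energies:

```python
E_CHARGE = 1.602176634e-19  # joules per electronvolt

def beam_power_watts(intensity_pps, energy_gev):
    """Beam power = particle rate * kinetic energy per particle."""
    return intensity_pps * energy_gev * 1e9 * E_CHARGE

def energy_gain(target_power_w, beam_power_w, accel_efficiency):
    """G = energy released in target / energy spent to accelerate the beam."""
    return target_power_w / (beam_power_w / accel_efficiency)

# ESS-like proton beam quoted in the abstract: 1.25e16 p/s at 1.5 GeV
p_beam = beam_power_watts(1.25e16, 1.5)
print(f"beam power: {p_beam / 1e6:.2f} MW")
```

With the quoted total accelerator efficiency of 0.18, dividing the Geant4-computed target power by `p_beam / 0.18` is how a G value like the plateau of 8 would be obtained.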
Procedia PDF Downloads 140
1734 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators
Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang
Abstract:
High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized, ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high-vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a conical ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space-charge forces within the electron bunch; ultra-high vacuum to preserve the photocathode quantum efficiency; no field emission, to prevent the gas load produced when field-emitted electrons impact the vacuum chamber; and no voltage breakdown, for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200 kV dc with a 10 cm long insulator, and a larger version at 300 kV dc with a 20 cm long insulator. Plans to develop a third photogun operating at 400 kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point-junction screening electrodes, mechanical polishing to achieve a mirror-like surface finish, and high voltage conditioning procedures with Kr gas to extinguish field emission.
Keywords: electron guns, high voltage techniques, insulators, vacuum insulation
Procedia PDF Downloads 113