Search results for: energy source
1503 Regulation Aspects for a Radioisotope Production Installation in Brazil
Authors: Rian O. Miranda, Lidia V. de Sa, Julio C. Suita
Abstract:
The Brazilian Nuclear Energy Commission (CNEN) is the main manufacturer of radiopharmaceuticals in Brazil. The Nuclear Engineering Institute (IEN), located in Rio de Janeiro, is one of its main centers of research and production, serving public and private hospitals in the state. This radiopharmaceutical production is used in diagnostic and therapy procedures and supports about one and a half million nuclear medicine procedures annually. Despite this, the country is not self-sufficient in meeting national demand, creating the need for importation and consequent dependence on other countries. Moreover, the IEN facilities were designed in the 1960s, and today their structure is inadequate with respect to the good manufacturing practices established by the sanitary regulator (ANVISA) and to radiological protection requirements, leading to the need for a new project. In order to adapt and increase production in the country, a new plant will be built and integrated into the existing facilities, with a new 30 MeV cyclotron that is currently in the detailed design phase. Thus, it is proposed to survey the current CNEN and ANVISA standards for radiopharmaceutical production facilities, as well as to carry out the radiological protection analysis of each area of the plant, following the good manufacturing practice recommendations adopted nationally and the licensing requirements for radioactive facilities. In this way, the main requirements for proper operation, equipment location, building materials, area classification, and the maintenance program have been implemented. The access controls, interlocks, segregation zones and pass-through boxes integrated into the project were also analyzed. As a result, IEN will in the future have the flexibility to produce all the radioisotopes necessary for nuclear medicine applications more efficiently, by simultaneously bombarding two targets and thus producing two different radioisotopes at once, minimizing radiation exposure and saving operating costs.
Keywords: cyclotron, legislation, norms, production, radiopharmaceuticals
Procedia PDF Downloads 135

1502 Effects of Roughness on Forward Facing Step in an Open Channel
Authors: S. M. Rifat, André L. Marchildon, Mark F. Tachie
Abstract:
Experiments were performed to investigate the effects of roughness on the reattachment and redevelopment regions over a 12 mm forward facing step (FFS) in an open channel flow. Three configurations were tested: an upstream smooth wall with a smooth FFS, an upstream wall coated with 36 grit sandpaper with a smooth FFS, and an upstream rough wall produced from 36 grit sandpaper with an FFS also coated with 36 grit sandpaper. To isolate the effects of wall roughness, the Reynolds number, Froude number, aspect ratio and blockage ratio were kept constant. Upstream profiles showed reduced streamwise mean velocities close to the rough wall compared to the smooth wall, but the turbulence level was increased by upstream wall roughness. The reattachment length for the smooth-smooth wall experiment was 1.78h; however, when it was replaced with the rough-smooth wall, the reattachment length decreased to 1.53h. It was observed that the upstream roughness increased the physical size of the contours of maximum turbulence level; however, the downstream roughness decreased both the size and magnitude of the contours in the vicinity of the leading edge of the step. Quadrant analysis was performed to investigate the dominant Reynolds shear stress contribution in the recirculation region. The Reynolds shear stress and turbulent kinetic energy profiles after reattachment showed slower recovery compared to the streamwise mean velocity; however, all the profiles fairly collapse onto their corresponding upstream profiles at x/h = 60. It was concluded that a complete collapse would require several more streamwise distances.
Keywords: forward facing step, open channel, separated and reattached turbulent flows, wall roughness
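As a side illustration of the quadrant analysis mentioned in this abstract, the Python sketch below sorts velocity-fluctuation samples into the four (u', v') quadrants and reports each quadrant's fractional contribution to the mean Reynolds shear stress. The data are synthetic placeholders, not the authors' measurements, and the optional hyperbolic hole size is an assumed convention.

```python
import numpy as np

def quadrant_contributions(u_fluct, v_fluct, hole_size=0.0):
    """Fractional contributions of each quadrant (Q1..Q4) to the mean u'v'.

    Q1: u'>0, v'>0 (outward interaction)   Q2: u'<0, v'>0 (ejection)
    Q3: u'<0, v'<0 (inward interaction)    Q4: u'>0, v'<0 (sweep)
    A hyperbolic "hole" of size H excludes weak events with |u'v'| <= H * |<u'v'>|.
    """
    uv = u_fluct * v_fluct
    mean_uv = np.mean(uv)
    active = np.abs(uv) > hole_size * np.abs(mean_uv)
    quads = {
        "Q1": (u_fluct > 0) & (v_fluct > 0),
        "Q2": (u_fluct < 0) & (v_fluct > 0),
        "Q3": (u_fluct < 0) & (v_fluct < 0),
        "Q4": (u_fluct > 0) & (v_fluct < 0),
    }
    return {q: np.sum(uv[m & active]) / (len(uv) * mean_uv) for q, m in quads.items()}

# Illustrative use with synthetic velocity fluctuations (placeholder data, not PIV measurements)
rng = np.random.default_rng(0)
u = rng.normal(0.0, 0.1, 10_000)
v = -0.4 * u + rng.normal(0.0, 0.05, 10_000)   # correlated so that <u'v'> < 0
print(quadrant_contributions(u, v, hole_size=0.0))
```

With a hole size of zero the four fractions sum to one, so the output directly shows whether ejections (Q2) or sweeps (Q4) dominate the recirculation region.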
Procedia PDF Downloads 385

1501 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery
Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour
Abstract:
In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is developed based on patient-specific dimensions obtained from experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on a detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to model the shear-thinning behaviour of the blood flow. According to the results, the structural responses in terms of the maximum principal stress (MPS) are affected more than the fluid responses in terms of wall shear stress (WSS) as the geometrical characteristics are varied. The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.
Keywords: atherosclerosis, fluid-structure interaction modeling, material stability analysis, nonlinear biomechanics
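To make the shear-thinning treatment concrete, here is a minimal Python sketch of one common non-Newtonian choice, the Carreau model. The abstract does not name the specific viscosity model or its constants, so both the model and the parameter values (typical literature values for blood) are assumptions for illustration only.

```python
import numpy as np

# Carreau shear-thinning viscosity model (a common choice for blood; the
# specific model used by the authors is not stated, so this is an assumption).
# Parameter values are typical literature values, not the study's fitted values.
MU_0   = 0.056    # zero-shear viscosity [Pa.s]
MU_INF = 0.00345  # infinite-shear viscosity [Pa.s]
LAM    = 3.313    # relaxation time [s]
N      = 0.3568   # power-law index [-]

def carreau_viscosity(shear_rate):
    """Apparent viscosity [Pa.s] as a function of shear rate [1/s]."""
    return MU_INF + (MU_0 - MU_INF) * (1.0 + (LAM * shear_rate) ** 2) ** ((N - 1.0) / 2.0)

shear_rates = np.logspace(-1, 3, 5)   # 0.1 to 1000 1/s
print([round(float(carreau_viscosity(g)), 5) for g in shear_rates])
```

The viscosity falls by roughly an order of magnitude across this shear-rate range, which is the behaviour the FSI model needs to capture near the plaque shoulders.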
Procedia PDF Downloads 88

1500 Fused Deposition Modelling as the Manufacturing Method of Fully Bio-Based Water Purification Filters
Authors: Natalia Fijol, Aji P. Mathew
Abstract:
We present the processing and characterisation of three-dimensional (3D) monolith filters based on polylactic acid (PLA) reinforced with various nature-derived nanospecies such as hydroxyapatite, modified cellulose fibers and chitin fibers. The nanospecies of choice were dispersed in PLA through the Thermally Induced Phase Separation (TIPS) method. The biocomposites were developed via solvent-assisted blending, and the obtained pellets were further single-screw extruded into 3D-printing filaments and processed into various geometries using the Fused Deposition Modelling (FDM) technique. The printed prototypes included cubic, cylindrical and hour-glass shapes with diverse printing infill patterns as well as varying pore structures, including uniform and multi-level gradient pore structures. The pore and channel structure as well as the overall shape of the prototypes were designed in an attempt to optimize the flux and maximize the adsorption-active time. FDM is a cost- and energy-efficient method that requires neither expensive tooling nor elaborate post-processing. Therefore, FDM offers the possibility to produce customized, highly functional water purification filters with tuned porous structures suitable for the removal of a wide range of common water pollutants. Moreover, as 3D printing becomes more and more available worldwide, it allows portable filters to be produced at the place and time where they are most needed. The study demonstrates a preparation route for PLA-based, fully bio-based composites and their processing via the FDM technique into water purification filters, addressing water treatment challenges on an industrial scale.
Keywords: fused deposition modelling, water treatment, biomaterials, 3D printing, nanocellulose, nanochitin, polylactic acid
Procedia PDF Downloads 115

1499 Effect of Austenitizing Temperature, Soaking Time and Grain Size on Charpy Impact Toughness of Quenched and Tempered Steel
Authors: S. Gupta, R. Sarkar, S. Pathak, D. H. Kela, A. Pramanick, P. Talukdar
Abstract:
Low alloy quenched and tempered steels are typically used in cast railway components such as knuckles, yokes, and couplers. Since these components experience extensive impact loading during their service life, adequate impact toughness of these grades needs to be ensured to avoid catastrophic failure of parts in service. Because of the general availability of Charpy V-notch test equipment, the Charpy test is the most common and economical means of evaluating the impact toughness of materials and is generally used in quality control applications. Against this backdrop, an experiment was designed to evaluate the effect of austenitizing temperature, soaking time and the resultant grain size on the Charpy impact toughness and the related fracture mechanisms in a quenched and tempered low alloy steel, with the aim of optimizing the heat treatment parameters (i.e. austenitizing temperature and soaking time) with respect to impact toughness. In the first phase, samples were austenitized at different temperatures, viz. 760, 800, 840, 880, 920 and 960°C, followed by quenching and tempering at 600°C for 4 hours. In the next phase, samples were subjected to different soaking times (0, 2, 4 and 6 hours) at a fixed austenitizing temperature (980°C), followed by quenching and tempering at 600°C for 4 hours. The samples corresponding to the different test conditions were then subjected to instrumented Charpy tests at -40°C, and the absorbed energies were recorded. Subsequently, the microstructure and fracture surface of samples corresponding to the different test conditions were observed under a scanning electron microscope, and the corresponding grain sizes were measured. In the final stage, austenitizing temperature, soaking time and measured grain sizes were correlated with the impact toughness and the fracture morphology and mechanism.
Keywords: heat treatment, grain size, microstructure, retained austenite, impact toughness
Procedia PDF Downloads 338

1498 The Way of Ultimate Realization Through the Buddha’s Realization
Authors: Sujan Barua
Abstract:
Buddhism relies upon natural events that arise from the four primary elements of nature. Our lives circle through action and reaction, producing ever more suffering for all beings. Birth, aging, sickness, lamentation, and death are conditions that depend upon one mind and are driven by greed, hatred, and delusion, which fuel the fall into worldly realms again and again through the sense faculties, the six senses, and the five aggregates. These are all conditions of the deluded mind and of the ignorance that persists when the emancipating teaching is not understood. Buddhism rests upon the threefold morality, the basic path for giving up birth, aging, sickness, lamentation, and death; morality is the primary means of reaching the ultimate happiness called "Nirvana". The Buddha emphasizes that to understand samsara, one must profoundly understand one's own actions as they appear through the threefold ways of body, speech, and mind, and must authentically examine one's own karma through reflection on the self-mind. Worldly concerns are the cause of the suffering that makes beings fall into samsara; by abandoning them entirely, one can attain ultimate happiness. Attaining Nirvana is unlike the worldly happiness we take comfort in during our daily lives: there are no virtuous or non-virtuous deeds leading to rebirth, no grasping, no agitation, no greed, no hatred, no aging, no sickness, no death. It is entirely uprooted from the thirty-one states of worldly existence. Nirvana is the stability of ultimate realization, whereas worldly states are levels of grasping at impurities over a life span that make beings fall into one realm or another according to their actions. Through profound observation, the Buddha found that the source of rebirth is ignorance: ignorance drives physical, verbal, and mental action, which causes rebirth into the thirty-one realms of cyclic existence. Only the Enlightened One fully knows how many beings there are in this world, and he turns to such knowledge only when it is causally necessary for teaching someone or verifying the truth. Beings remain in this chronic condition because they are covered by ignorance, which is tremendously toxic; the person who truly understands this wandering from here to there is one who eagerly seeks the truth and the way to leave these toxicities behind and discover the fixed state of non-existence. This non-existence is known as "Sunyata", or emptiness. One can find the ultimate truth through the effort of realizing the noble truth of leaving the suffering of the cyclic system.
Keywords: ultimate realization, nirvana, the easiest way to give up worldly concerns, profound understanding of the 31 types of cosmology, four noble truths
Procedia PDF Downloads 67

1497 Polystyrene Paste as a Substitute for a Portland Cement: A Solution to the Nigerian Dilemma
Authors: Lanre Oluwafemi Akinyemi
Abstract:
The reduction of limestone to cement in Nigeria is expensive and requires huge amounts of energy, which significantly affects the cost of cement. Concrete is also heavy: a cubic foot of it weighs about 150 lbs and a cubic yard about 4,000 lbs, so a ready-mix truck carrying 9 cubic yards is hauling 36,000 lbs excluding the weight of the truck itself, which adds further cost for manufacturers. Therein lies the need to find a substitute for cement; using polystyrene paste benefits both manufacturers and consumers. Polystyrene Paste Constructional Cement (PPCC), a patented material obtained by dissolving waste EPS in a volatile organic solvent, has recently been identified as a suitable binder/cement for construction and building material production. This paper describes a test experiment undertaken to determine the splitting tensile strength of PPCC mortar compared to that of OPC (Ordinary Portland Cement) mortar. Expanded polystyrene was dissolved in gasoline to form a paste referred to as Polystyrene Paste Constructional Cement (PPCC). Mortars of mix ratios 1:4, 1:5, 1:6 and 1:7 (PPCC : fine aggregate), batched by volume, were used to produce 50 mm x 100 mm cylindrical PPCC splitting tensile strength specimens. As a control, another series of cylindrical OPC mortar splitting tensile strength specimens was produced following the same mix ratios. The cylindrical PPCC specimens were left to air-set, while the ones made with Ordinary Portland Cement (OPC) were demoulded after 24 hours and cured in water. The cylindrical PPCC specimens were tested at 28 days and compared with the Ordinary Portland Cement specimens. The results show that, for these two mixes, PPCC exhibits better binding properties than OPC. On the basis of this invention, the use of PPCC is recommended as a substitute for Portland cement.
Keywords: polystyrene paste, Portland cement, construction, mortar
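For reference, the splitting tensile strength of a 50 mm x 100 mm cylinder is conventionally obtained from the peak load at failure using the standard Brazilian-test relation (e.g. as given in ASTM C496); the formula is quoted here as background, since the abstract does not state it explicitly:

$$ T = \frac{2P}{\pi L D} $$

where T is the splitting tensile strength, P the peak load at failure, L the cylinder length (here 100 mm) and D its diameter (here 50 mm).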
Procedia PDF Downloads 157

1496 Human Dental Pulp Stem Cells Attenuate Streptozotocin-Induced Parotid Gland Injury in Rats
Authors: Gehan ElAkabawy
Abstract:
Background: Diabetes mellitus causes severe deteriorations of almost all the organs and systems of the body, as well as significant damage to the oral cavity. The oral changes are mainly related to salivary gland dysfunction characterized by hyposalivation and xerostomia, which significantly reduce diabetic patients’ quality of life. Human dental pulp stem cells represent a promising source for cell-based therapies, owing to their easy, minimally invasive surgical access, and high proliferative capacity. It was reported that the trophic support mediated by dental pulp stem cells can rescue the functional and structural alterations of damaged salivary glands. However, potential differentiation and paracrine effects of human dental pulp stem cells in diabetic-induced parotid gland damage have not been previously investigated. Our study aimed to investigate the therapeutic effects of intravenous transplantation of human dental pulp stem cells (hDPSCs) on parotid gland injury in a rat model of streptozotocin (STZ)-induced type 1 diabetes. Methods: Thirty Sprague-Dawley male rats were randomly categorised into three groups: control, diabetic (STZ), and transplanted (STZ+hDPSCs). hDPSCs or vehicle was injected into the tail vein 7 days after STZ injection. The fasting blood glucose levels were monitored weekly. A glucose tolerance test was performed, and the parotid gland weight, salivary flow rate, oxidative stress indices, parotid gland histology, and caspase-3, vascular endothelial growth factor (VEGF), and proliferating cell nuclear antigen (PCNA) expression in parotid tissues were assessed 28 days post-transplantation. Results: Transplantation of hDPSCs downregulated blood glucose, improved the salivary flow rate, and reduced oxidative stress. The cells migrated to, survived, and differentiated into acinar, ductal, and myoepithelial cells in the STZ-injured parotid gland. Moreover, they downregulated the expression of caspase-3 and upregulated the expression of VEGF and PCNA, likely exerting pro-angiogenetic and antiapoptotic effects and promoting endogenous regeneration. In addition, the transplanted cells enhanced the parotid nitric oxide (NO)-tetrahydrobiopterin (BH4) pathway. Conclusions: Our results show that hDPSCs can migrate to and survive within the STZ-injured parotid gland, where they prevent its functional and morphological damage by restoring normal glucose levels, differentiating into parotid cell populations, and stimulating paracrine-mediated regeneration. Thus, hDPSCs may have therapeutic potential in the treatment of diabetes-induced parotid gland injury.
Keywords: dental pulp stem cells, diabetes, streptozotocin, parotid gland
Procedia PDF Downloads 196

1495 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to nuclear safety margins applied to nuclear reactors is an important step in preventing future radioactive accidents. Nuclear fuel performance codes may rely on tolerance levels determined by traditional deterministic models, which produce acceptable results for burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of computational simulations and experimental results showed acceptable agreement. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
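As an illustration of the multivariate logistic regression approach described above, the Python sketch below fits a failure-probability model on the five predictors named in the abstract. All data, encodings and coefficients are synthetic placeholders, not the reactivity-initiated-accident test results used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; feature order follows the abstract's list of predictors.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(20, 75, n),      # fuel burnup [GWd/MTU]
    rng.uniform(50, 200, n),     # applied peak power surrogate [arbitrary units]
    rng.uniform(5, 80, n),       # pulse width [ms]
    rng.uniform(5, 120, n),      # oxidation layer thickness [um]
    rng.integers(0, 2, n),       # cladding type (assumed 0/1 encoding)
])
# Synthetic failure labels, generated only so the example is runnable
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 3] - 6 + rng.normal(0, 1, n)) > 0

model = LogisticRegression(max_iter=1000).fit(X, y)
new_rod = [[70, 150, 10, 90, 0]]   # hypothetical rod state
print("Predicted failure probability:", model.predict_proba(new_rod)[0, 1])
```

The fitted model returns a failure probability rather than a binary pass/fail verdict, which is what allows the safety threshold to be stated together with an uncertainty band.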
Procedia PDF Downloads 292

1494 When You Change The Business Model ~ You Change The World
Authors: H. E. Amb. Terry Earthwind Nichols
Abstract:
Over the years, Ambassador Nichols observed that successful companies all have one thing in common - belief in people. His observations of people in many companies, industries, and countries have also led to one conclusion - groups of achievers far exceed the expectations and timelines of their superiors. His experience with achieving this has brought forth a model for the 21st century that will not only exceed companies' expectations but also set a vision for the future of business globally. It is time for a real discussion around the future of work and the business model that will set the example for the world. Methodologies: In-person observations over 40 years - Ambassador Nichols present during the observations. Audio-visual observations - TV, cinema, social media (YouTube, etc.), various news outlets. Reading the autobiographies of successful leaders over the last 75 years who led their companies from a distinct perspective: your people are your commodity. Major findings: People who believe in the leader's vision for the company so much that they remain excited about the future of the company and want to do anything in their power to ethically achieve that vision. People who are achieving regularly in groups, divisions, companies, etcetera: live more healthfully, lowering both sick time off and on-the-job accidents; cannot wait to physically get to work, to feed off the high energy present in these companies; and are fully respected and supported, resulting in near zero attrition. Simply put - they do not "burn out". Conclusion: To the author's best knowledge, 20th century practices in business are no longer valid, and people are not going to work in those environments any longer. The average worker in the post-Covid world is better educated than 50 years ago and, most importantly, has real-time information about any subject and can stream injustices as they happen. The Consortium Model is just the model for the evolution of both humankind and business in the 21st century.
Keywords: business model, future of work, people, paradigm shift, business management
Procedia PDF Downloads 78

1493 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and the main nutrient source, the soil, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies; therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro- and micronutrient supplementation on the growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of the Ontario-recommended dosages of secondary and micronutrient fertilizer supplementation. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. In addition to the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package, and mean separation was performed with Duncan's Multiple Range Test at the 5% probability level. Treatments containing secondary and micronutrients significantly increased most of the growth parameters: plant height, plant girth, number of leaves, leaf area index, etc. Fruits harvested from pots amended with macro, secondary and micronutrients performed best in terms of total yield and yield quality compared to pots amended with the DOA-recommended fertilizer dosage for tomato. This could be because the application of all essential macro- and micronutrients raised photosynthetic activity and promoted efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement for essential macro, secondary and micronutrient fertilizer supplementation in tomato farming. The study indicated that macro- and micronutrient supplementation practices can influence the growth and yield performance of tomato and are a promising approach to achieving potential tomato yields.
Keywords: macro and micronutrients, tomato, SAS package, photosynthates
Procedia PDF Downloads 475

1492 Investigating Sustainable Construction and Demolition Waste Management Practices in South Africa
Authors: Ademilade J. Aboginije, Clinton O. Aigbavboa
Abstract:
South Africa is among the emerging economies with a policy and regulatory environment that actively stimulates waste management practices of diverting waste away from landfill through prevention, reuse, recycling, and recovery, known as the 4R approach. The focus of this paper is to investigate the existing structures and processes that are environmentally responsible and then determine the resource efficiency of waste management practices in the South African construction industry. This paper presents the results of an investigation carried out through a systematic review of related literature to assess the sustainability of waste management scenarios with secondary material recovery, to pinpoint all influential criteria and, consequently, to highlight a step-by-step approach for adequately analysing the process using indicators that can clearly and fully value waste management practices in South Africa. Furthermore, a life-cycle analysis tool is used to support the development of a framework which can be applied in measuring the sustainability of existing waste management practices in South Africa. Findings show that sustainable C&D waste management practices offer great prospects, most noticeably in terms of job creation and opportunities, cost savings and the conservation of natural resources, especially when the recycling and reuse of C&D waste materials are incorporated into construction projects in South Africa. However, problems such as the inadequacy of waste-to-energy plants, low compliance with policies and sustainability principles, and the lack of sufficient technical capacity confront the effectiveness of current waste management practices. Thus, with the increasing pursuit of sustainable development in most developing countries, this paper determines how sustainability can be measured and used in top-level decision-making policy within construction and demolition waste management for a sustainable built environment.
Keywords: construction industry, green-star rating, life-cycle analysis, sustainability, zero-waste hierarchy
Procedia PDF Downloads 128

1491 Numerical and Experimental Investigation of Distance Between Fan and Coil Block in a Fin and Tube Air Cooler Heat Exchanger
Authors: Feyza Şahin, Harun Denizli, Mustafa Zabun, Hüseyin Onbaşıoğlu
Abstract:
Heat exchangers are devices widely used to transfer heat between fluids due to their temperature differences. As a type of heat exchanger, air coolers cool the air passing through the fins of the heat exchanger by transferring heat to the refrigerant in the coil tubes. An assembled fin and tube heat exchanger consists of a coil block and a casing with a fan mounted on it. The term "fan hood" is used to define the distance between the fan and the coil block. Air coolers play a crucial role in cooling systems, and their heat transfer performance can vary depending on design parameters, which can relate to the air side or the internal fluid side. On the air side, the distance between the fan and the coil block affects performance by creating dead zones at the corners of the casing and maldistribution of the airflow. Therefore, a detailed study of the effect of the fan hood on the evaporator and of the optimum fan hood distance is necessary for an efficient air cooler design. This study aims to investigate the fan hood distance in a fin and tube-type air cooler heat exchanger through computational fluid dynamics (CFD) simulations and experimental investigations. CFD simulations will be used to study the airflow within the fan hood; these simulations will provide valuable insights to optimize the design of the fan hood. In addition, experimental tests will be carried out to validate the CFD results and to measure the performance of the fan hood under real conditions. The results will help us to understand the effect of fan hood design on evaporator efficiency and contribute to the development of more efficient cooling systems. This study will provide essential information for evaporator design and for improving the energy efficiency of cooling systems.
Keywords: heat exchanger, fan hood, heat exchanger performance, air flow performance
Procedia PDF Downloads 77

1490 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks of digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures - critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data - sensors break, and when they do so, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection - erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity - the weight of the data collected will affect data mobility. (5) Cost inhibitors - running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service - one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware - malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computer power and bandwidth from you to attack someone else. Ransomware - a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing - by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and the behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
Procedia PDF Downloads 107

1489 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today’s data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the intrinsic fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the genus name Melpomene, which is the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species’ names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those which have been identified in ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names and more general names as classes, which will be the focus of this paper. The next step in the research is focusing on a larger corpus in which specific words can be analysed and then compared with existing ontological structures looking at the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of the current methods in knowledge representation for biological entities, and also to shed light on the way that scientific nomenclature is used within the literature.
Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
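As a minimal illustration of the collocation counting that underlies this corpus-based approach, the Python sketch below counts the words co-occurring with a node term within a fixed window; the toy sentences are placeholders, not the project's biodiversity corpus, and the window size is an arbitrary choice.

```python
from collections import Counter

def collocates(tokens, node, window=3):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node.lower():
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t.lower() for t in tokens[lo:hi] if t.lower() != node.lower())
    return counts

# Toy corpus (placeholder sentences only, for demonstration)
text = ("The genus Melpomene includes several fern species . "
        "Spiders of the genus Melpomene were recorded in the survey .")
print(collocates(text.split(), "Melpomene", window=3).most_common(5))
```

Comparing the collocate profiles of the same name in different contexts is what makes the conceptual plurality of a label such as Melpomene visible in the data.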
Procedia PDF Downloads 137

1488 Tribological Properties of Non-Stick Coatings Used in Bread Baking Process
Authors: Maurice Brogly, Edwige Privas, Rajesh K. Gajendran, Sophie Bistac
Abstract:
Non-stick coatings based on perfluoroalkoxy (PFA) are widely used in the food processing industry, especially for bread making. Their tribological performance, such as a low friction coefficient, low surface energy and high heat resistance, makes them an appropriate choice for non-stick coating applications in moulds for the food processing industry. This study is dedicated to evidencing the transfer of contaminants from the coating due to wear and thermal ageing of the mould. The risk of contamination is induced by damage to the coating by the bread crust during the demoulding stage. The study focuses on the wear resistance and potential transfer of perfluorinated polymer from the non-stick coating. Friction between the perfluorinated coating and the bread crust is modeled by a tribological pin-on-disc test. The cellular nature of the bread crust is modeled by a polymer foam. FTIR analysis of the polymer foam after friction allows the evaluation of the transfer from the perfluorinated coating to the polymer foam. The influence of thermal ageing on the physical, chemical and wear properties of the coating is also investigated. FTIR spectroscopic results show that the increase of PFA transfer onto the foam counterface is associated with a decrease of the friction coefficient: increasing lubrication by film transfer results in a decrease of the friction coefficient. Moreover, increasing the friction test parameters (load, speed and sliding distance) also increases the film transfer onto the counterface. Thermal ageing increases the hydrophobic character of the PFA coating and thus also decreases the friction coefficient.
Keywords: fluorobased polymer coatings, FTIR spectroscopy, non-stick food moulds, wear and friction
Procedia PDF Downloads 331

1487 Design Components and Reliability Aspects of Municipal Waste Water and SEIG Based Micro Hydro Power Plant
Authors: R. K. Saket
Abstract:
This paper presents design aspects and a probabilistic approach for the generation reliability evaluation of an alternative resource: a municipal wastewater based micro hydro power generation system. Annual and daily flow duration curves have been obtained for the design, installation, development, scientific analysis and reliability evaluation of the MHPP. The hydro potential of the wastewater flowing through the sewage system of the BHU campus has been determined, and annual and daily flow duration curves have been produced by ordering the recorded water flows from maximum to minimum values. Design pressure, the roughness of the pipe's interior surface, the method of joining, weight, ease of installation, accessibility to the sewage system, design life, maintenance, weather conditions, availability of material, related cost and the likelihood of structural damage have been considered in the design of the penstock for reliable operation of the MHPP. An MHPGS based on MWW and an SEIG has been designed, developed, and practically implemented to provide reliable electric energy to a suitable load on the campus of Banaras Hindu University, Varanasi (UP), India. The generation reliability evaluation of the developed MHPP using the Gaussian distribution approach, the safety factor concept, peak load consideration and Simpson's 1/3rd rule is presented in this paper.
Keywords: self excited induction generator, annual and daily flow duration curve, sewage system, municipal waste water, reliability evaluation, Gaussian distribution, Simpson 1/3rd rule
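To illustrate how a flow duration curve is built from recorded flows and how a chosen design flow translates into hydraulic power, here is a small Python sketch. The flow record, head and efficiency values are placeholders rather than the BHU campus measurements, and the power relation P = ηρgQH is the generic hydropower estimate, not necessarily the authors' exact design formula.

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance_percent, sorted_flows), ordering recorded flows
    from maximum to minimum as described in the abstract."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    exceedance = 100.0 * np.arange(1, len(q) + 1) / (len(q) + 1)
    return exceedance, q

def hydro_power_kw(q_m3s, head_m, efficiency=0.7):
    """Rough hydraulic power estimate P = eta * rho * g * Q * H, in kW.
    The efficiency value is an assumed placeholder."""
    return efficiency * 1000.0 * 9.81 * q_m3s * head_m / 1000.0

# Placeholder daily flow record [m^3/s] for a sewage channel
flows = [0.05, 0.08, 0.06, 0.12, 0.09, 0.07, 0.04, 0.10]
p_exc, q_sorted = flow_duration_curve(flows)
q_design = np.interp(30.0, p_exc, q_sorted)   # flow exceeded roughly 30% of the time
print(f"Design flow: {q_design:.3f} m^3/s -> {hydro_power_kw(q_design, head_m=5.0):.1f} kW")
```

Reading the design flow off the curve at a chosen exceedance level is what ties the flow duration analysis to the penstock sizing and the reliability evaluation.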
Procedia PDF Downloads 558

1486 Development of Highly Repellent Silica Nanoparticles Treatment for Protection of Bio-Based Insulation Composite Material
Authors: Nadia Sid, Alan Taylor, Marion Bourebrab
Abstract:
The construction sector is on the critical path to decarbonising the European economy by 2050. In order to achieve this objective, it must reduce its CO2 emissions by 90% and its energy consumption by as much as 50%. For this reason, a new class of low environmental impact construction materials named "eco-materials" is becoming increasingly important in the struggle against climate change. The European funded collaborative project ISOBIO, coordinated by TWI, takes a radical approach to the use of bio-based aggregates to create novel construction materials that are usable in high volume with traditional methods, as well as in developing markets such as the exterior insulation of the existing housing stock. The approach taken in this project is to use finely chopped material protected from biodegradation through the use of functionalized silica nanoparticles. TWI is exploring the development of novel inorganic-organic hybrid nanomaterials to be applied as a surface treatment onto bio-based aggregates. These nanoparticles are synthesized by sol-gel processing and then functionalised with silanes to impart multifunctionality, e.g. hydrophobicity, fire resistance and chemical bonding between the silica nanoparticles and the bio-based aggregates. This talk will illustrate the approach taken by TWI to design the functionalized silica nanoparticles using a material-by-design approach. The formulation and synthesis process will be presented together with the challenges addressed by these hybrid nanomaterials. The results obtained with regard to water repellence and fire resistance will be presented together with preliminary public results of the ISOBIO project. (This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 641927).
Keywords: bio-sourced material, composite material, durable insulation panel, water repellent material
Procedia PDF Downloads 237

1485 An Anode Based on Modified Silicon Nanostructures for Lithium-Ion Battery Application
Authors: C. Yaddaden, M. Berouaken, L. Talbi, K. Ayouz, M. Ayat, A. Cheriet, F. Boudeffar, A. Manseri, N. Gabouze
Abstract:
Lithium-ion batteries (LIBs) are widely used in various electronic devices due to their high energy density. However, the performance of the anode material in LIBs is crucial for enhancing the battery's overall efficiency. This research focuses on developing a new anode material by modifying silicon nanostructures, specifically porous silicon nanowires (PSiNWs) and porous silicon nanoparticles (NPSiP), with silver nanoparticles (Ag) to improve the performance of LIBs. The aim of this research is to investigate the potential application of PSiNWs/Ag and NPSiP/Ag as anodes in LIBs and evaluate their performance in terms of specific capacity and Coulombic efficiency. The research methodology involves the preparation of PSiNWs and NPSiP using metal-assisted chemical etching and electrochemical etching techniques, respectively. The Ag nanoparticles are introduced onto the nanostructures through electrodissolution of the porous film and ultrasonic treatment. Galvanostatic charge/discharge measurements are conducted between 1 and 0.01 V to evaluate the specific capacity and Coulombic efficiency of both PSiNWs/Ag and NPSiP/Ag electrodes. The specific capacity of the PSiNWs/Ag electrode is approximately 1800 mAh g-1, with a Coulombic efficiency of 98.8% at the first charge/discharge cycle. On the other hand, the NPSiP/Ag electrode exhibits a specific capacity of 2600 mAh g-1. Both electrodes show a slight increase in capacity retention after 80 cycles, attributed to the high porosity and surface area of the nanostructures and the stabilization of the solid electrolyte interphase (SEI). This research highlights the potential of using modified silicon nanostructures as anodes for LIBs, which can pave the way for the development of more efficient lithium-ion batteries.
Keywords: porous silicon nanowires, silicon nanoparticles, lithium-ion batteries, galvanostatic charge/discharge
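As a side note on how such figures are typically derived from galvanostatic data, the short Python sketch below computes a gravimetric specific capacity and a Coulombic efficiency from a constant-current step. The current, time and mass values are placeholders chosen only to reproduce the order of magnitude reported above, not the measured cell data.

```python
def specific_capacity_mah_per_g(current_ma, time_h, active_mass_g):
    """Gravimetric capacity Q = I * t / m in mAh per gram of active material."""
    return current_ma * time_h / active_mass_g

def coulombic_efficiency(discharge_mah_g, charge_mah_g):
    """Coulombic efficiency as the ratio of discharge to charge capacity, in percent."""
    return 100.0 * discharge_mah_g / charge_mah_g

# Placeholder numbers for illustration only (not the measured cell data)
q_dis = specific_capacity_mah_per_g(current_ma=0.9, time_h=4.0, active_mass_g=0.002)
q_chg = specific_capacity_mah_per_g(current_ma=0.9, time_h=4.05, active_mass_g=0.002)
print(f"{q_dis:.0f} mAh/g, CE = {coulombic_efficiency(q_dis, q_chg):.1f} %")
```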
Procedia PDF Downloads 63

1484 Field Theories in Chiral Liquid Crystals: A Theory for Helicoids and Skyrmions
Authors: G. De Matteis, L. Martina, V. Turco
Abstract:
The work is focused on determining and comparing special nonlinear static configurations in cholesteric liquid crystals (CLCs) confined between two parallel plates and in the presence of an external static electric/magnetic field. The solutions are stabilised by topological and non-topological conservation laws since they are described in terms of integrable or partially integrable nonlinear boundary value problems. In cholesteric liquid crystals, which are subject to geometric frustration, anchoring conditions at the boundaries, i.e., homeotropic conditions, are incompatible with the cholesteric twist. This aspect turns out to be essential for the admissible classes of solutions, allowing also for disclination-type singularities. Within the framework of Frank-Oseen theory, we study the static configurations for CLCs. First, we find numerical solutions for isolated axisymmetric states in confined CLCs with weak homeotropic anchoring at the boundaries. These solutions describe 3-dimensional modulations of standard baby skyrmions, namely spherulites or cholesteric bubbles, actually observed in these systems. Relations with well-known nonlinear integrable systems are found and are used to explore the asymptotic behavior of the solutions. Then we turn our attention to extended periodic static configurations called helicoids, or cholesteric fingers, described by an elliptic sine-Gordon model with appropriate boundary conditions, showing how their period and energies are determined by both the thickness of the cell and the intensity of the external electric/magnetic field. We explicitly show that helicoids with π or 2π rotations of the molecular director differ in many respects and are not simply algebraically related. The behaviour of the solutions, their energy and the properties of the associated disclinations are discussed in detail, both analytically and numerically.
Keywords: cholesteric liquid crystals, geometric frustration, helicoids, skyrmions
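For orientation, the elliptic sine-Gordon equation referred to above can be written schematically, in a general literature form that is not necessarily the authors' exact normalization, as

$$ \partial_x^2 \varphi + \partial_z^2 \varphi = \frac{1}{\xi^2}\,\sin\varphi , $$

where φ is a director angle and ξ is a coherence length set by the applied electric/magnetic field; the period and energy of the helicoidal solutions then depend on the cell thickness and the field intensity, as stated in the abstract.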
Procedia PDF Downloads 129

1483 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and the usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases has been studied regularly in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as its theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with such traditional communication structures is considered a ripe research setting for studying a shift in information behavior induced by an exogenous shock. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory-grid method; from these, 120 typical buying situations in the woodworking industry and the information and channels relevant to them are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. In particular, the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners - especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry - the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.
Keywords: B2B buying process, crisis, economics of information theory, information channel
Procedia PDF Downloads 184

1482 Modelling the Effect of Biomass Appropriation for Human Use on Global Biodiversity
Authors: Karina Reiter, Stefan Dullinger, Christoph Plutzar, Dietmar Moser
Abstract:
Due to population growth and changing patterns of production and consumption, the demand for natural resources and, as a result, the pressure on Earth’s ecosystems are growing. Biodiversity mapping can be a useful tool for assessing species endangerment or detecting hotspots of extinction risks. This paper explores the benefits of using the change in trophic energy flows as a consequence of the human alteration of the biosphere in biodiversity mapping. To this end, multiple linear regression models were developed to explain species richness in areas where there is no human influence (i.e. wilderness) for three taxonomic groups (birds, mammals, amphibians). The models were then applied to predict (I) potential global species richness using the NPP of potential natural vegetation (NPPpot) and (II) global 'actual' species richness after biomass appropriation using the NPP remaining in ecosystems after harvest (NPPeco). By calculating the difference between predicted potential and predicted actual species numbers, maps of estimated species richness loss were generated. Results show that biomass appropriation for human use can indeed be linked to biodiversity loss. Areas for which the models predicted high species loss coincide with areas where species endangerment and extinctions are recorded to be particularly high by the International Union for Conservation of Nature and Natural Resources (IUCN). Furthermore, the analysis revealed that while the species distribution maps of the IUCN Red List of Threatened Species used for this research can determine hotspots of biodiversity loss in large parts of the world, the classification system for threatened and extinct species needs to be revised to better reflect local risks of extinction.
Keywords: biodiversity loss, biomass harvest, human appropriation of net primary production, species richness
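A compact Python sketch of the predict-twice-and-difference workflow described above follows. The predictors, coefficients and NPP values are synthetic placeholders (only NPP plus one assumed climate covariate are used), intended solely to show the structure of the calculation rather than the study's actual data layers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_cells = 500
npp_pot = rng.uniform(100, 1500, n_cells)            # NPP of potential vegetation [gC m^-2 yr^-1]
temperature = rng.uniform(-5, 28, n_cells)           # assumed additional predictor
richness = 5 + 0.03 * npp_pot + 0.8 * temperature + rng.normal(0, 5, n_cells)  # synthetic wilderness richness

# Fit the regression on "wilderness" cells only
model = LinearRegression().fit(np.column_stack([npp_pot, temperature]), richness)

# Predict twice: once with potential NPP, once with NPP remaining after harvest
npp_eco = npp_pot * rng.uniform(0.4, 1.0, n_cells)   # NPP remaining in ecosystems after harvest
potential = model.predict(np.column_stack([npp_pot, temperature]))
actual = model.predict(np.column_stack([npp_eco, temperature]))
species_loss = potential - actual                     # estimated richness loss per grid cell
print("Mean estimated richness loss:", float(species_loss.mean()))
```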
Procedia PDF Downloads 130

1481 Analysis of Grid Connected High Concentrated Photovoltaic Systems for Peak Load Shaving in Kuwait
Authors: Adel A. Ghoneim
Abstract:
Air conditioning devices are heavily utilized in the summer months; as a result, maximum loads in Kuwait take place in these intervals. Peak energy consumption is usually more expensive to satisfy compared to other standard power sources. The primary objective of the current work is to enhance the performance of high concentrated photovoltaic (HCPV) systems in an attempt to minimize peak power usage in Kuwait using HCPV modules. Highly concentrated multi-junction PV solar cells provide a promising route towards accomplishing the lowest price per kilowatt-hour. Nevertheless, these cells have various features that should be resolved for them to be feasible for extensive power production. A single diode equivalent circuit model is formulated to analyze multi-junction solar cell efficiency under Kuwait weather conditions, taking into account the effects of both the temperature and the concentration ratio. The diode shunt resistance, which is commonly ignored in established models, is considered in the present numerical model. The current model results are successfully validated against measurements from published data to within 1.8% accuracy. The present calculations reveal that the single diode model considering the shunt resistance provides accurate and dependable results. The electrical efficiency (η) is observed to increase with concentration up to a specific concentration level, after which it decreases. Implementing grid systems is noticed to increase with concentration to a certain concentration degree, after which it decreases. Employing grid connected HCPV systems results in significant peak load reduction.
Keywords: grid connected, high concentrated photovoltaic systems, peak load, solar cells
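For context, a standard form of the single diode equation with the shunt-resistance term retained (general literature form, not necessarily the author's exact formulation) is

$$ I = I_{ph} - I_{0}\left[\exp\!\left(\frac{V + I R_{s}}{n V_{T}}\right) - 1\right] - \frac{V + I R_{s}}{R_{sh}} , $$

where I_ph is the photogenerated current (which scales with the concentration ratio), I_0 the diode saturation current, n the ideality factor, V_T the temperature-dependent thermal voltage, R_s the series resistance and R_sh the shunt resistance whose inclusion the abstract emphasizes.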
Procedia PDF Downloads 155

1480 Elastic Collisions of Electrons with DNA and Water From 10 eV to 100 keV: SCAR Macro Investigation
Authors: Aouina Nabila Yasmina, Zine El Abidine Chaoui
Abstract:
Recently, understanding the interactions of electrons with the DNA molecule and its components has attracted considerable interest because DNA is the main site damaged by ionizing radiation. The interactions of radiation with DNA induce a variety of molecular damage such as single-strand breaks, double-strand breaks, base damage, cross-links between proteins and DNA, and others, or the formation of free radicals, which, by chemical reactions with DNA, can also lead to breakage of the strand. One factor that can contribute significantly to these processes is the effect of hydration water on the formation and reactions of radiation-induced radicals in and/or around DNA. B-DNA requires about 30% by weight of water to maintain its native conformation in the crystalline state. The transformation depends on various factors such as sequence, ion composition, concentration and water activity; partial dehydration converts it to A-DNA. The present study shows the results of theoretical calculations for positron and electron elastic scattering from DNA and water over a broad energy range from 10 eV to 100 keV. Electron elastic cross sections and elastic mean free paths are calculated using a corrected form of the independent atom method that takes into account the geometry of the biomolecule (the SCAR macro method). Moreover, the elastic scattering of electrons and positrons by the atoms of the biomolecule was evaluated by means of relativistic (Dirac) partial wave analysis. Our calculated results are compared with theoretical data available in the literature, in the absence of experimental data, in particular for positrons. As a central result, our electron elastic cross sections are in good agreement with existing theoretical data in the range of 10 eV to 1 keV.
Keywords: elastic cross section, elastic mean free path, SCAR macro method, electron collision
Procedia PDF Downloads 65

1479 An Investigation on the Effect of Railway Track Elevation Project in Taichung Based on the Carbon Emissions
Authors: Kuo-Wei Hsu, Jen-Chih Chao, Pei-Chen Wu
Abstract:
With the rapid development of the global economy, population growth, heavy industrialization, greenhouse gas emissions and damage to the ozone layer, global warming is taking place. Facing the impact of global warming, the issue of “green transportation” has begun to be valued and promoted in many cities. Taichung has been selected as the model low-carbon city in Taiwan. To comply with international trends and government policy, the city promotes energy saving and carbon reduction to create a “low-carbon Taichung with green life and eco-friendly economy”. In line with the “green transportation” project, Taichung has promoted a number of public transport constructions and traffic policies in recent years, such as BRT and MRT. The elevated railway is one of these important constructions; in support of the green transport policy, it helps achieve carbon reduction for this low-carbon city. Current studies of the carbon emissions associated with railways and roads focus on the assessment of paving materials, institutional policy and economic benefit. Besides changing the mode of transportation, elevated railways and roads also create space under the structure; however, no research to date has addressed the carbon emissions of the space underneath the elevated section. This study investigated the effect of the railway track elevation project in Taichung in terms of carbon emissions, and the factors that affect carbon emissions, through related theory and literature analysis. The study concluded that railway track elevation increased public transit use, bike lanes, green areas and walking spaces, while reducing traffic congestion and the use of motorcycles and automobiles, and hence their carbon emissions. Keywords: low-carbon city, green transportation, carbon emissions, Taichung, Taiwan
Procedia PDF Downloads 535
1478 Exposure of Pacu, Piaractus mesopotamicus Gill Tissue to a High Stocking Density: An Ion Regulatory and Microscopy Study
Authors: Wiolene Montanari Nordi, Debora Botequio Moretti, Mariana Caroline Pontin, Jessica Pampolini, Raul Machado-Neto
Abstract:
Gills are the organs responsible for respiration and osmoregulation between the fish's internal environment and the water. Under stress conditions, the oxidative response and the gill plasticity that attempts to increase the gas exchange area are noteworthy, compromising physiological processes and therefore fish health. Colostrum is a dietary source of nutrients, immunoglobulins, antioxidants and bioactive molecules, essential for immunological protection and development of the gastrointestinal epithelium. The hypothesis of this work is that antioxidant factors present in colostrum, tested in gills for the first time, can minimize or reduce alterations of the gill epithelium structure in juvenile pacu (Piaractus mesopotamicus) subjected to high stocking density. The histological changes in gill architecture were characterized by the frequency, incidence and severity of tissue alterations, together with the ionic status. Juveniles (50 kg fish/m3) were fed pelleted diets containing 0, 10, 20 or 30% lyophilized bovine colostrum (LBC), and after 30 experimental days gill and blood samples were collected from eight fish per treatment. The study revealed differences in the type, frequency and severity (histological alterations index – HAI) of tissue alterations among the treatments; however, no distinct differences in the incidence of alteration (mean alteration value – MAV) were observed. The main histological changes in the gill were elevation of the lamellar epithelium, excessive cell proliferation of the filament and lamellar epithelium causing total or partial fusion of the lamellae, hyperplasia and hypertrophy of the lamellar and filament epithelium, uncontrolled thickening of the filament and lamellar tissues, presence of mucous and chloride cells in the lamellae, aneurysms, vascular congestion and presence of parasites. The MAV obtained per treatment was 2.0, 2.5, 1.8 and 2.5 for fish fed diets containing 0, 10, 20 and 30% LBC, respectively, classifying the incidence of gill alterations as slight to moderate. The severity of alteration in individual fish of the 0, 10 and 20% LBC treatments ranged from 5 to 40 (HAI means of 20.1, 17.5 and 17.6, respectively, P > 0.05) and differed from the 30% LBC treatment, which ranged from 6 to 129 (HAI mean of 77.2, P < 0.05). The HAI values in the 0, 10 and 20% LBC treatments reveal gill tissue with injuries classified as slight to moderate, while in 30% LBC they were moderate to severe, a consequence of the onset of necrosis in the tissue of two fish, which compromises the normal functioning of the organ. Regarding the frequency of gill alterations, evaluated on a scale from absence of alterations (0) to highly frequent (+++), histological alterations were observed in all evaluated fish, with a trend of higher frequency in 0% LBC. The concentrations of Na+, Cl-, K+ and Ca2+ did not change among treatments (P > 0.05), indicating a similar ion exchange capacity. The concentrations of bovine colostrum used in the diets of the present study did not prevent the alterations observed in the gills of juvenile pacu. Keywords: histological alterations of gill tissue, ionic status, lyophilized bovine colostrum, optical microscopy
Procedia PDF Downloads 299
1477 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches
Authors: Kakali Bhadra
Abstract:
Binding of small molecules to DNA, and more recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for the control of gene expression. This work focuses on understanding the interaction of harmalol, a dihydro beta-carboline alkaloid, with different nucleic acid motifs, viz. double-stranded CT DNA, single-stranded A-form poly(A), double-stranded A-form poly(C)·poly(G) and clover-leaf tRNAphe, by different spectroscopic, calorimetric and molecular modeling techniques. The results of this study converge to suggest that (i) the binding constant varied in the order CT DNA > poly(C)·poly(G) > tRNAphe > poly(A), (ii) binding of harmalol was non-cooperative with poly(C)·poly(G) and poly(A) and cooperative with CT DNA and tRNAphe, (iii) there were significant structural changes of CT DNA, poly(C)·poly(G) and tRNAphe with concomitant induction of optical activity in the bound achiral alkaloid molecules, while with poly(A) no intrinsic CD perturbation was observed, (iv) the binding was predominantly exothermic, enthalpy driven and entropy favoured with CT DNA and poly(C)·poly(G), while it was entropy driven with tRNAphe and poly(A), (v) there was a hydrophobic contribution and a comparatively large role of non-polyelectrolytic forces in the Gibbs energy changes with CT DNA, poly(C)·poly(G) and tRNAphe, and (vi) harmalol adopts an intercalated state with the CT DNA and poly(C)·poly(G) structures, as revealed by molecular docking and supported by the viscometric data. Furthermore, a competition dialysis assay showed that harmalol prefers hetero GC sequences. All these findings unequivocally point out that harmalol prefers binding to ds CT DNA, followed by ds poly(C)·poly(G) and clover-leaf tRNAphe, and binds least to ss poly(A). The results highlight the importance of structural elements in these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs, for developing better nucleic acid-based therapeutic agents. Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy
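As a generic illustration of how a binding constant and a cooperativity indicator can be extracted from titration data, the sketch below fits a Hill model to synthetic points; this is not the authors' analysis protocol, and the data, model choice and parameter names are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_binding(conc, k_b, n_h):
    """Fraction of ligand bound vs. nucleic acid concentration (Hill model)."""
    return (k_b * conc) ** n_h / (1.0 + (k_b * conc) ** n_h)

# Synthetic titration data (illustrative only): fraction bound vs. nucleic acid concentration (M)
conc = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4])
frac_bound = np.array([0.03, 0.10, 0.30, 0.62, 0.88, 0.97])

# Fit binding constant K_b (M^-1) and Hill coefficient n_H (n_H > 1 suggests cooperative binding)
popt, pcov = curve_fit(hill_binding, conc, frac_bound,
                       p0=[1e4, 1.0], bounds=([1e2, 0.5], [1e7, 3.0]))
k_b, n_h = popt
print(f"K_b ~ {k_b:.2e} M^-1, n_H ~ {n_h:.2f}")
```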
Procedia PDF Downloads 228
1476 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model
Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin
Abstract:
The main objective of this work is to analyze various parameters of the AGM-88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization point, deflection and force propagation are determined. A separate analysis of the structural parameters is carried out in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh is used in every case, within the limits of the available computational power, to improve the simulation results. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representation, and the results from Ansys and Abaqus are also presented. Computational fluid dynamics and finite element analyses, and subsequently the fluid-structure interaction (FSI) technique, are employed; the finite volume method and the finite element method are used for modelling the fluid flow and for the structural analysis, respectively. Feasible boundary conditions are also applied in the research. A significant change in the interaction and interference pattern was found at impact; both theoretically and according to the simulation, the angled condition produced the higher impact. Keywords: FSI (Fluid-Structure Interaction), impact, missile, high viscous fluid, CFD (Computational Fluid Dynamics), FEM (Finite Element Analysis), FVM (Finite Volume Method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model
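For a sense of scale, the sketch below applies the standard drag equation to a water-entry scenario with assumed values for water density, body diameter, entry speed and drag coefficients for the two impact orientations; it is a back-of-envelope estimate, not part of the Ansys/Abaqus FSI workflow described above.

```python
import math

# Hypothetical values for a water-entry drag estimate (not taken from the paper)
RHO_WATER = 998.0                  # kg/m^3
DIAMETER = 0.254                   # m, assumed body diameter
VELOCITY = 300.0                   # m/s, assumed entry speed
CD_AHEAD, CD_ANGLED = 0.30, 0.55   # assumed drag coefficients for the two cases

def drag_force(c_d, rho, v, d):
    """Standard drag equation F = 0.5 * rho * Cd * A * v^2, with A the frontal area."""
    area = math.pi * (d / 2.0) ** 2
    return 0.5 * rho * c_d * area * v ** 2

if __name__ == "__main__":
    for label, cd in (("ahead", CD_AHEAD), ("angled", CD_ANGLED)):
        force_kn = drag_force(cd, RHO_WATER, VELOCITY, DIAMETER) / 1e3
        print(f"{label}: F_d ~ {force_kn:.0f} kN")
```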
Procedia PDF Downloads 467
1475 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown
Authors: Mojtaba Eshraghi Arani
Abstract:
Having concluded the contract for the carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel, or unreasonable stoppage of the carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea which leads to important disputes between the carrier/owner and the shipper/charterer (hereinafter called “cargo interests”). It is a generally accepted rule that in such an event, the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute arises when temporary repair of the vessel cannot be done within a short or reasonable term. There are two options for the contract parties in such a case. First, the carrier/owner is entitled to repair the vessel while the cargo is kept on board or discharged in the port of refuge, and the cargo interests must wait until the breakdown is rectified, whenever that may be. Second, the carrier/owner is responsible for chartering another vessel and transferring the entirety of the cargo to the substitute vessel. In fact, the main question revolves around the duty of the carrier/owner to perform the transfer of cargo to another vessel. Such an operation, called “trans-shipment” or “transhipment” (in the oil industry usually called “ship-to-ship” or “STS”), needs to be done carefully and with due diligence. The transshipment operation differs for various cargoes, as each cargo requires its own suitable equipment for transfer to another vessel, so the operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, in particular with bulk carriers. Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. Concerning tankers carrying oil and petrochemical products, transshipment is most probably followed by sea pollution. On the grounds of the above consequences, owners are afraid of being held responsible for such an operation and are reluctant to perform it. In the relevant disputes, the main argument raised by them is that no regulation has recognized such a duty upon their shoulders, so any such operation must be done under the auspices of the cargo interests and all costs must be reimbursed by them. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found in this regard. A qualitative method, based on interpretation of the collected data, has been used in this paper; the source of the data is the analysis of regulations and cases. It is argued in this article that the paramount rule in maritime law is “the accomplishment of the voyage” by the carrier/owner, in view of which, if the voyage can only be finished by transshipment, then the carrier/owner will be responsible for carrying out this operation. The duty of the carrier/owner to apply “due diligence” strengthens this reasoning.
Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests' negligence. Keywords: cargo, STS, transshipment, vessel, voyage
Procedia PDF Downloads 119
1474 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records is available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where only a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The set of candidate predictors considered includes Magnitude, Rrup and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed. Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
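A minimal sketch of the variable-selection idea is given below: LASSO with a cross-validated penalty applied to standardized predictors, where some coefficients are driven exactly to zero; the synthetic data and feature names are illustrative stand-ins, not the NGA records or the model developed in this study.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for a ground motion table: columns mimic candidate predictors
# (magnitude, rupture distance, Vs30, plus a nuisance variable); values are illustrative.
n = 600
magnitude = rng.uniform(4.0, 8.0, n)
log_rrup = np.log10(rng.uniform(1.0, 200.0, n))
log_vs30 = np.log10(rng.uniform(150.0, 1500.0, n))
nuisance = rng.normal(size=n)

X = np.column_stack([magnitude, log_rrup, log_vs30, nuisance])
# Hypothetical target: a noisy log spectral acceleration (not a real ground motion relation)
y = 1.2 * magnitude - 2.0 * log_rrup - 0.6 * log_vs30 + rng.normal(scale=0.4, size=n)

# Standardize so the L1 penalty treats all predictors on a common scale
X_std = StandardScaler().fit_transform(X)

# LassoCV picks the penalty strength by cross-validation; weak predictors get zero coefficients
model = LassoCV(cv=5, random_state=0).fit(X_std, y)

names = ["magnitude", "log10(Rrup)", "log10(Vs30)", "nuisance"]
for name, coef in sorted(zip(names, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>12s}: {coef:+.3f}")
```

Sorting the standardized coefficients by magnitude gives the importance ranking referred to in the abstract, with exactly-zero coefficients marking the variables LASSO excludes.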
Procedia PDF Downloads 330