Search results for: interpretive structural modeling
492 Morphological Transformation of Traditional Cities: The Case Study of the Historic Center of the City of Najaf
Authors: Sabeeh Lafta Farhan, Ihsan Abbass Jasim, Sohaib Kareem Al-Mamoori
Abstract:
This study addresses the transformation of urban structures and how this transformation affects the character of traditional cities, which constitutes the research issue. Hence, the research has aimed at studying the urban structure characteristics and morphological transformation features in the centers of traditional cities, and at looking for means and methods to preserve the character of those cities. Cities are not merely locations inhabited by a large number of people; they are political and legal entities, distinguished also by their economic activities, and thus a complex set of institutions, and the transformation of the urban environment cannot be recognized without understanding these relationships. The research presumes an existing impact of urbanization on the properties of the traditional structure of the Holy City of Najaf. The research has defined urbanization as the restructuring and re-planning of urban areas that have lost their functions and bringing them back into the social and cultural life of the city, so that they can serve the economy and better respond to the needs of users. Sacred cities provide the organic connection between acts of worship and dealings and reveal the mechanisms and reasons behind the regulatory nature of the sacred shrine and its role in achieving organizational assimilation of urban morphology. The research has reached a theoretical framework of the particulars of urbanization. This framework has been applied to the historic center of the old city of Najaf, where the most important findings were that the dominant visual and structural presence of the holy shrine of Imam Ali (peace be upon him) continues to emphasize the visual particularity and the main role of the city, which hosts one of the most important Muslim shrines in the world, with the visible golden dome rising above the skyline and the Imam Ali Mosque serving as the hub and center of religious activities. Thus, in view of being a place of main importance and a symbol of religious and Islamic culture, it is very important that the shrine of Imam Ali (AS) prevail over all zones of re-development in the old city. Consequently, the research underlined that the distinctive and unique character of the city of Najaf did not proceed from nothing, but was achieved through the unrivaled characteristics and features possessed by the city of Najaf alone, which allowed and enabled it to occupy this status among the Arab and Muslim cities. That is why the activities arising from the development have to enhance the historical role of the city, so that this development provides clear support, strength and further addition to the city's assets and cultural heritage, rather than crushing the city's traditional urban fabric, cultural heritage and historical specificity.
Keywords: Iraq, the city of Najaf, heritage, traditional cities, morphological transformation
Procedia PDF Downloads 314
491 The Unique Electrical and Magnetic Properties of Thorium Di-Iodide Indicate the Arrival of Its Superconducting State
Authors: Dong Zhao
Abstract:
Even though the recent claim of room-temperature superconductivity by LK-99 was confirmed to be an unsuccessful attempt, this work reawakened the century-long striving to obtain applicable superconductors with a Tc at room temperature or higher and under ambient pressure. One of these efforts focused on exploring thorium salts. This is because certain thorium compounds revealed an unusual property of having both high electrical conductivity and diamagnetism, the so-called "coexistence of high electrical conductivity and diamagnetism." It is well known that this property of the coexistence of high electrical conductivity and diamagnetism is held by superconductors because of electron pairing. Consequently, the likelihood of these thorium compounds having superconducting properties becomes great. Remarkably, these thorium salts possess this property at room temperature and atmospheric pressure. This gives rise to solid evidence for these thorium compounds being room-temperature superconductors without the need for external pressure. Among the thorium compound superconductors claimed in that work, thorium di-iodide (ThI₂) is a unique one and has received comprehensive discussion. ThI₂ was synthesized and structurally analyzed by the single-crystal diffraction method in the 1960s. Its special property of coexistence of high electrical conductivity and diamagnetism was revealed. Because of this unique property, a special molecular configuration was sketched. Rather than the ordinary oxidation state of +2 expected for the cation of a di-iodide, the oxidation state of thorium in ThI₂ is +4. According to the experimental results, the actual molecular configuration of ThI₂ was determined to be an unusual one, [Th4+(e-)2](I-)2. This means that the cation of the ThI₂ salt is composed of a [Th4+(e-)2]2+ cation core. In other words, the cation of ThI₂ is constructed by combining an oxidation state of +4 for the thorium atom with a pair of electrons, or an electron lone pair, located on the thorium atom. This combination of the thorium atom and the electron lone pair leads to an oxidation state of +2 for the [Th4+(e-)2]2+ cation core. This special construction of the thorium cation is very distinctive and is believed to be the factor that grants ThI₂ its room-temperature superconductivity. Indeed, the key for ThI₂ to become a room-temperature superconductor is this characteristic electron lone pair residing on the thorium atom, along with the formation of a network constructed by the thorium atoms. This network is specialized in a way that allows the electron lone pairs to hop across it and thus generate the supercurrent. This work discusses, in detail, the special electrical and magnetic properties of ThI₂ as well as its structural features at ambient conditions. It also explores how the electron pairing, in combination with the structurally specialized network, works to bring ThI₂ into a superconducting state. From the experimental results, strong evidence points to ThI₂ being a superconductor, at least at room temperature and under atmospheric pressure.
Keywords: co-existence of high electrical conductivity and diamagnetism, electron lone pair, room temperature superconductor, special molecular configuration of thorium di-iodide ThI₂
Procedia PDF Downloads 58
490 Characterization of Himalayan Phyllite with Reference to Foliation Planes
Authors: Divyanshoo Singh, Hemant Kumar Singh, Kumar Nilankar
Abstract:
Major engineering constructions and foundations (e.g., dams, tunnels, bridges, underground caverns, etc.) in and around the Himalayan region of Uttarakhand are not only confined within hard and crystalline rocks but also stretch into weak and anisotropic rocks. While constructing within such anisotropic rocks, engineers often encounter geotechnical complications such as structural instability, slope failure, and excessive deformation. These severities/complexities arise mainly due to inherent anisotropy, such as layering/foliations, preferred mineral orientations, and geo-mechanical anisotropy present within rocks, and vary when measured in different directions. Of all the inherent anisotropy present within rocks, major geotechnical complexities mainly arise due to the inappropriate orientation of weak planes (bedding/foliation). Thus, the orientation of such weak planes highly affects the fracture patterns, failure mechanism, and strength of rocks. This has led to the need for an improved understanding of the physico-mechanical behavior of anisotropic rocks with different orientations of weak planes. Therefore, in this study, block samples of phyllite belonging to the Chandpur Group of the Lesser Himalaya were collected from the Srinagar area of Uttarakhand, India, to investigate the effect of foliation angles on the physico-mechanical properties of the rock. The collected block samples were core-drilled to a diameter of 50 mm at different foliation angles, β (angle between the foliation plane and the drilling direction), i.e., 0°, 30°, 60°, and 90°, respectively. Before testing, the drilled core samples were oven-dried at 110 °C to achieve uniformity. Physical and mechanical properties such as seismic wave velocity, density, uniaxial compressive strength (UCS), point load strength (PLS), and Brazilian tensile strength (BTS) were determined on the prepared core specimens. The results indicate that seismic wave velocities (P-wave and S-wave) decrease with increasing β angle. As the β angle increases, the number of foliation planes that the wave needs to pass through increases and thus causes dissipation of wave energy with increasing β. Maximum strength for UCS, PLS, and BTS was found at a β angle of 90°. However, minimum strength for UCS and BTS was found at a β angle of 30°, which differs from PLS, where minimum strength was found at a β angle of 0°. Furthermore, failure modes also correspond to the strength of the rock, with along-foliation and non-central failure being characteristic of low strength values, and multiple fractures and central failure being characteristic of high strength values. Thus, this study will provide a better understanding of the anisotropic features of phyllite for the purpose of major engineering construction and foundations within the Himalayan region.
Keywords: anisotropic rocks, foliation angle, physico-mechanical properties, phyllite, Himalayan region
Procedia PDF Downloads 59
489 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports about these events. One of the main reasons for impairment of buried pipelines during earthquakes is liquefaction. Necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and earthquake intensity. Because pipeline structures are very different from other structures (being long and of light mass), a comparison of the results of previous earthquakes with those for other structures shows that the liquefaction hazard for buried pipelines is not high unless governing parameters such as earthquake intensity, loose soil conditions, and other factors are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. The damage investigations have revealed that the pipeline damage ratio (number/km) has much larger values in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected with manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. Water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were severely damaged due to the occurrence of liquefaction. The model consists of a polyethylene pipeline 100 meters long and 0.8 meter in diameter, which is covered by light sandy soil at a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study on buried pipelines indicate that the effect of liquefaction is a function of pipe diameter, type of soil, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results indicate that although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than by material failures. Finally, retrofit suggestions are given in order to decrease the liquefaction risk for buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 513
488 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada
Authors: S. Chowdhury, A. Corlett
Abstract:
Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude within this region may happen at any time. This earthquake is expected to generate a large tsunami that could impact the coastal communities on Vancouver Island. Since many of these communities are in remote locations, they are more likely to be vulnerable, as post-earthquake relief efforts would be impacted by damage to critical road infrastructure. To assess the coastal vulnerability within these communities, a hydrodynamic model has been developed using MIKE-21 software. A 500-year probabilistic earthquake design criterion, including subsidence, was considered in this model. The bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna-172 aircraft, and the information was then converted to generate a topographic digital elevation map. Both survey datasets were incorporated into the model, and the domain size of the model was about 1000 km x 1300 km. The model was calibrated with the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. The water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the output from the model indicates satisfactory results. For this study, the design water level was taken as the High Water Level plus the sea level rise for the year 2100. Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set up for a 12-hour simulation period, and one simulation takes more than 16 hours to complete on a dual Xeon E7 CPU computer with a K80 GPU. The boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate the coastal flooding for the communities. It was observed from this study that many communities will be affected by the Cascadia tsunami, and inundation maps were developed for the communities. The infrastructure inside the coastal inundation area was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation
Procedia PDF Downloads 131
487 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, as it has economic and environmental impacts even after a landfill is closed. Landfill waste contains a high density of various persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording and analysis of aquatic organisms’ activity, and to determine patterns of the fish behavioral response to sublethal effects of leachate. Four different concentrations of leachate were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold-effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be 2.8-fold lower than the concentration generally assumed to be “safe” for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants became apparent after 5 minutes of exposure. The intensity of locomotor activity reached a peak within 10 minutes, evidently decreasing after 30 minutes. This could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It has been established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration. Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations, but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for fish behavior monitoring, registration and analysis, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvement aimed at including more parameters of aquatic organisms’ behavior and at investigating the most rapid and appropriate behavioral responses in different species. In practice, this study could be the basis for the development and creation of biological early-warning systems (BEWS).
Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects
Procedia PDF Downloads 271
486 Modulation of Receptor-Activation Due to Hydrogen Bond Formation
Authors: Sourav Ray, Christoph Stein, Marcus Weber
Abstract:
A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states were adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite yet more pronounced HB formation trend compared to naloxone: whereas only a marginal number of HBs was observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR. However, HBs play a significant role in the DAMGO-HIS 297 interaction. Following DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatment of opioid overdose or opioid-induced side effects.
Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation
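A minimal sketch, in Python, of the geometric hydrogen-bond criterion that such trajectory post-processing typically applies; the 3.5 Å distance and 150° angle cutoffs are common defaults assumed here for illustration, and the frame/atom handling is hypothetical rather than taken from the abstract:

import numpy as np

D_A_CUTOFF = 3.5          # donor-acceptor distance cutoff [Angstrom], assumed default
DHA_ANGLE_CUTOFF = 150.0  # D-H...A angle cutoff [degrees], assumed default

def is_hydrogen_bond(donor, hydrogen, acceptor):
    """Return True if the donor-H...acceptor geometry satisfies the HB criterion."""
    d_a = np.linalg.norm(acceptor - donor)
    v1, v2 = donor - hydrogen, acceptor - hydrogen
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return d_a <= D_A_CUTOFF and angle >= DHA_ANGLE_CUTOFF

def hb_occupancy(frames):
    """Fraction of trajectory frames in which the ligand-HIS297 hydrogen bond is present.
    `frames` is an iterable of (donor_xyz, hydrogen_xyz, acceptor_xyz) coordinate triples."""
    hits = sum(is_hydrogen_bond(d, h, a) for d, h, a in frames)
    return hits / len(frames)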
Procedia PDF Downloads 175
485 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). Electron Cyclotron Resonance Heating (ECRH) at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii’s continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles. The modeling results are compared with the experimental data.
Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
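For orientation, the standard cold-plasma relations that locate the ECR and the upper hybrid resonance can be evaluated as sketched below in Python; the physical constants are standard, but the example field and density values are assumed for illustration and are not taken from the TOMAS experiments:

import numpy as np

E = 1.602176634e-19      # elementary charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def f_ce(B):
    """Electron cyclotron frequency [Hz] for magnetic field B [T]."""
    return E * B / (2.0 * np.pi * M_E)

def f_pe(n_e):
    """Electron plasma frequency [Hz] for electron density n_e [m^-3]."""
    return np.sqrt(n_e * E**2 / (EPS0 * M_E)) / (2.0 * np.pi)

def f_uh(B, n_e):
    """Upper hybrid resonance frequency [Hz]: sqrt(f_ce^2 + f_pe^2)."""
    return np.hypot(f_ce(B), f_pe(n_e))

# Assumed illustrative values: B = 0.0875 T gives f_ce ~ 2.45 GHz.
B0, n0 = 0.0875, 1e17
print(f_ce(B0) / 1e9, f_uh(B0, n0) / 1e9)  # frequencies in GHz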
Procedia PDF Downloads 96
484 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In a first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; these results then serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
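A minimal sketch of the single-stage MILP idea (binary selection of modernization measures per building under budget and emission constraints), written here with the PuLP solver interface; the buildings, measures, costs, savings, budget and target figures are all hypothetical and not taken from the study:

import pulp

# Hypothetical costs [kEUR] and annual CO2 savings [t/a] per (building, measure).
cost = {("B1", "envelope"): 80, ("B1", "heat_pump"): 40, ("B1", "pv"): 25,
        ("B2", "envelope"): 120, ("B2", "heat_pump"): 55, ("B2", "pv"): 30}
co2 = {("B1", "envelope"): 6, ("B1", "heat_pump"): 9, ("B1", "pv"): 4,
       ("B2", "envelope"): 10, ("B2", "heat_pump"): 12, ("B2", "pv"): 5}
budget = 150   # kEUR available in the planning period (assumed)
target = 20    # required CO2 reduction [t/a] (assumed)

prob = pulp.LpProblem("modernization_pathway", pulp.LpMinimize)
x = pulp.LpVariable.dicts("select", cost.keys(), cat="Binary")

prob += pulp.lpSum(cost[k] * x[k] for k in cost)            # minimize total spend
prob += pulp.lpSum(co2[k] * x[k] for k in cost) >= target   # meet the climate target
prob += pulp.lpSum(cost[k] * x[k] for k in cost) <= budget  # respect the budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([k for k in cost if x[k].value() == 1])  # selected (building, measure) pairs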
Procedia PDF Downloads 34
483 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to strength predictions for beam-column members that were conservative by up to 55%, depending on the element’s length and thickness. This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to average errors of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
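For reference, the simplified linear beam-column interaction check referred to above can be written as in the Python sketch below; the full AISI/AS-NZS expressions include moment-amplification and Cm factors that are omitted here for brevity, and the demand/capacity numbers are hypothetical rather than values from the parametric study:

def linear_interaction_utilization(P, M, P_n, M_n):
    """Simplified linear interaction check, P/Pn + M/Mn <= 1.0.
    P, M are the axial and bending demands; P_n, M_n the pure axial and flexural
    capacities (e.g., from the DSM). Amplification/Cm factors are omitted."""
    return P / P_n + M / M_n

# Hypothetical demands and capacities (illustrative only):
utilization = linear_interaction_utilization(P=35.0, M=2.1, P_n=80.0, M_n=4.5)
print(utilization, utilization <= 1.0)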
Procedia PDF Downloads 104
482 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection
Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili
Abstract:
The results of the study are related to the direction of plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is effectively used in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field. Their electromagnetic response can be detected in the low-frequency range even when they are placed in the ground. Detection of plastic objects such as plastic mines by electromagnetic induction is associated with difficulties. The interaction of non-conducting bodies or low-conductivity objects with a low-frequency alternating magnetic field is very weak. At high frequencies, where wave processes already take place, the interaction increases. Interactions with other, distant objects also increase. A complex interference picture is formed, and extraction of useful information becomes difficult. Sensing by electromagnetic induction in the intermediate MHz frequency range is the subject of this research. The concept of detecting plastic mines in this range can be based on the study of the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on the detection of small metal components in plastic mines, taking into account their constructive features. A detector node based on the Analog Devices AD8302 amplitude and phase detector has been developed for the experimental studies. The node has two inputs. At one input, the node receives a sinusoidal signal from the generator, to which a transmitting coil is also connected. The receiver coil is attached to the second input of the node. An additional circuit provides the option to amplify the signal output from the receiver coil by 20 dB. The node has two outputs. The voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed at different positions of the transmitter and receiver coils over the frequency range 1-20 MHz. A Tektronix AFG3052C arbitrary/function generator and an eight-channel high-resolution PicoScope 4824 oscilloscope were used in the experiments. Experimental measurements were also performed with a low-conductivity test object. The results of the measurements and comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The results of the experimental measurements are compared and analyzed against the results of appropriate computer modeling based on the method of auxiliary sources (MAS). The experimental measurements are driven from the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (Grant number: NFR 17_523).
Keywords: EM induction sensing, detector, plastic mines, remote sensing
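A minimal sketch, in Python, of converting the two AD8302 output voltages back to an amplitude ratio and phase difference; the 30 mV/dB and 10 mV/° slopes about a 0.9 V centre are nominal datasheet figures assumed here, and the sign convention and readings are illustrative, so calibration constants for the actual detector node should replace them:

def ad8302_to_db_deg(v_mag, v_phs,
                     mag_slope=0.030, mag_center=0.900,
                     phs_slope=0.010, phs_center=0.900):
    """Convert AD8302 output voltages [V] to amplitude ratio [dB] and phase difference [deg].
    Slopes/centres are nominal (30 mV/dB, 10 mV/deg about 0.9 V); the phase sign
    convention depends on wiring and should be verified against a known reference."""
    gain_db = (v_mag - mag_center) / mag_slope
    phase_deg = 90.0 - (v_phs - phs_center) / phs_slope  # 0.9 V taken as ~90 deg
    return gain_db, phase_deg

print(ad8302_to_db_deg(0.96, 0.70))  # hypothetical readings -> (2.0 dB, 110 deg)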
Procedia PDF Downloads 149
481 Post-Soviet LULC Analysis of Tbilisi, Batumi and Kutaisi Using of Remote Sensing and Geo Information System
Authors: Lela Gadrani, Mariam Tsitsagi
Abstract:
Humans are part of the urban landscape and responsible for it. Urbanization of cities spans the longest phase of their development; thus, no other environment undergoes such anthropogenic impact as the area of large cities. The post-Soviet period is very interesting in terms of scientific research. The changes that have occurred in these cities since the collapse of the Soviet Union have not yet been analyzed, to the best of our knowledge. In this context, the aim of this paper is to analyze the changes in the land use of three large cities of Georgia (Tbilisi, Kutaisi, Batumi): Tbilisi as the capital city, Batumi as a port city, and Kutaisi as a former industrial center. The data used during the research process are conventionally divided into satellite and supporting materials. For this purpose, the largest-scale topographic maps (1:10,000) of all three cities were analyzed, along with the Tbilisi General Plans (1896, 1924) and historical maps of Tbilisi and Kutaisi. The main emphasis was placed on the classification of Landsat images. We classified the LULC (land use/land cover) of all three cities from images taken in 1987 and 2016, using supervised and unsupervised methods. All procedures were performed in ArcGIS 10.3.1 and ENVI 5.0. In each classification, we singled out the following classes: built-up area, water bodies, agricultural lands, green cover and bare soil, and calculated the areas occupied by them. In order to check the validity of the obtained results, we additionally used higher-resolution CORONA and Sentinel images. Ultimately, we identified the changes that took place in land use in the post-Soviet period in the above cities. According to the results, a large wave of changes touched Tbilisi and Batumi, though in different periods. In the case of Tbilisi, the area of developed territory has increased by 13.9% compared with the 1987 data, at the expense of agricultural land and green cover: the area of agricultural land has decreased by 4.97% and the green cover by 5.67%. It should be noted that Batumi has obviously overtaken the country's capital in terms of development. Even to the unaided eye it is clear that, in comparison with other regions of Georgia, Batumi is different; in fact, Batumi is the unofficial summer capital of Georgia. Undoubtedly, Batumi’s development is very important both in economic and social terms. However, there is a danger that, under such uneven conditions of urban development, we will eventually get a developed center, Batumi, and multiple underdeveloped peripheries around it. Analysis of the changes in land use is of utmost importance not only for quantitative evaluation of the changes already implemented, but also for future modeling and prognosis of urban development. Raster data containing the land use classes are an integral part of the city's prognostic models.
Keywords: analysis, geo information system, remote sensing, LULC
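A minimal sketch, in Python, of how per-class areas and their change between the 1987 and 2016 classifications could be computed from the classified rasters; the class codes and the 30 m Landsat pixel size are assumptions for illustration, not values taken from the study:

import numpy as np

PIXEL_AREA_HA = 30 * 30 / 10_000.0   # assumed 30 m x 30 m Landsat pixel, in hectares
CLASSES = {1: "built-up", 2: "water", 3: "agricultural", 4: "green cover", 5: "bare soil"}

def class_areas(classified, pixel_area=PIXEL_AREA_HA):
    """Area per LULC class [ha] from a classified raster of integer class codes."""
    codes, counts = np.unique(classified, return_counts=True)
    return {CLASSES.get(int(c), int(c)): n * pixel_area for c, n in zip(codes, counts)}

def area_change(raster_1987, raster_2016):
    """Absolute change in area per class between the two classification dates."""
    a87, a16 = class_areas(raster_1987), class_areas(raster_2016)
    return {cls: a16.get(cls, 0.0) - a87.get(cls, 0.0) for cls in set(a87) | set(a16)}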
Procedia PDF Downloads 451
480 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level
Authors: Matthias Stange
Abstract:
Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and different application regions as well as different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions of North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive data analysis of 6 independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on 6 dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated. The study revealed that most of the benefits of using BIM perceived through numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to the schedule targets. The quantitative research suggests the conclusion that the BIM planning method, in its current application, has not (yet) produced a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. In essence, the author suggests that further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry
Procedia PDF Downloads 73
479 Hybrid Manufacturing System to Produce 3D Structures for Osteochondral Tissue Regeneration
Authors: Pedro G. Morouço
Abstract:
One of the utmost challenges in tissue engineering is the production of 3D constructs capable of mimicking the functional hierarchy of native tissues. This is particularly true for osteochondral tissue, due to the complex mechanical functional unit based on the junction of articular cartilage and bone. Thus, the aim of the present study was to develop a new additive manufacturing system coupling micro-extrusion with hydrogel printing. An integrated system was developed with 2 main features: (i) the printing of up to three distinct hydrogels, (ii) in coordination with the printing of a thermoplastic structural support. The hydrogel printing module was designed with a 'revolver-like' system, where the hydrogel selection was made by a rotating mechanism. The hydrogel deposition was then controlled by pressurized air input. Specific components approved for medical use were incorporated in the material dispensing system (Nordson EFD Optimum® fluid dispensing system). The thermoplastic extrusion module enabled control of the required extrusion temperature through electric resistances in the polymer reservoir and the extrusion system. After testing and upgrades, a hydrogel module with 3 syringes (3 cm³ capacity each), a pressure range of 0-2.5 bar, a rotational speed of 0-5 rpm, and working with needles from 200-800 µm was obtained. This module was successfully coupled to the extrusion system, which provided temperatures up to 300 °C, a pressure range of 0-12 bar, and nozzles from 200-500 µm. The applied motor could provide a velocity range of 0-2000 mm/min. Although there are distinct printing requirements for hydrogels and polymers, the novel system could produce hybrid scaffolds combining the two modules. The morphological analysis showed high reliability (n=5) between the theoretical and obtained filament and pore sizes (350 µm and 300 µm vs. 342±4 µm and 302±3 µm, p>0.05, respectively) of the polymer, and multi-material 3D constructs were successfully obtained. Human tissues present very distinct and complex structures regarding their mechanical properties, organization, composition and dimensions. For osteochondral regenerative medicine, a multiphasic scaffold is required, as subchondral bone and overlying cartilage must regenerate at the same time. Thus, a scaffold with 3 layers (bone, intermediate and cartilage parts) can be a promising approach. The developed system may provide a suitable solution to construct such hybrid scaffolds with enhanced properties. The present novel system is a step forward in osteochondral tissue engineering due to its ability to generate layered, mechanically stable implants through the double-printing of hydrogels with thermoplastics.
Keywords: 3D bioprinting, bone regeneration, cartilage regeneration, regenerative medicine, tissue engineering
Procedia PDF Downloads 166
478 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane, or PEM, to maintain sufficiently high proton conductivity. On the other hand, too much liquid water present in the cathode can cause “flooding” (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid water bridge/plug (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, and film) of detected liquid water in the test microchannels and to yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software allows the user to obtain measurements from images of small objects in a more precise and systematic way. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
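A minimal sketch, in Python, of the void-fraction estimate from channel images; the segmentation step of the original MATLAB algorithm is not described in the abstract, so Otsu thresholding and a darker-pixels-are-liquid convention are assumed here purely for illustration:

import numpy as np
from skimage.filters import threshold_otsu

def liquid_area_fraction(frame, channel_mask):
    """Fraction of the gas-channel area occupied by liquid water in one grayscale frame.
    `channel_mask` is a boolean mask of the channel region; Otsu thresholding and the
    darker-pixels-are-liquid convention are assumptions, not the original algorithm."""
    pixels = frame[channel_mask].astype(float)
    t = threshold_otsu(pixels)
    liquid = pixels < t
    return liquid.sum() / pixels.size

def mean_void_fraction(frames, channel_mask):
    """Average gas (void) fraction over a sequence of frames."""
    return 1.0 - np.mean([liquid_area_fraction(f, channel_mask) for f in frames])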
Procedia PDF Downloads 312
477 Implementation of European Court of Human Rights Judgments and State Sovereignty
Authors: Valentina Tereshkova
Abstract:
The paper shows how the relationship between international law and national sovereignty is viewed through the implementation of European Court of Human Rights judgments. Methodology: Conclusions are based on a survey of representatives of the legislative authorities and judges of the Krasnoyarsk region, the Rostov region, the Sverdlovsk region and the Tver region. The paper assesses the activities of the Russian Constitutional Court from 1998 to 2015 related to the establishment of the implementation mechanism, as well as the Russian Constitutional Court judgments of 14.07.2015, № 21-P, and of 19.04.2016, № 12-P, where the Constitutional Court stated the impossibility of executing ECtHR judgments. I. Implementation of ECtHR judgments by courts and other authorities. Despite the publication of the report of the RF Ministry of Justice on implementation, we could not find any formal information on Russian policy regarding the implementation of ECtHR judgments. Using the results of the survey, the paper shows the effect of ECtHR judgments on law and legal practice in Russia. II. Implementation of ECtHR judgments by the Russian Constitutional Court. The Russian Constitutional Court had implemented ECtHR judgments. However, on July 14, 2015, the Court asserted its competence to consider the question of the implementation of ECtHR judgments. Then, on April 19, 2016, it stated that the execution of the judgment [Anchugov and Gladkov case] was impossible because the Russian Constitution has the highest legal force. Recently, the CE Committee of Ministers asked Russia to provide ‘without further delay’ a compensation plan for the Yukos case. On November 11, 2016, the Constitutional Court accepted a request from the Ministry of Justice to consider the possibility of executing the ECtHR judgment in the Yukos case. Such a request has been made possible due to the lack of an implementation mechanism. Conclusion: ECtHR judgments are an effective tool for solving the structural problems of a legal system. However, Russian experts consider the ECHR a tool for the protection of individual rights. The paper shows the link between the survey results and the absence of an implementation mechanism. The new Article 104 par. 2 and Article 106 par. 2 of the Federal Law on the Constitutional Court are in conflict with international obligations under the Convention on the Law of Treaties 1969 and Article 46 ECHR. Nevertheless, a dialogue may be possible between the Constitutional Court and the ECtHR. In its judgment [19.04.2016], the Constitutional Court determined that general measures to ensure fairness, proportionality and differentiation of the restrictions of voting rights were possible in judicial practice. It also stated that the federal legislator had the power ‘to optimize the system of Russian criminal penalties’. Despite the fact that the Constitutional Court presented the Görgülü case [Görgülü v Germany] as an example of non-execution of an ECtHR judgment, the paper proposes to draw on the experience of the German Constitutional Court, which, in the Görgülü case, on the one hand stressed national sovereignty and, on the other hand, took advantage of this sovereignty to resolve the issue in accordance with the ECHR.
Keywords: implementation of ECtHR judgments, sovereignty, supranational jurisdictions, principle of subsidiarity
Procedia PDF Downloads 194
476 In vitro Antioxidant, Anti-Diabetic and Nutritional Properties of Breynia retusa
Authors: Parimelazhagan Thangaraj
Abstract:
Natural products serve humankind as a source of drugs, and higher plants provide most of these therapeutic agents. These products are widely recognized in the pharmaceutical industry for their broad structural diversity as well as their wide range of pharmacological activities. Euphorbiaceae is one of the important families with significant pharmacological activities, many species of which have been used traditionally for the treatment of various ailments. Breynia retusa, belonging to the family Euphorbiaceae, is used to cure ailments like body pain, skin inflammation, hyperglycaemia, diarrhoea, dysentery and toothache. Flowers and young leaves of B. retusa are cooked and eaten, and roots are used for meningitis. The juice of the stem is used in conjunctivitis, and the leaves as a poultice to hasten suppuration. Based on the strong evidence of traditional uses of Breynia retusa, the present study focused on the nutraceutical evaluation of the species with special reference to oxidative stress and diabetes. Both leaves and stem of B. retusa were extracted with different solvents and analyzed for radical scavenging ability, wherein the ABTS.+ (8396.95±1529.01 µM TEAC/g extract), phosphomolybdenum (17.34±0.08 g AAE/100 g extract) and FRAP (6075.66±414.28 µM Fe (II) E/mg extract) assays showed good radical scavenging activity in the stem. Furthermore, leaf extracts showed good radical inhibition in the DPPH (2.4 µg/mL) and metal ion (27.44±0.09 mg EDTAE/g extract) scavenging methods. α-Amylase and α-glucosidase inhibitors are currently used for diabetes treatment as oral hypoglycemic agents. The B. retusa leaf and stem ethyl acetate extracts showed good inhibition of the α-amylase (96.25% and 95.69%, respectively) and α-glucosidase (54.50% and 50.87%, respectively) enzymes compared with the standard acarbose. The proximate composition analysis showed that B. retusa leaves contain high amounts of total carbohydrates (14.08 g glucose equivalents/100 g sample), ash (19.04%) and crude fibre (0.52%). Examination of the mineral profile revealed that the leaves are rich in calcium (1891 ppm), sulphur (1406 ppm), copper (2600 ppm) and magnesium (778 ppm). The leaf sample revealed very minimal amounts of anti-nutrient contents such as trypsin inhibitor (14.08±0.03 TIU/mg protein) and tannin (0.011±0.001 mg TAE/g sample). The low levels of anti-nutritional factors may not pose any serious nutritional problems when these leaves are consumed. In conclusion, it is very clear that dietary compounds from B. retusa are suitable and promising for the development of safe food products and natural additives. Based on these studies of its nutritional composition and antioxidant and anti-diabetic activities, it may be concluded that this species can be used as a future therapeutic medicine.
Keywords: Breynia retusa, nutraceuticals, antioxidant, anti diabetic
Procedia PDF Downloads 332
475 Tectonic Setting of Hinterland and Foreland Basins According to Tectonic Vergence in Eastern Iran
Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat
Abstract:
Various tectonic interpretations have been presented by different researchers to explain the geological evolution of eastern Iran, but there are still many ambiguities and disagreements about the geodynamic nature of the Paleogene mountain range of eastern Iran. The purpose of this research is to clarify and discuss the tectonic position of the foreland and hinterland regions of eastern Iran from the tectonic perspective of sedimentary basins. In the tectonic model of oceanic crust subduction under the Afghan block, the hinterland is located to the east, on the Afghan block, and the foreland is located on the passive margin of the open Sistan Ocean in the west. After the collision of the two microcontinents, the foreland basin must be located somewhere on the passive margin of the Lut block. This basin can accumulate thick Paleocene to Oligocene sediments on the Cretaceous and older sediments. Thrust faults here will move towards the west. If we accept the subduction model of the Sistan Ocean under the Lut block, the hinterland is located to the west, towards the Lut block, and the foreland basin is located towards the Sistan Ocean in the east. After the collision of the two microcontinents, the foreland basin with its Paleogene sediments should develop over the Sefidaba basin. Thrust faults here will move towards the east. If we consider the two-sided subduction model of the oceanic crust under both the Lut and Afghan continental blocks, the tectonic positions of the foreland and hinterland basins will not change and will be similar to those of the one-sided subduction models. After the collision of the two microcontinents, the foreland basin should develop in the central part of the eastern Iranian orogen. In the oroclinic buckling model, the foreland basin will continue not only in the east and west but continuously in the north as well. In this model, since there is practically no collision, the foreland basin is not developed, and the remnants of the Sistan Ocean ophiolites and their deep turbidite sediments appear in the axial part of the mountain range, where the Neh and Khash complexes are located. The structural data from this research along the northern border of the Sistan belt and the Lut block indicate convergence of the tectonic vergence directions towards the interior of the Sistan belt (in the Ahangaran area towards the southwest, north of Birjand towards the south-southeast, and in the Sechengi area towards the southeast). According to this research, not only does the general movement of thrust sheets not follow the linear orogeny models, but the expected active foreland basins have also not formed in the mentioned places in eastern Iran. Therefore, these results do not support previous tectonic models for eastern Iran (i.e., rifting of the eastern Iran continental crust and subsequent linear collision of the Lut and Afghan blocks); instead, it seems that the belt was shaped by oroclinic buckling in the Late Eocene-Oligocene.
Keywords: foreland, hinterland, tectonic vergence, orocline buckling, eastern Iran
Procedia PDF Downloads 67
474 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements
Authors: Dragan Ribarić
Abstract:
We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters, and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters, and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first is modelled with one displacement and two rotation degrees of freedom at each of the four element nodes; the second element has five additional internal degrees of freedom, which provide polynomial completeness of the cubic form and can be statically condensed within the element. Both elements are able to pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the solid body motions are zero. There are no additional spurious zero modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to other known elements that also use the quadratic or the cubic linked interpolation, by testing them on several benchmark examples. A simple functional upgrade from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement over the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms, which complete the full cubic linked interpolation form, are incorporated in the displacement and rotation fields is a qualitative improvement achieved in the Q4-U3R5 element. Nevertheless, the locking problem exists for both presented elements, as in all pure displacement elements, when applied to very thin plates modelled by coarse meshes. However, good and even slightly better performance can be observed for the Q4-U3R5 element when compared with elements from the literature, provided the model meshes are moderately dense and the plate thickness is not extremely thin. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modelling very skew plates and models with singularities in the stress fields, as well as circular plates with distorted meshes.
Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements
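As a side note on the rank statement above: the usual way to verify that an element has no spurious zero-energy modes is to eigendecompose its stiffness matrix and count the near-zero eigenvalues, which for a 12-DOF Mindlin plate element should equal the three rigid-body motions. The sketch below is a generic check, not the authors' code; the assembly routine named in the comment is hypothetical.

```python
# Illustrative eigenvalue/rank check for spurious zero-energy modes.
import numpy as np

def count_zero_energy_modes(K: np.ndarray, tol: float = 1e-8) -> int:
    """Count eigenvalues of the (symmetric) element stiffness matrix that are ~zero."""
    eigvals = np.linalg.eigvalsh(K)
    scale = max(np.abs(eigvals).max(), 1.0)
    return int(np.sum(np.abs(eigvals) < tol * scale))

# Usage (hypothetical element routine):
#   K = assemble_element_stiffness(node_coords, thickness, material)
#   assert count_zero_energy_modes(K) == 3   # only the three rigid-body modes
```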
Procedia PDF Downloads 312
473 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer
Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz
Abstract:
Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) is primarily derived from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchanges, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s2 lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility to form secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new four-nuclear Pb(II) polymer, [Pb4(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb2 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination number of Pb centres increased, Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the Pb(II) lone pair stereoactivity was confirmed by DFT calculations. The 2D structure was expanded into 3D by the existence of non-covalent O/C–H···π and Pb···π interactions, which was confirmed by the Hirshfeld surface analysis. The above mentioned interactions improve the rigidity of the structure and facilitate the charge and energy transfer between metal centres, making the polymer a promising luminescent compound.Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions
Procedia PDF Downloads 145
472 Developing a Roadmap by Integrating Environmental Indicators with the Nitrogen Footprint in an Agriculture Region, Hualien, Taiwan
Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata
Abstract:
The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen related compounds such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and the nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated the N-footprint is dominated by food, followed by housing, transportation, and goods and services sectors. To solve the impact issues from agricultural land, nitrogen cycle research is one of the key solutions. The study site is located in Hualien County, Taiwan, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted for years, and an environmental indicator system has been established by previous authors based on the concept of resilience capacity index (RCI) and environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. Therefore it is very important to develop a roadmap of the nitrogen footprint, and to integrate it with environmental indicators. The key focus of the study thus addresses (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, which included both crops and energy consumption in the area. All data were adapted from government statistics databases and crosschecked for consistency before modeling. The actions involved with agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as measuring the impacts to humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%. The dominant meat production is pork (52%) and poultry (40%); fish and seafood were at similar levels to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita−1 d−1, primarily from rice (430.58 kcal), meats (184.93 kcal) and wheat (ca. 356.44 kcal). The average protein uptake is 87.34 g capita−1 d−1, and 51% is mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan. These results provide a better understanding of the nitrogen demand and loss in the environment, and the roadmap can furthermore support the establishment of nitrogen policy and strategy. Additionally, the results serve to develop a roadmap of the nitrogen cycle of an environmentally friendly farming area, thus illuminating the nitrogen demand and loss of such areas.Keywords: agriculture productions, energy consumption, environmental indicator, nitrogen footprint
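For orientation, the intake side of a food N-footprint is commonly estimated from protein consumption using the standard assumption that protein is about 16% nitrogen (N = protein/6.25). The sketch below applies that conversion to the per-capita protein uptake cited above; it is a simplified illustration, and the full footprint of 34 kg N per capita per year additionally includes nitrogen lost during food production and energy use, which this sketch does not model.

```python
# Simplified sketch (assumption: standard protein-to-N factor of 6.25), not the study's model.
PROTEIN_TO_N = 1 / 6.25                  # g N per g protein (~16 % N in protein)

protein_g_per_day = 87.34                # average per-capita protein uptake cited above
n_intake_kg_per_year = protein_g_per_day * PROTEIN_TO_N * 365 / 1000
print(f"{n_intake_kg_per_year:.1f} kg N consumed per capita per year")   # ~5.1 kg N
# Production losses (virtual N factors), housing, and transport add the remainder
# of the reported per-capita N footprint.
```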
Procedia PDF Downloads 302
471 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the forces resisting movement are greater than those driving the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through the geographical information system (GIS) and inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
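To make the limit-equilibrium idea concrete, the sketch below evaluates the classic infinite-slope factor of safety, i.e. the ratio of resisting shear strength to driving shear stress on a plane parallel to the slope surface. It is one textbook expression among many limit-equilibrium formulations, not the method developed in the project described above, and the soil parameters used are hypothetical.

```python
# Minimal limit-equilibrium sketch: infinite-slope factor of safety (FS > 1.0 => stable).
import math

def infinite_slope_fs(c: float, phi_deg: float, gamma: float,
                      z: float, beta_deg: float, u: float = 0.0) -> float:
    """c cohesion [kPa], phi friction angle [deg], gamma unit weight [kN/m^3],
    z failure-plane depth [m], beta slope angle [deg], u pore pressure [kPa]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Hypothetical parameters, for illustration only.
print(round(infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, z=3.0, beta_deg=25.0), 2))  # ~1.47
```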
Procedia PDF Downloads 100
470 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries
Authors: Fang Li, Jiazhao Wang, Jianmin Ma
Abstract:
The practical application of Li-S batteries is hampered by the poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, such as strong chemical adsorption ability and high conductivity, are highly desired for an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and its strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials show strong interaction with dissolved polysulfides, which suppresses the shuttle effect and improves the cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of sulfur materials and restrains the volume expansion, enhancing the structural stability during the cycling process. To further enhance the cycling stability, a PPy-coated separator was also applied, which confines polysulfides to the cathode side and thereby alleviates the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery has been designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential application in flexible batteries. A novel vapor phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite. The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that polypyrrole can trap lithium polysulfides through stronger bonding energy. In addition, the deflation of the sulfur/graphene hydrogel during the vapor phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, showing an ultra-slow decay rate of 0.03% per cycle.
Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries
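The per-cycle decay rate quoted above is conventionally computed as the average fractional capacity loss per cycle. The abstract does not state the initial 0.5 C capacity, so the sketch below assumes roughly 820 mAh g⁻¹, a value chosen only so that the arithmetic reproduces the reported ~0.03% per cycle.

```python
# Back-of-the-envelope sketch: average capacity decay per cycle.
# The initial 0.5 C capacity (~820 mAh/g) is an assumption, not a reported value.

def decay_rate_per_cycle(c_initial: float, c_final: float, cycles: int) -> float:
    return (1 - c_final / c_initial) / cycles * 100.0

print(round(decay_rate_per_cycle(c_initial=820.0, c_final=698.0, cycles=500), 3))  # ~0.03 % per cycle
```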
Procedia PDF Downloads 140
469 (Re)Processing of Nd-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches
Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm
Abstract:
Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and for overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing still remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach the end-of-life NdFeB magnet and to electrodeposit Nd-Fe using a 1-ethyl-3-methylimidazolium dicyanamide ([EMIM][DCA]) ionic liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations. This is because the binding energies of metallic Nd (Nd0) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step towards efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept for recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for the production of new types of magnets. Within the framework of physical reprocessing, we have successfully synthesized new magnets from hydrogen (HDDR)-recycled stock using the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity Hc = 1060 kA/m, which was boosted to 1160 kA/m after post-PECS thermal treatment. The Br and Hc were improved further: increased applied pressures of 100-150 MPa resulted in Br = 1.01 T. We showed that, with fine-tuning of the PECS process and post-annealing, it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e., atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.
Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing
Procedia PDF Downloads 157
468 Using a Card Game as a Tool for Developing a Design
Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner
Abstract:
Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective, that will allow us to view boundaries between both online and offline as well as formal and informal or hybrid contexts as permeable and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. Therefore, the subproject of the University of Erfurt investigates school music lessons with the help of interviews, videography, and network maps by analyzing new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices are circulating in the current literature. Results from this stage of the project form the basis for the first instructional design that will help teachers in planning regular music classes and subsequently reconstruct musical learning practices under these conditions. In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students themselves can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.Keywords: card game, collective songwriting, community of practice, network, postdigital
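As a purely illustrative aside, the deck described above combines topic, structure, and action cards that a teacher or student assembles into a lesson outline. The small data-model sketch below shows one way such a deck could be represented; the card texts and category names are invented placeholders, not items from the MusCoDA deck.

```python
# Hypothetical data-structure sketch of a topic/structure/action card deck.
import random
from dataclasses import dataclass

@dataclass
class Card:
    category: str   # "topic", "structure" or "action"
    text: str

deck = [
    Card("topic", "Write a song about your neighbourhood"),
    Card("topic", "Write a protest song"),
    Card("structure", "Verse - chorus - verse - chorus - bridge"),
    Card("structure", "Build the song from a four-bar loop"),
    Card("action", "Record a voice memo of the first idea"),
    Card("action", "Swap lyrics with another group and continue"),
]

def plan_lesson(cards: list[Card]) -> dict[str, str]:
    """Draw one card per category, mirroring how a lesson outline might be assembled."""
    return {cat: random.choice([c for c in cards if c.category == cat]).text
            for cat in ("topic", "structure", "action")}

print(plan_lesson(deck))
```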
Procedia PDF Downloads 64
467 Phylogenetic Inferences based on Morphoanatomical Characters in Plectranthus esculentus N. E. Br. (Lamiaceae) from Nigeria
Authors: Otuwose E. Agyeno, Adeniyi A. Jayeola, Bashir A. Ajala
Abstract:
P. esculentus is indigenous to Nigeria, yet no wild relative has been encountered or reported. This has made it difficult to establish proper lineages between the varieties and landraces under cultivation. The present work is the first to determine the apomorphy of 135 morphoanatomical characters in organs of 46 accessions drawn from 23 populations of this species based on dicta. The character states were coded in accession x character-state matrices; only 83 were informative and were utilised for neighbour-joining clustering based on Euclidean distances and for heuristic search in parsimony analysis using PAST ver. 3.15 software. Compatibility and evolutionary trends between accessions were then explored from the values and diagrams produced. The low consistency indices (CI) recorded support monophyly and low homoplasy in this taxon. Agglomerative schedules based on character type and source data sets divided the accessions into mainly 3 clades, each consisting of complexes of accessions. Solenostemon rotundifolius (Poir) J.K. Morton was the outgroup (OG) used, and it occurred within the largest clades except when the characters were combined in a single data set. The OG showed better compatibility with accessions of populations of landrace Isci and of varieties Riyum and Long'at. Otherwise, its aerial parts are more consistent with those of accessions of variety Bebot. The highly polytomous clades produced by the anatomical data set may be an indication of how stable such characters are in this species. Strict consensus trees with more than 60 nodes showed that the basal nodes were strongly supported by 3 to 17 characters across the data sets, suggesting that populations of this species are very much alike. The OG was clearly the first diverging lineage, closely related to accessions of landrace Gwe and variety Bebot morphologically but different from them anatomically. It was also distantly related to landrace Fina and variety Long'at in terms of root, stem and leaf structural attributes. There were at least 5 other clades, each comprising accessions from different localities and terrains within the study area. The spherical stem in cross section, the size of the vascular bundles at the stem corners, as well as the alternate and whorled phyllotaxy, are attributes which may have facilitated each other's evolution in all accessions of the landrace Gwe, and they may be innovative since such states are not characteristic of the larger Lamiaceae, and of Plectranthus L'Her in particular. In conclusion, this study has provided valuable information about infraspecific diversity in this taxon. It supports recognition of the varietal statuses accorded to populations of P. esculentus, as well as the hypothesis that the wild gene might have been distributed on the Jos Plateau. However, molecular characterisation of accessions of populations of this species would resolve this problem better.
Keywords: clustering, lineage, morphoanatomical characters, Nigeria, phylogenetics, Plectranthus esculentus, population
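The study performed its neighbour-joining clustering in PAST; as a rough illustration of the same idea (Euclidean distances computed from coded character states, then fed to a neighbour-joining constructor), the sketch below uses Biopython instead. The accession names and the tiny 0/1 character matrix are hypothetical.

```python
# Illustrative NJ-on-Euclidean-distances sketch (not the authors' PAST workflow).
import numpy as np
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["acc1", "acc2", "acc3", "acc4"]
chars = np.array([[0, 1, 1, 0],      # rows: accessions, columns: coded character states
                  [0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [1, 0, 0, 1]], dtype=float)

# Lower-triangular Euclidean distance matrix, the format Biopython expects.
matrix = [[float(np.linalg.norm(chars[i] - chars[j])) for j in range(i)] + [0.0]
          for i in range(len(names))]

tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
print(tree)
```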
Procedia PDF Downloads 135
466 Towards a Strategic Framework for State-Level Epistemological Functions
Authors: Mark Darius Juszczak
Abstract:
While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor the consequences of it have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history where a proxy state-level epistemological strategy existed was between 1961 and 1969 when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex so that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper also seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, DHS (Department of Homeland Security) evolved to respond to a new type of security threat in the world for the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program
Procedia PDF Downloads 77
465 Poly(propylene fumarate) Copolymers with Phosphonic Acid-based Monomers Designed as Bone Tissue Engineering Scaffolds
Authors: Görkem Cemali̇, Avram Aruh, Gamze Torun Köse, Erde Can ŞAfak
Abstract:
In order to heal bone disorders, the conventional methods which involve the use of autologous and allogenous bone grafts or permanent implants have certain disadvantages such as limited supply, disease transmission, or adverse immune response. A biodegradable material that acts as structural support to the damaged bone area and serves as a scaffold that enhances bone regeneration and guides bone formation is one desirable solution. Poly(propylene fumarate) (PPF) which is an unsaturated polyester that can be copolymerized with appropriate vinyl monomers to give biodegradable network structures, is a promising candidate polymer to prepare bone tissue engineering scaffolds. In this study, hydroxyl-terminated PPF was synthesized and thermally cured with vinyl phosphonic acid (VPA) and diethyl vinyl phosphonate (VPES) in the presence of radical initiator benzoyl peroxide (BP), with changing co-monomer weight ratios (10-40wt%). In addition, the synthesized PPF was cured with VPES comonomer at body temperature (37oC) in the presence of BP initiator, N, N-Dimethyl para-toluidine catalyst and varying amounts of Beta-tricalcium phosphate (0-20 wt% ß-TCP) as filler via radical polymerization to prepare composite materials that can be used in injectable forms. Thermomechanical properties, compressive properties, hydrophilicity and biodegradability of the PPF/VPA and PPF/VPES copolymers were determined and analyzed with respect to the copolymer composition. Biocompatibility of the resulting polymers and their composites was determined by the MTS assay and osteoblast activity was explored with von kossa, alkaline phosphatase and osteocalcin activity analysis and the effects of VPA and VPES comonomer composition on these properties were investigated. Thermally cured PPF/VPA and PPF/VPES copolymers with different compositions exhibited compressive modulus and strength values in the wide range of 10–836 MPa and 14–119 MPa, respectively. MTS assay studies showed that the majority of the tested compositions were biocompatible and the overall results indicated that PPF/VPA and PPF/VPES network polymers show significant potential for applications as bone tissue engineering scaffolds where varying PPF and co-monomer ratio provides adjustable and controllable properties of the end product. The body temperature cured PPF/VPES/ß-TCP composites exhibited significantly lower compressive modulus and strength values than the thermal cured PPF/VPES copolymers and were therefore found to be useful as scaffolds for cartilage tissue engineering applications.Keywords: biodegradable, bone tissue, copolymer, poly(propylene fumarate), scaffold
Procedia PDF Downloads 166
464 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or "long read" sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and to annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds are generally due to repeats (e.g., rRNA genes) and are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been the environment (could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence.
Keywords: genomics, pathogens, genome assembly, superbugs
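The abstract does not name a specific machine-learning algorithm, so the sketch below is only a conceptual illustration of the classification step it describes: genome-derived presence/absence features used to predict a virulence label. The feature matrix and labels are synthetic placeholders, and the random-forest choice is an assumption, not the project's stated method.

```python
# Conceptual sketch: classifying isolates from binary genome-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50))      # 200 isolates x 50 gene presence/absence features (synthetic)
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # toy label: "virulent" if several marker genes present

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy on the toy data
```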
Procedia PDF Downloads 197
463 Synthesis, Molecular Modeling and Study of 2-Substituted-4-(Benzo[D][1,3]Dioxol-5-Yl)-6-Phenylpyridazin-3(2H)-One Derivatives as Potential Analgesic and Anti-Inflammatory Agents
Authors: Jyoti Singh, Ranju Bansal
Abstract:
Fighting pain and inflammation is a common problem faced by physicians while dealing with a wide variety of diseases. Since ancient times, nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids have been the cornerstone of treatment; however, the usefulness of both these classes is limited due to severe side effects. NSAIDs, which are mainly used to treat mild to moderate inflammatory pain, induce gastric irritation and nephrotoxicity, whereas opioids show an array of adverse reactions such as respiratory depression, sedation, and constipation. Moreover, repeated administration of these drugs induces tolerance to the analgesic effects and physical dependence. The later discovery of selective COX-2 inhibitors (coxibs) suggested safety without ulcerogenic side effects; however, long-term use of these drugs resulted in kidney and hepatic toxicity along with an increased risk of secondary cardiovascular effects. The basic approaches towards the treatment of inflammation and pain are constantly changing, and researchers are continuously trying to develop safer and more effective anti-inflammatory drug candidates for the treatment of different inflammatory conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriasis and multiple sclerosis. Synthetic 3(2H)-pyridazinones constitute an important scaffold for drug discovery. Structure-activity relationship studies on pyridazinones have shown that attachment of a lactam at N-2 of the pyridazinone ring through a methylene spacer results in significantly increased anti-inflammatory and analgesic properties of the derivatives. Further introduction of a heterocyclic ring at the lactam nitrogen improves the biological activities. Keeping these SAR studies in mind, a new series of compounds was synthesized, as shown in Scheme 1, and investigated for anti-inflammatory, analgesic and anti-platelet activities as well as docking studies. The structures of the newly synthesized compounds were established by various spectroscopic techniques. All the synthesized pyridazinone derivatives exhibited potent anti-inflammatory and analgesic activity. The homoveratryl-substituted derivative was found to possess the highest anti-inflammatory and analgesic activity, displaying 73.60% inhibition of edema at 40 mg/kg with no ulcerogenic activity when compared to the standard drug indomethacin. Moreover, the 2-substituted-4-benzo[d][1,3]dioxole-6-phenylpyridazin-3(2H)-one derivatives did not produce significant changes in bleeding time and emerged as safe agents. Molecular docking studies also illustrated good binding interactions at the active site of the cyclooxygenase-2 (hCOX-2) enzyme.
Keywords: anti-inflammatory, analgesic, pyridazin-3(2H)-one, selective COX-2 inhibitors
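As an illustration of how an edema-inhibition percentage such as the 73.60% quoted above is typically calculated (for example in a carrageenan-induced paw oedema assay, which is an assumption here since the abstract does not name the model), the sketch below applies the standard formula to hypothetical paw-volume increases.

```python
# Illustrative sketch: percent inhibition of edema relative to the untreated control.
# The volume increases below are hypothetical, chosen to reproduce ~73.6 %.

def edema_inhibition(delta_control: float, delta_treated: float) -> float:
    """Percent inhibition = (control increase - treated increase) / control increase * 100."""
    return (delta_control - delta_treated) / delta_control * 100.0

print(round(edema_inhibition(delta_control=0.86, delta_treated=0.227), 2))   # ~73.6 %
```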
Procedia PDF Downloads 200