Search results for: short-time quaternion offset linear canonical transform
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5024

1004 Characteristics and Drivers of Greenhouse Gas (GHG) emissions from China’s Manufacturing Industry: A Threshold Analysis

Authors: Rong Yuan, Zhao Tao

Abstract:

Only a handful of studies have used non-linear models to investigate the factors influencing greenhouse gas (GHG) emissions in China’s manufacturing sectors, and the mechanism linking economic development and GHG emissions has rarely been examined quantitatively and systematically while accounting for inherent differences among manufacturing sub-sectors. Given these sectoral characteristics, the varying impact of output on GHG emissions across sub-sectors may be explained by differences in development mode, such as investment scale, technology level and the level of international competition. In order to assess the environmental impact associated with any specific level of economic development and to explore the factors that affect GHG emissions in China’s manufacturing industry during the process of economic growth, this paper applied the threshold Stochastic Impacts by Regression on Population, Affluence and Technology (STIRPAT) model to investigate the drivers of GHG emissions in China’s manufacturing sectors at different stages of economic development. A data set from 28 manufacturing sectors covering an 18-year period was used. Results demonstrate that output per capita and investment scale increase GHG emissions, while energy efficiency, R&D intensity and FDI mitigate them. Results also verify the non-linear effect of output per capita on emissions: (1) the Environmental Kuznets Curve (EKC) hypothesis is supported once the threshold of RMB 31.19 million is surpassed; (2) the driving strength of output per capita on GHG emissions becomes stronger as investment scale increases; (3) a threshold exists for energy efficiency, with a positive coefficient first and a negative coefficient later; (4) the coefficient of output per capita on GHG emissions decreases as R&D intensity increases; and (5) the elasticity of FDI falls once its threshold is crossed.
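The threshold-regression machinery behind a model like this can be sketched in a few lines. The example below is an illustrative toy (synthetic data, a single kink, grid search for the threshold), not the paper's STIRPAT specification or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: log-emissions respond to log-output with one
# slope below a threshold and a steeper slope above it.
x = rng.uniform(2.0, 5.0, 200)            # log output per capita
true_tau = 3.5
y = 0.8 * x + 0.6 * np.maximum(x - true_tau, 0.0) + rng.normal(0, 0.05, 200)

def sse_for_threshold(tau):
    # Two-regime design matrix: intercept, x, and the excess over the threshold
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - tau, 0.0)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return float(resid @ resid)

# Grid-search the threshold by minimising the sum of squared errors,
# as threshold regressions commonly do
taus = np.linspace(2.5, 4.5, 201)
best_tau = min(taus, key=sse_for_threshold)
print(round(float(best_tau), 2))
```

The fitted threshold recovers the kink location, and the two slope estimates on either side of it play the role of the regime-specific elasticities reported in the abstract.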

Keywords: China, GHG emissions, manufacturing industry, threshold STIRPAT model

Procedia PDF Downloads 426
1003 Factors Associated with Overweight and Obesity among Recipients of Antiretroviral Therapy at HIV Clinics in Botswana

Authors: Jose G. Tshikuka, Goabaone Rankgoane-Pono, Mgaywa G. M. D. Magafu, Julius C. Mwita, Tiny Masupe, Fortunat M. Kandanda, Shimeles G. Hamda, Roy Tapera, Mooketsi Molefi, John T. Tlhakanelo

Abstract:

Background: Factors associated with overweight and obesity among antiretroviral therapy (ART) recipients have not been sufficiently studied in Botswana. We aimed to study (i) the prevalence of and trends in overweight/obesity by duration of exposure to ART among recipients, (ii) changes in body mass index (BMI) categories among recipients before ART initiation (BMI-1) and after ART initiation (BMI-2), (iii) associations between ART and overweight/obesity, and (iv) factors associated with BMI changes among ART recipients. Methods: A 12-year retrospective record-based review was conducted. Factors potentially associated with BMI change among patients after at least three years of ART exposure were examined using a multiple regression model. Adjusted odds ratios (AOR) and their 95% confidence intervals (CIs) were computed. ART regimens, duration of exposure to ART, and recipients’ demographic and biomedical characteristics, including the presence or absence of diabetes mellitus related comorbidities (DRC), were investigated as potential factors associated with overweight/obesity. Results: At the last clinic visit, 29% of recipients were overweight and 16.6% were obese, of whom 2.4% were morbidly obese. Overweight/obese recipients were more likely to be female and to have DRC, and less likely to have a nadir CD4 count or a CD4 count between 201–249 cells/mm³. Neither the first-line nor the second- or third-line ART regimens predicted overweight/obesity more than the others, and neither did the duration of exposure to ART. No significant linear trends were observed in the prevalence of overweight/obesity by duration of exposure to ART. Conclusions: These results indicate that overweight/obesity seen among ART recipients is not directly induced by ART. Rather, ART acted through the CD4 and/or DRC pathways to induce the overweight/obesity seen among recipients, suggesting that the weight gain documented herein is likely a reflection of improved health status that mirrors trends in the general population, or a DRC-related effect. Weight management programs may be important components of HIV care.
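The adjusted-odds-ratio computation used in studies like this can be sketched with a hand-rolled logistic regression. Everything below is synthetic and illustrative (hypothetical effect sizes for female sex and DRC), not the study's records:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: outcome = overweight/obese (0/1), predictors = female
# sex and presence of DRC, with known true log-odds coefficients.
n = 5000
female = rng.integers(0, 2, n)
drc = rng.integers(0, 2, n)
logit = -1.0 + 0.7 * female + 0.9 * drc
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression by Newton-Raphson (IRLS)
X = np.column_stack([np.ones(n), female, drc])
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))        # fitted probabilities
    grad = X.T @ (y - mu)                   # score vector
    H = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    beta += np.linalg.solve(H, grad)

# Adjusted odds ratios: exponentiated coefficients, each adjusted for the other
aor = np.exp(beta[1:])
print(aor.round(2))
```

Each exponentiated coefficient is an AOR in the sense of the abstract: the odds multiplier for that factor holding the other predictors fixed.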

Keywords: overweight/obesity, recipients of antiretroviral therapy, HIV/AIDS, Botswana

Procedia PDF Downloads 155
1002 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions

Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak

Abstract:

In this study, the solubility of Chevreul’s salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul’s salt used in this work was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration, 0.4 M; copper sulphate concentration, 0.25 M; temperature, 60°C; stirring speed, 600 rev/min; pH, 4; reaction time, 15 min). The parameters selected as affecting solubility were reaction temperature, ammonium chloride concentration, stirring speed, and solid/liquid ratio. The experimental results were correlated by linear regression implemented in the statistical package Statistica. The effect of each parameter on the solubility of Chevreul’s salt was examined, and the integrated rate expression of the dissolution was determined using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul’s salt increases with increasing temperature, ammonium chloride concentration and stirring speed, and decreases with increasing solid/liquid ratio. Fitting the experimental results to the kinetic models indicates that the dissolution rate of Chevreul’s salt is controlled by diffusion through the ash (product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, including the effects of the parameters on the dissolution of Chevreul's salt, was found to be: 1 − 3(1 − X)^(2/3) + 2(1 − X) = [2.96×10¹³ · (C_A)^(3.08) · (S/L)^(−0.38) · (W)^(1.23) · e^(−9001.2/T)] · t
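The reported model can be made concrete in a few lines. The sketch below implements the ash-layer diffusion expression g(X) = 1 − 3(1 − X)^(2/3) + 2(1 − X) and the fitted rate constant from the abstract, then inverts g numerically to get the dissolved fraction at a given k·t; the example k·t value is arbitrary, and the units of the rate constant follow the original study:

```python
import math

def g(X):
    # Ash/product-layer diffusion control model for shrinking-core kinetics
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

def k(CA, SL, W, T):
    # Fitted rate constant from the abstract (units as in the original study):
    # k = 2.96e13 * CA^3.08 * (S/L)^-0.38 * W^1.23 * exp(-9001.2 / T)
    return 2.96e13 * CA**3.08 * SL**-0.38 * W**1.23 * math.exp(-9001.2 / T)

def conversion(kt):
    # Invert g(X) = k*t for X by bisection; g is monotone increasing on [0, 1]
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(80):
        mid = (lo + hi) / 2
        if g(mid) < kt:
            lo = mid
        else:
            hi = mid
    return lo

# Example: fraction of Chevreul's salt dissolved when k*t = 0.1
print(round(conversion(0.1), 3))
```

Note that the positive exponents on concentration and stirring speed, and the Arrhenius factor, make k (and hence the dissolution rate) increase with those parameters, while the negative exponent on S/L makes it decrease, consistent with the trends stated in the abstract.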

Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics

Procedia PDF Downloads 304
1001 Development of Noninvasive Method to Analyze Dynamic Changes of Matrix Stiffness and Elasticity Characteristics

Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Dobdin, Anatoly Skripal, Andrey Usanov, Dmitry Usanov

Abstract:

One of the most important unsolved problems in modern medicine is the rise of chronic diseases that lead to organ dysfunction or even complete loss of function. Current methods of treatment have not reduced mortality and disability statistics. For many patients, the best available treatment is still transplantation of organs and/or tissues. Given the limited number of natural organs available for transplantation, finding a way to fabricate artificial matrices correctly is therefore a critical task. One important problem that needs to be solved is the development of a non-destructive, noninvasive method to analyze dynamic changes in the mechanical characteristics of a matrix with minimal side effects on the growing cells. This research focused on investigating the properties of the matrix as a marker of graft condition. In this study, collagen gel with human primary dermal fibroblasts in suspension (60, 120 and 240 × 10³ cells/mL) and collagen gel with cell spheroids were used as model objects. The stiffness and elasticity characteristics were evaluated by a semiconductor laser autodyne. Their dependence on time and cell concentration was investigated, and both properties were shown to change non-linearly with cell concentration. The maximum matrix stiffness was observed in the collagen gel with a cell concentration of 120 × 10³ cells/mL. This study demonstrated the feasibility of using the mechanical properties of the matrix as a marker of graft condition, measurable by the noninvasive semiconductor laser autodyne technique.

Keywords: graft, matrix, noninvasive method, regenerative medicine, semiconductor laser autodyne

Procedia PDF Downloads 342
1000 Green Supply Chain Network Optimization with Internet of Things

Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen

Abstract:

Green Supply Chain Management is gaining growing interest among researchers and supply chain managers. Its concept is to integrate environmental thinking into Supply Chain Management: a systematic approach emphasizing environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrated with Internet of Things applications. The Internet of Things provides precise and accurate information on end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres, and sales and collection centres, while the reverse flow includes the sales and collection centres, disassembly centre, and recycling and disposal centres. The sales and collection centres sell new products transhipped from the factory via the distribution centres and also receive end-of-life products according to their value level. We describe green logistics activities through specific examples, including “recycling of the returned products” and “reduction of CO2 gas emissions”. The different transportation choices between echelons are illustrated according to their CO2 gas emissions. The problem is formulated as a mixed integer linear programming model to solve green supply chain problems arising from environmental awareness and responsibilities. The model is solved using the GAMS package. Numerical examples illustrate the efficiency of the proposed model.
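The trade-off the model captures, transport cost versus CO2 emissions per link, can be illustrated by brute force on a toy network. The costs, emission factors and carbon price below are invented for illustration; this is not the paper's MILP formulation, only the mode-choice logic it encodes:

```python
from itertools import product

# Toy instance: ship one unit along each echelon link, choosing one
# transport mode per link; minimise cost plus a carbon penalty.
links = ["supplier->plant", "plant->DC", "DC->sales"]
modes = {                      # mode: (cost, kg CO2) per unit shipped
    "road": (4.0, 9.0),
    "rail": (6.0, 3.0),
}
carbon_price = 0.5             # cost per kg of CO2 emitted

best = None
for choice in product(modes, repeat=len(links)):
    cost = sum(modes[m][0] for m in choice)
    co2 = sum(modes[m][1] for m in choice)
    total = cost + carbon_price * co2
    if best is None or total < best[0]:
        best = (total, choice, co2)

total, choice, co2 = best
print(total, choice)
```

With these numbers the carbon penalty flips the optimum from cheap, high-emission road transport to rail on every link; a real solver (GAMS, as in the paper) handles the same objective over integer flow variables at scale.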

Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling

Procedia PDF Downloads 325
999 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA’s Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of Elasticsearch applied to the metadata. 
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making these public data truly accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
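The topic-linking step can be caricatured with simple token overlap. The vocabulary below is hypothetical, and the paper's system uses a knowledge graph and richer NLP; this sketch only illustrates the core task of matching a plain-English query to a metadata topic:

```python
# Hypothetical metadata topics, each with a small set of descriptive terms
metadata_topics = {
    "sea_surface_temperature": {"sea", "surface", "temperature", "ocean"},
    "aerosol_optical_depth": {"aerosol", "optical", "depth", "atmosphere"},
}

def link(query):
    # Link a plain-English query to the topic with highest Jaccard overlap
    tokens = set(query.lower().split())
    def jaccard(topic_terms):
        return len(tokens & topic_terms) / len(tokens | topic_terms)
    return max(metadata_topics, key=lambda t: jaccard(metadata_topics[t]))

print(link("how warm is the ocean surface"))
```

A knowledge graph improves on this baseline by connecting terms that share no tokens at all (e.g. "warm" to "temperature") through intermediate nodes, which is precisely where plain lexical search struggles.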

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 144
998 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome

Authors: Agada N. Ihuoma, Nagata Yasunori

Abstract:

Artificial intelligence (AI) techniques play a crucial role in predicting expected energy output and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increasing depletion of most fossil fuels, the world faces the challenge of meeting ever-increasing energy demands. Incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is therefore crucial to balance energy supply and demand on loads, predict the performance and output of solar energy, enhance production planning and energy management, and ensure proper sizing of parameters when generating clean energy. However, one major problem in forecasting is that the algorithms used to control, model, and predict the performance of energy systems are complicated and involve large computing power, differential equations, and time series. In addition, unreliable (poor-quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck. To overcome these problems, this study employs Anaconda Navigator (Jupyter Notebook) for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features. The resulting models predict the performance and output of solar energy, which in turn enables the balancing of supply and demand on loads and enhances production planning and energy management.
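The backward-elimination-plus-linear-regression pipeline named in the keywords can be sketched on synthetic data (invented features and coefficients, not real solar measurements): repeatedly drop the least significant predictor until every remaining coefficient clears a t-statistic threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for solar data: irradiance driven by temperature and
# cloud cover; humidity is an irrelevant candidate feature by construction.
n = 300
temp = rng.normal(25, 5, n)
cloud = rng.uniform(0, 1, n)
humid = rng.uniform(30, 90, n)
y = 200 + 8 * temp - 150 * cloud + rng.normal(0, 10, n)

names = ["temp", "cloud", "humid"]
cols = [temp, cloud, humid]

def backward_eliminate(names, cols, t_min=2.0):
    names, cols = list(names), list(cols)
    while True:
        X = np.column_stack([np.ones(n)] + cols)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        tvals = np.abs(beta / se)[1:]          # skip the intercept
        worst = int(np.argmin(tvals))
        if tvals[worst] >= t_min or len(cols) == 1:
            return names                        # all survivors significant
        names.pop(worst); cols.pop(worst)       # drop weakest predictor

print(backward_eliminate(names, cols))
```

On this toy the irrelevant humidity feature is eliminated while the genuine drivers survive; the same loop generalizes to any OLS feature set.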

Keywords: artificial Intelligence, backward elimination, linear regression, solar energy

Procedia PDF Downloads 154
997 Assessment and Evaluation of the Resilience of Urban Neighborhoods in Coping with Natural Disasters in the Metropolis of Tabriz (Case Study: Region 6 of Tabriz)

Authors: Ali Panahi, Kosar Khosravi

Abstract:

Earthquake resilience is one of the most important theoretical and practical concepts in crisis management. Over the past few decades, the rapid growth of urban areas and the development of poorer urban districts (especially in developing countries) have made cities more vulnerable to human and natural crises. The resilience of urban communities, especially low-income and underserved neighborhoods, is therefore of particular importance. The present study assesses the resilience of neighborhoods in the center of district 6 of Tabriz in terms of awareness, knowledge and personal skills, social and psychological capital, managerial-institutional capacity, and the ability to return to appropriate and sustainable conditions. The research method is descriptive-analytical. The authors used library and survey methods to collect information and a questionnaire to assess resilience. The statistical population of this study comprises all households living in the four neighborhoods of Shanb Ghazan, Khatib, Gharamalek, and Abuzar alley. Three hundred eighty-four households from the four neighborhoods were selected based on the Cochran formula using simple random sampling. A one-sample t-test, simple linear regression, and structural equations were used to test the research hypotheses. Findings showed that only the two indicators of awareness and social and psychological capital in district 6 of Tabriz had a favorable, confirmed status. Considering the multidimensional concept of resilience, district 6 of Tabriz is therefore in an unfavorable resilience situation. The findings based on analysis of variance also indicated no significant difference between the neighborhoods of district 6 in terms of resilience, and most neighborhoods are in an unfavorable situation.
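The one-sample t-test used for hypothesis testing here is easy to reproduce. The scores below are hypothetical Likert-scale resilience values compared against the scale midpoint, not the survey's data:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    # t = (mean - mu0) / (s / sqrt(n)): does the sample mean differ from mu0?
    n = len(sample)
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)        # sample standard deviation
    return (mean - mu0) / (s / math.sqrt(n))

# Hypothetical neighborhood resilience scores on a 1-5 scale; mu0 = 3 is the
# scale midpoint, the usual benchmark for "favorable" in such surveys.
scores = [2.4, 2.8, 3.1, 2.6, 2.9, 2.5, 3.0, 2.7, 2.6, 2.8]
t = one_sample_t(scores, 3.0)
print(round(t, 2))
```

A clearly negative t-statistic, as here, is the pattern behind the abstract's "unfavorable" verdict: the mean resilience score sits significantly below the midpoint.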

Keywords: resilience, statistical analysis, earthquake, district 6 of Tabriz

Procedia PDF Downloads 73
996 Empowering Youth Through Pesh Poultry: A Transformative Approach to Addressing Unemployment and Fostering Sustainable Livelihoods in Busia District, Uganda

Authors: Bisemiire Anthony

Abstract:

PESH Poultry is a business project proposed specifically to address unemployment and income-related problems affecting youths in the Busia district. The project is intended to transform the lives of the youth economically, socially and behaviourally, as well as the domestic well-being of the community at large. PESH Poultry is a start-up poultry farm that will keep poultry birds, broilers and layers, for the production of quality and affordable poultry meat and eggs, respectively, and other poultry derivatives, targeting consumers in eastern Uganda, for example, hotels, restaurants, households and bakeries. We intend to use a semi-intensive system of farming, where water and some food are provided in a separate nighttime shelter for the birds; our location will be in Lumino, Busia district. The poultry project will be established and owned by Bisemiire Anthony, Nandera Patience, Naula Justine, Bwire Benjamin and other investors. The farm will be managed and directed by Nandera Patience, who has five years of work experience and business administration knowledge. We will sell poultry products, including eggs, chicken meat, feathers and poultry manure, and we also offer consultancy services for poultry farming. Our eggs and chicken meat are hygienic, rich in protein and of high quality. Our production, processing and packaging meet Ugandan and international standards. The project shall comprise five (5) workers on the key management team, who will share roles and responsibilities across business functions such as marketing, finance and other poultry farming activities. PESH Poultry seeks 30 million Ugandan shillings in long-term financing to cover start-up costs, equipment, building expenses and working capital. Funding for the launch of the business will be provided primarily by equity from the investors. 
The business will reach positive cash flow in its first year of operation, allowing for the expected repayment of its loan obligations. Revenue will top UGX 11,750,000, and net income will reach about UGX 115,950,000 in the first year of operation. The payback period for the project is 2 years and 3 months. The farm plans to start with 1,000 layer birds, 1,000 broiler birds, and 20 workers in the first year of operation.

Keywords: chicken, pullets, turkey, ducks

Procedia PDF Downloads 86
995 Major Sucking Pests of Rose and Their Seasonal Abundance in Bangladesh

Authors: Md Ruhul Amin

Abstract:

This study was conducted in the experimental field of the Department of Entomology, Bangabandhu Sheikh Mujibur Rahman Agricultural University, Gazipur, Bangladesh, from November 2017 to May 2018, with a view to understanding the seasonal abundance of the major sucking pests of rose, namely thrips, aphids and red spider mites. The findings showed that thrips started to build up their population from the middle of January with an abundance of 1.0 leaf⁻¹, increased continuously, peaked (2.6 leaf⁻¹) in the middle of February, and then declined. Aphids started to build up from the second week of November with an abundance of 6.0 leaf⁻¹, increased continuously, peaked (8.4 leaf⁻¹) in the last week of December, and then declined. Mites started to build up from the first week of December with an abundance of 0.8 leaf⁻¹, increased continuously, peaked (8.2 leaf⁻¹) in the second week of March, and then declined. Thrips and mites persisted until the last week of April, and aphids until the last week of May. Daily mean temperature, relative humidity, and rainfall had insignificant negative correlations with thrips abundance and significant negative correlations with aphid abundance. Daily mean temperature had a significant positive correlation, relative humidity an insignificant positive correlation, and rainfall an insignificant negative correlation with mite abundance. Multiple linear regression analysis showed that the weather parameters together explained 38.1, 41.0 and 8.9% of the variation in thrips, aphid and mite abundance on rose, respectively, although the regression equations were not significant.
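The correlation analysis reported above boils down to Pearson's r between a weather variable and pest counts. The weekly values below are invented to illustrate the computation, not the study's observations:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly values: aphid counts falling as temperature rises,
# mirroring the negative temperature-aphid correlation in the abstract
temperature = [18, 20, 22, 25, 27, 29]
aphids_per_leaf = [8.4, 7.9, 6.5, 4.2, 2.1, 0.9]
r = pearson(temperature, aphids_per_leaf)
print(round(r, 2))
```

The sign of r matches the direction of the relationship (negative here), and its magnitude feeds the significance test behind the "significant negative correlation" statements.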

Keywords: aphid, mite, thrips, weather factors

Procedia PDF Downloads 159
994 Bismuth Telluride Topological Insulator: Physical Vapor Transport vs Molecular Beam Epitaxy

Authors: Omar Concepcion, Osvaldo De Melo, Arturo Escobosa

Abstract:

Topological insulator (TI) materials are insulating in the bulk and conducting at the surface. The unique electronic properties associated with these surface states make them strong candidates for exploring innovative quantum phenomena and for practical applications in quantum computing, spintronics and nanodevices. Many materials, including Bi₂Te₃, have been proposed as TIs and, in some cases, this has been demonstrated experimentally by angle-resolved photoemission spectroscopy (ARPES), scanning tunneling microscopy (STM) and/or magnetotransport measurements. A clean surface is necessary in order to perform any of these measurements. Several techniques have been used to produce films and different kinds of nanostructures. In-situ growth and characterization is usually the best option, although cleaving the films can be an alternative way to obtain a suitable surface. In the present work, we report a comparison of Bi₂Te₃ grown by physical vapor transport (PVT) and molecular beam epitaxy (MBE). The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS) and ARPES. The Bi₂Te₃ samples grown by PVT were cleaved in ultra-high vacuum in order to obtain a surface free of contaminants. In both cases, XRD shows a c-axis orientation, and the pole diagrams proved the epitaxial relationship between film and substrate. The ARPES images show the linear dispersion characteristic of the surface states of TI materials. The samples grown by PVT, a relatively simple and cost-effective technique, show the same high quality and TI properties as those grown by MBE.

Keywords: Bismuth telluride, molecular beam epitaxy, physical vapor transport, topological insulator

Procedia PDF Downloads 189
993 Alterations of Molecular Characteristics of Polyethylene under the Influence of External Effects

Authors: Vigen Barkhudaryan

Abstract:

The influence of external effects (γ-radiation, UV radiation, and high temperature) in the presence of atmospheric oxygen on the structural transformations of low-density polyethylene (LDPE) has been investigated as a function of polymer thickness and the intensity and dose of the external actions. The methods of viscosimetry, light scattering, turbidimetry and gelation measurement were used for this purpose. A comparison of the different external effects on LDPE shows that destruction and cross-linking of macromolecules proceed simultaneously under all of them. A remarkable growth of the average molecular mass of LDPE with irradiation dose and heat-treatment exposure was established. This growth was linear for the mass-average molecular mass and, at initial doses, is mainly the result of increased macromolecular branching. As a result, the macromolecular hydrodynamic volumes changed, and therefore the dependence of the viscosity-average molecular mass on dose passes through a minimum at initial doses. A significant change in the molecular mass, size and shape of LDPE macromolecules occurs under these external effects. During γ-irradiation and heat treatment the influence is limited only by the diffusion of oxygen, whereas during UV irradiation it is limited both by the diffusion of oxygen and by the penetration of the radiation. Consequently, the molecular transformations are deeper and more evident in the case of γ-irradiation, since the polymer is transformed throughout its whole volume. It was also established that the mechanism of molecular transformations in the surface layer distinctly differs from that in deeper layers of the sample. A comparison of these results allows us to conclude that the mechanisms by which the investigated external effects influence polyethylene are similar.

Keywords: cross-linking, destruction, high temperature, LDPE, γ-radiations, UV-radiations

Procedia PDF Downloads 311
992 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for measuring both the 3D shape and the surface deformation of a component, which has found increasing application in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, and can only be controlled within a limited range. The distortion, in turn, is non-linear, particularly in a complex imaging acquisition system, so distortion correction must be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in complex imaging acquisition systems involving microscopes and other complex lenses, and the errors of the distortion correction propagate to the reconstructed 3D coordinates. To address this problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates onto an ideal, distortion-free plane. This approach is suitable for any image acquisition distortion model. It is used as a prior step that converts distorted coordinates to their ideal positions, which enables the camera to conform to the pin-hole model. A procedure for applying this approach to stereo-based DIC is presented. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
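The core of the proposed method, fitting a smooth map from distorted to ideal coordinates using calibration-point pairs, can be sketched as follows. A radial polynomial basis stands in here for the paper's 2D B-spline basis, and the distortion is synthetic; the least-squares fitting step is the same in spirit:

```python
import numpy as np

# Calibration grid: ideal coordinates and their synthetically distorted images
xi, yi = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
xi, yi = xi.ravel(), yi.ravel()
factor = 1 + 0.05 * (xi**2 + yi**2)      # mild radial distortion
xd, yd = xi * factor, yi * factor        # distorted calibration coordinates

def basis(x, y):
    # Radial polynomial basis (placeholder for a B-spline basis)
    r2 = x**2 + y**2
    return np.column_stack([x, y, x * r2, y * r2, x * r2**2, y * r2**2])

# Fit the distorted-to-ideal mapping for each coordinate by least squares
A = basis(xd, yd)
cx = np.linalg.lstsq(A, xi, rcond=None)[0]
cy = np.linalg.lstsq(A, yi, rcond=None)[0]

# Undistort the distorted image of the ideal point (0.5, 0.5):
# its distorted position is scaled by factor = 1 + 0.05*0.5 = 1.025
px, py = 0.5 * 1.025, 0.5 * 1.025
ux = (basis(np.array([px]), np.array([py])) @ cx).item()
uy = (basis(np.array([px]), np.array([py])) @ cy).item()
print(ux, uy)
```

Once such a mapping is fitted, every measured point is pushed through it before triangulation, so the stereo reconstruction can safely assume a pin-hole camera.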

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 492
991 Performance Evaluation of a Fuel Cell Membrane Electrode Assembly Prepared from a Reinforced Proton Exchange Membrane

Authors: Yingjeng James Li, Yun Jyun Ou, Chih Chi Hsu, Chiao-Chih Hu

Abstract:

A fuel cell is a device that produces electric power by reacting fuel and oxidant electrochemically. A fuel cell produces no pollution if hydrogen is employed as the fuel; it is therefore considered a zero-emission device and a source of green power. The membrane electrode assembly (MEA) is the key component of a fuel cell, so it is beneficial to develop MEAs with high performance. In this study, an MEA for a proton exchange membrane fuel cell (PEMFC) was prepared from a 15-micron-thick reinforced PEM. The active area of the MEA is 25 cm². Carbon-supported platinum (Pt/C) was employed as the catalyst for both anode and cathode. The platinum loading is 0.6 mg/cm² based on the sum of anode and cathode. Commercially available carbon papers coated with a micro-porous layer (MPL) serve as gas diffusion layers (GDLs). The original thickness of the GDL is 250 μm; it was compressed to 163 μm when assembled into the single-cell test fixture. Polarization curves were taken under eight different test conditions. At our standard test condition (cell: 70 °C; anode: pure hydrogen, 100% RH, 1.2 stoich, ambient pressure; cathode: air, 100% RH, 3.0 stoich, ambient pressure), the cell current density is 1250 mA/cm² at 0.6 V and 2400 mA/cm² at 0.4 V. Under self-humidified conditions at a cell temperature of 55 °C, the current density is 1050 mA/cm² at 0.6 V and 2250 mA/cm² at 0.4 V. The hydrogen crossover rate of the MEA is 0.0108 mL/(min·cm²) according to linear sweep voltammetry experiments. From the MEA’s Pt loading and cyclic voltammetry experiments, the Pt electrochemical surface area is 60 m²/g. The ohmic part of the impedance spectroscopy results shows that the membrane resistance is about 60 mΩ·cm² when the MEA is operated at 0.6 V.
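An ECSA figure like the one quoted above follows from standard hydrogen-adsorption arithmetic. The charge value and the cathode-only loading below are hypothetical (the abstract gives only the combined 0.6 mg/cm² loading); 210 µC per cm² of real Pt surface is the conventional monolayer hydrogen-adsorption charge:

```python
# Back-of-envelope sketch of a Pt electrochemical surface area (ECSA)
# estimate from a hydrogen-adsorption charge measured by cyclic voltammetry.
Q_H = 0.0378            # hypothetical H-adsorption charge, C per geometric cm2
q_ref = 210e-6          # C per cm2 of real Pt surface (standard monolayer value)
loading = 0.3e-3        # hypothetical cathode Pt loading, g per geometric cm2

pt_area_cm2 = Q_H / q_ref                 # real Pt area per geometric cm2
ecsa_m2_per_g = pt_area_cm2 * 1e-4 / loading   # convert cm2 -> m2, per gram Pt
print(round(ecsa_m2_per_g, 1))
```

With these assumed inputs the calculation lands at about 60 m²/g, the order of magnitude reported for the MEA in the abstract.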

Keywords: fuel cell, membrane electrode assembly, proton exchange membrane, reinforced

Procedia PDF Downloads 289
990 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning

Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis

Abstract:

Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. Their findings supported the theory of the 'movement economy' and demonstrated the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities. However, the effects varied between different urban contexts. In this paper, the relationship between the distribution of economic activity and the configurational characteristics of the urban network was examined at the segment level. The study area includes three neighbourhood types: urban, suburban, and rural. Across all neighbourhoods, three kinds of urban network form were recognised: 'tree-like', grid, and organic. To investigate the nested effects of urban configuration, measured by the space syntax approach, and urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was included in the model as an independent variable. The random-effect ZINB model outperforms both the single-level ZINB model and the multilevel linear (ML) model in explaining how the pattern of economic activities is shaped over the urban environment. After adjusting for neighbourhood type and network form, connectivity and syntactic centrality significantly affect the clustering of economic activities. A comparison between accumulated and newly established economic activities illustrated their different preferences in location choice.
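The zero-inflated negative binomial distribution at the heart of the model mixes a point mass at zero (street segments with no economic activity at all) with an ordinary negative binomial count process. A minimal sketch of its probability mass function, with invented parameters:

```python
import math

def nb_pmf(k, r, p):
    # Negative binomial pmf: P(K = k) = C(k+r-1, k) * p^r * (1-p)^k
    return math.comb(k + r - 1, k) * p**r * (1 - p)**k

def zinb_pmf(k, pi, r, p):
    # Zero-inflated NB: probability pi of a "structural zero" segment,
    # otherwise counts follow the NB(r, p) process
    base = nb_pmf(k, r, p)
    return pi + (1 - pi) * base if k == 0 else (1 - pi) * base

# Hypothetical parameters: 30% of segments host no economic activity by
# structure; the rest follow an NB(r=2, p=0.5) count distribution.
p0 = zinb_pmf(0, 0.3, 2, 0.5)
print(round(p0, 3))
```

Maximizing the likelihood built from this pmf, with covariates entering through the NB mean and the zero-inflation probability, and with neighbourhood-level random effects, gives the multilevel ZINB model the paper estimates.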

Keywords: space syntax, economic activities, multilevel model, Chinese city

Procedia PDF Downloads 122
989 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Automatic methods for knee cartilage segmentation are therefore desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone, and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Similarity Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
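
The receptive-field argument can be made concrete: stacking convolutions whose dilation doubles at each layer grows the receptive field exponentially with depth, while each layer adds only one kernel's worth of parameters. A small sketch for the 1-D case (the kernel size and dilation schedule are illustrative):

```python
def receptive_field(dilations, kernel=3):
    # Each stride-1 conv layer with kernel size k and dilation d
    # widens the receptive field by (k - 1) * d
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# Doubling dilations: receptive field grows exponentially with depth,
# while the parameter count (one 3-wide kernel per layer) grows linearly
print(receptive_field([1, 2, 4, 8]))   # 31
print(receptive_field([1, 1, 1, 1]))   # 9, same depth without dilation
```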

Keywords: convolutional neural networks, dilated convolutions, 3-dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 254
988 Electrical Conductivity as Pedotransfer Function in the Determination of Sodium Adsorption Ratio in Soil System in Managing Micro Level Farming Practices in India: An Effective Low Cost Technology

Authors: Usha Loganathan, Haresh Pandya

Abstract:

Analysis and correlation of soil properties represent an important starting point for precision agriculture and are currently promoted and implemented in the developed world. Establishing relationships among indices of soil salinity has always been a challenging task in salt-affected soils, which require unique approaches for their reclamation and management to sustain the long-term productivity of the soil. Soil salinity indices such as Electrical Conductivity (EC) and Sodium Adsorption Ratio (SAR) are normally used to characterize soils as either sodic or saline-sodic. Currently, the sodium adsorption ratio is the more accepted and reliable measure of soil salinity. However, its determination involves arduous and protracted laboratory investigations, which calls for new and economical methods to determine SAR from a simple soil salinity index. A linear regression model to predict soil SAR from soil electrical conductivity has been developed and is presented in this paper, by which soil SAR can be estimated as a pedotransfer function of soil EC. The present study was carried out in Orathupalayam (11.09-11.11 N latitude and 74.54-77.59 E longitude), in the vicinity of the Orathupalayam Reservoir of the Noyyal River Basin, India, over 3 consecutive years from September 2013 through February 2016, in locations chosen randomly through different seasons. The research findings are discussed in the light of micro-level farming practices in India, and determination of SAR is recommended as a low-cost technology aiding the effective management of salt-affected agricultural land.
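
The pedotransfer idea reduces to fitting a simple linear model SAR = a·EC + b on paired laboratory measurements. A sketch with hypothetical data (the values below are illustrative, not the study's field measurements):

```python
import numpy as np

# Hypothetical paired measurements: EC in dS/m, SAR dimensionless
ec  = np.array([1.2, 2.5, 4.0, 6.3, 8.1, 10.4])
sar = np.array([2.1, 4.0, 6.2, 9.8, 12.3, 15.9])

# Ordinary least squares fit: SAR = a * EC + b
a, b = np.polyfit(ec, sar, 1)
pred = a * ec + b
r2 = 1 - np.sum((sar - pred) ** 2) / np.sum((sar - sar.mean()) ** 2)
print(f"SAR = {a:.2f} * EC + {b:.2f}, R^2 = {r2:.3f}")
```

Once calibrated on local data, the fitted line lets SAR be estimated from a field EC reading without the full laboratory procedure.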

Keywords: electrical conductivity, Orathupalayam, pedotransfer function, sodium adsorption ratio

Procedia PDF Downloads 251
987 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behavior of wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height measured by the oceanographic wave buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations to optimize the 5-step-ahead prediction, obtaining 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and performed better on all criteria, validating the potential of this algorithm.
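
The error measures used to assess the forecasts are standard and easy to reproduce. A sketch with a hypothetical wave-height series (illustrative numbers, not the Mooloolaba buoy data):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    # MSE, RMSE, MAE and MAPE as used to score the wave-height forecasts
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100  # assumes no zero targets
    return float(mse), float(rmse), float(mae), float(mape)

# Illustrative significant-wave-height series in metres
y_true = np.array([1.2, 1.5, 1.1, 1.8, 2.0])
y_pred = np.array([1.25, 1.45, 1.15, 1.75, 2.10])
mse, rmse, mae, mape = forecast_metrics(y_true, y_pred)
print(round(rmse, 4))  # 0.0632
```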

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 103
986 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model capable of foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to build this predictive model. Moreover, this research goes beyond prediction by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained to not only predict but also elucidate the mechanical and thermal behavior of composite materials. The coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement demonstrates the potential of this innovative approach in the field of materials engineering.
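
The inverse analysis can be sketched as a genetic algorithm searching the input space (volume fraction, contrast) for parameters whose simulated response matches a measured target. The forward model, bounds, and GA settings below are illustrative stand-ins, not the study's trained surrogate:

```python
import random

random.seed(0)

def forward_model(vf, contrast):
    # Toy stand-in for the composite response surface (illustrative only):
    # effective property rises with volume fraction and contrast
    return 1 + vf * contrast / (1 + vf)

TARGET = forward_model(0.3, 50.0)  # pretend this is a measured response

def fitness(ind):
    vf, contrast = ind
    return abs(forward_model(vf, contrast) - TARGET)

def genetic_inverse(pop_size=40, gens=60):
    pop = [(random.uniform(0.05, 0.4), random.uniform(10, 200))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            child[0] += random.gauss(0, 0.02)             # mutate volume fraction
            child[1] += random.gauss(0, 5.0)              # mutate contrast
            child[0] = min(max(child[0], 0.05), 0.4)      # clamp to bounds
            child[1] = min(max(child[1], 10), 200)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_inverse()
print(round(fitness(best), 4))
```

Note that the inverse problem is generally non-unique: several (volume fraction, contrast) pairs may reproduce the same response, which is why the study couples several responses and descriptors.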

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 52
985 Effect of Islamic Finance on Jobs Generation in Punjab, Pakistan

Authors: B. Ashraf, A. M. Malik

Abstract:

The study was carried out at the Department of Economics and Agricultural Economics, Pir Mahar Ali Shah ARID Agriculture University, Punjab, Pakistan, during 2013-16, to discover the effect of Islamic finance/banking on employment in Punjab, Pakistan. The Islamic banking system is a sub-component of the conventional banking system in various countries of the world; in Pakistan, however, it has been established as a separate Islamic banking system. Islamic banking operates under the doctrine of Shariah. It is claimed that this form of banking is free of interest (Riba) and embodies the philosophy and basic values of Islam in finance, reducing uncertainty, risk, and other speculative activities. Two Islamic banks, Meezan Bank Limited (Pakistan) and Al-Baraka Bank Limited (Pakistan), covering North Punjab (Bahawalnagar), Central Punjab (Lahore), and West Punjab (Gujrat), Pakistan, were randomly selected for the research. A total of 206 samples were collected from the defined areas and banks through a questionnaire. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 21.0. Multiple linear regression was applied to test the hypotheses. The results revealed that asset formation had a significant positive impact, whereas technology, length of business (experience), and business size had significant negative impacts, on employment generation through Islamic finance/banking in Punjab, Pakistan. It is concluded that employment opportunities may be created in the country by extending finance to businesses/firms to start new businesses, and by the Islamic banks increasing public awareness through intensive publicity. Islamic financial institutions may also be encouraged by the Government, as they enhance employment in the country.

Keywords: asset formation, borrowers, employment generation, Islamic banks, Islamic finance

Procedia PDF Downloads 319
984 Numerical Investigation of the Integration of a Micro-Combustor with a Free Piston Stirling Engine in an Energy Recovery System

Authors: Ayodeji Sowale, Athanasios Kolios, Beatriz Fidalgo, Tosin Somorin, Aikaterini Anastasopoulou, Alison Parker, Leon Williams, Ewan McAdam, Sean Tyrrel

Abstract:

Recently, energy recovery systems have been thriving and attracting attention in the power generation sector, due to the demand for cleaner forms of energy that are safe for the environment. This has created an avenue for cogeneration, where Combined Heat and Power (CHP) technologies have been recognised as feasible for use in homes and small-scale businesses. The efficiency of combustors, and the advantages of free piston Stirling engines over other conventional engines in terms of output power and efficiency, have been observed and considered. This study presents a numerical analysis of a micro-combustor with a free piston Stirling engine in an integrated model of a Nano Membrane Toilet (NMT) unit. The NMT unit will use the micro-combustor to produce heat of high energy content from the combustion of human waste, and the heat generated will power the free piston Stirling engine, which will be connected to a linear alternator for electricity production. The thermodynamic influence of the combustor on the free piston Stirling engine was observed, based on the heat transfer from the flue gas to the working gas of the free piston Stirling engine. The results showed that with an input of 25 MJ/kg of faecal matter and a flue gas temperature of 773 K from the micro-combustor, the free piston Stirling engine generates an output power of 428 W, at a thermal efficiency of 10.7% and an engine speed of 1800 rpm. An experimental investigation into the integration of the micro-combustor and free piston Stirling engine with the NMT unit is currently underway.
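
The quoted output power and efficiency imply a heat input that can be checked with simple arithmetic. The first two numbers below come from the abstract; the daily-feed estimate is an illustrative back-of-envelope extension, not a figure from the study:

```python
# Figures stated in the abstract
output_power_w = 428.0
thermal_efficiency = 0.107

# Implied heat input to the engine
heat_input_w = output_power_w / thermal_efficiency
daily_heat_mj = heat_input_w * 86400 / 1e6   # J/day -> MJ/day

# Illustrative extension: with a calorific value of 25 MJ/kg of faecal
# matter, the implied daily feed if the engine ran continuously
feed_kg_per_day = daily_heat_mj / 25.0
print(round(heat_input_w), round(feed_kg_per_day, 1))  # 4000 13.8
```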

Keywords: free piston Stirling engine, micro-combustor, Nano Membrane Toilet, thermodynamics

Procedia PDF Downloads 254
983 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used as a source of energy and as a feedstock for materials, such as asphalt or rubber, for many years, which is why new technologies have been implemented over time. However, research still needs to continue, owing to the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of them are equally effective for studying fracture propagation. Analytical methods, such as those based on linear elastic fracture mechanics, fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) can represent the elastoplastic behavior of a reservoir through a constitutive model; predictions in terms of displacements and pressure are therefore more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed in ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2D model. A power law was used to represent the rheological behavior of the fluid (shear-thinning, power index < 1) through the fractures, and the leak-off rate permeating into the matrix was considered. Results are presented in terms of the aperture and length of the fracture, the pressure within the fracture, and fluid loss. A higher infiltration rate into the matrix was observed as the power index decreased. A sensitivity analysis was finally performed to identify the most influential factor in fluid loss.
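
The power-law (Ostwald-de Waele) rheology assumed for the fracturing fluid gives an apparent viscosity mu_app = K * gamma_dot**(n-1), so a smaller power index n means a thinner fluid at a given shear rate, consistent with the higher leak-off observed. A sketch with illustrative K and n values (not study parameters):

```python
def apparent_viscosity(K, n, shear_rate):
    # Power-law (Ostwald-de Waele) fluid: mu_app = K * gamma_dot ** (n - 1);
    # n < 1 gives shear-thinning behaviour, as assumed for the fracture fluid
    return K * shear_rate ** (n - 1)

# Illustrative consistency index and power indices
K = 0.5
for n in (0.9, 0.5):
    print(n, apparent_viscosity(K, n, shear_rate=100.0))
```

At a fixed shear rate, the n = 0.5 fluid is markedly thinner than the n = 0.9 fluid, which is why leak-off into the matrix grows as the power index decreases.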

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 201
982 Effects of Polydispersity on the Glass Transition Dynamics of Aqueous Suspensions of Soft Spherical Colloidal Particles

Authors: Sanjay K. Behera, Debasish Saha, Paramesh Gadige, Ranjini Bandyopadhyay

Abstract:

The zero-shear viscosity (η₀) of a suspension of hard-sphere colloids with significant polydispersity (≈10%) increases with increasing volume fraction (ϕ) and shows a dramatic increase at ϕ=ϕg, with the system entering a colloidal glassy state. Fragility, a measure of how rapidly these suspensions approach the glassy state, is sensitive to size polydispersity and particle stiffness. Soft poly(N-isopropylacrylamide) (PNIPAM) particles deform in the presence of neighboring particles at volume fractions above the random close packing volume fraction of undeformed monodisperse spheres. Softness, therefore, enhances the packing efficiency of these particles. In this study, PNIPAM particles of nearly constant swelling ratio and with polydispersities varying over a wide range (7.4%-48.9%) are synthesized to study the effects of polydispersity on the dynamics of suspensions of soft PNIPAM colloidal particles. The size and polydispersity of these particles are characterized using dynamic light scattering (DLS) and scanning electron microscopy (SEM). As these particles are deformable, their packing in aqueous suspensions is quantified in terms of an effective volume fraction (ϕeff). The zero-shear viscosity (η₀) of these colloidal suspensions, estimated from rheometric experiments as a function of ϕeff, increases with increasing ϕeff and grows dramatically at ϕeff = ϕ₀. The η₀ versus ϕeff data fit well to the Vogel-Fulcher-Tammann (VFT) equation. It is observed that increasing polydispersity results in increasingly fragile supercooled-liquid-like behavior, with the parameter ϕ₀, extracted from the fits to the VFT equation, shifting towards higher ϕeff.
The observed increase in fragility is attributed to the prevalence of dynamical heterogeneities (DHs) in these polydisperse suspensions, while the simultaneous shift in ϕ₀ is ascribed to the decoupling of the dynamics of the smallest and largest particles. Finally, it is observed that the intrinsic nonlinearity of these suspensions, estimated at the third harmonic near ϕ₀ in Fourier-transform oscillatory rheology experiments, increases with increasing polydispersity. These results are in agreement with theoretical predictions and simulation results for polydisperse hard-sphere colloidal glasses and clearly demonstrate that jammed suspensions of polydisperse colloidal particles can be effectively fluidized with increasing polydispersity. Suspensions of these particles are therefore excellent candidates for detailed experimental studies of the effects of polydispersity on the dynamics of glass formation.
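
The Vogel-Fulcher-Tammann fit describes the steep growth of η₀ as ϕeff approaches ϕ₀. A sketch of the functional form commonly used for colloidal suspensions, with illustrative (not fitted) parameters:

```python
import math

def vft_viscosity(phi, eta_inf, D, phi0):
    # VFT form for colloids: eta0 diverges as phi_eff approaches phi0;
    # a smaller fragility parameter D corresponds to a more fragile system
    return eta_inf * math.exp(D * phi / (phi0 - phi))

# Illustrative parameters, not fitted values from the study
eta_inf, D, phi0 = 1e-3, 1.0, 0.75
etas = [vft_viscosity(phi, eta_inf, D, phi0) for phi in (0.5, 0.6, 0.7, 0.74)]
print([f"{e:.3g}" for e in etas])  # steep growth approaching phi0
```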

Keywords: dynamical heterogeneity, effective volume fraction, fragility, intrinsic nonlinearity

Procedia PDF Downloads 161
981 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that have differentiated into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together and can thus provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few forms of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also be a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually present in childhood. In immunology and allergy clinics, beyond the classical methods, a fast and reliable technique allowing more convenient and uncomplicated sampling from children would be useful for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak currents obtained in the electrochemical study were used for evaluation. According to the data obtained, immunoglobulin determination can be performed with the biosensor. In further studies, however, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 98
980 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model for a response variable in the form of count data that follows the Poisson distribution. A pair of highly correlated count variables can be modeled by bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed in this way. Poisson regression assumes equidispersion, i.e., that the mean and variance are equal. Actual count data, however, often have a variance greater or smaller than the mean (overdispersion and underdispersion, respectively). Violations of this assumption can be overcome by applying Generalized Poisson Regression. Moreover, the characteristics of each regency can affect the number of cases that occur; this can be addressed by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage was under the age of 18.
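
The generalized Poisson distribution relaxes the equidispersion assumption through an extra dispersion parameter. A sketch of its probability mass function in Consul's form (parameter values are illustrative):

```python
import math

def gen_poisson_pmf(x, theta, delta):
    # Consul's generalized Poisson: delta > 0 models overdispersion,
    # delta < 0 underdispersion; delta = 0 recovers Poisson(theta)
    return (theta * (theta + delta * x) ** (x - 1)
            * math.exp(-(theta + delta * x)) / math.factorial(x))

theta = 3.0
p0 = gen_poisson_pmf(0, theta, 0.0)
print(round(p0, 6), round(math.exp(-theta), 6))  # both 0.049787
```

In GWBGPR, theta is linked to the covariates with kernel weights that vary by regency, so each location effectively gets its own regression coefficients.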

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 156
979 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals simultaneously is of very high importance. To analyze this method, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 until 2001) are used. Initially, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. The operation policy based on rule curves was then compared to the standard operation policy. The proposed method distributed the shortage across the whole year, inflicting the lowest damage on the system. The standard deviation of the monthly shortfall in each year was smaller with the proposed algorithm than with the other two methods. The results show that median values for the coefficients F and Cr provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The most optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of the F and Cr values, independent population sizes were examined: populations of 4, 25, 50, 100, 500, and 1000 members were studied over two generation counts (G=50 and 100). The results indicate that a generation number of 200 is suitable for the optimization. Runtime increases almost linearly with population size, which indicates the effect of population on the runtime of the algorithm; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reliability percentage but inflicted severe vulnerability on the system. The results obtained in years of low rainfall were very good compared to the other comparative methods.
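
A DE/rand/1/bin scheme with the coefficient values reported as best (F = 0.6, Cr = 0.5) can be sketched in a few lines. The objective below is a toy quadratic standing in for the reservoir shortage objective, and the bounds and population size are illustrative:

```python
import random

random.seed(1)

def differential_evolution(f, bounds, pop_size=25, F=0.6, Cr=0.5, gens=200):
    # DE/rand/1/bin with the coefficient values the study found best
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < Cr or j == j_rand:
                    v = a[j] + F * (b[j] - c[j])   # differential mutation
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            if f(trial) <= f(pop[i]):              # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 3)
print(round(sphere(best), 8))
```

In the reservoir application, the decision vector would hold the monthly rule-curve releases and f would penalize supply shortages.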

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 73
978 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective

Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg

Abstract:

The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry are adjusting their production, focusing on customized products and fast time to market. Leveraging the advantages of novel production strategies such as agile manufacturing and mass customization, manufacturing companies are transforming into integrated networks in which companies unite their core competencies. Virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries blur as autonomous systems exchange data gained by embedded systems throughout the entire value chain. With the inclusion of cyber-physical systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the development of Industry 4.0 within the literature and reviews the associated research streams. We analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks.
We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.

Keywords: Industry 4.0, mass customization, production networks, virtual process chain

Procedia PDF Downloads 273
977 The Significance of Urban Space in Death Trilogy of Alejandro González Iñárritu

Authors: Marta Kaprzyk

Abstract:

The cinema of Alejandro González Iñárritu has not yet been subjected to much detailed analysis, which makes it exceptionally interesting research material. The purpose of this presentation is to discuss the significance of urban space in the three films of this Mexican director that form the Death Trilogy: 'Amores Perros' (2000), '21 Grams' (2003) and 'Babel' (2006). The fact that in these films the urban space itself becomes an additional protagonist, with its own identity, psychology, and ability to transform and affect other characters, in itself warrants independent research and analysis. At the same time, this mode of presenting urban space has another function: it enables the director to complement the rest of the characters. The methodological basis for this description of cinematographic space is to treat its visual layer as the point of departure for detailed analysis, supported by recognised academic theories concerning spatial issues, which become essential tools for describing the world (mise-en-scène) created by González Iñárritu. In 'Amores Perros', Mexico City serves as the scenery, a place full of contradictions, depicted as a modern conglomerate and an urban jungle as well as a labyrinth of poverty and violence. In this work, stylistic tropes can be found in an intertextual dialogue of the director with the photographs of Nan Goldin and Mary Ellen Mark. The story recounted in '21 Grams', the most tragic piece in the trilogy, is characterised by almost hyperrealistic sadism. It takes place in Memphis, which on screen turns into an impersonal formation full of the heterotopias described by Michel Foucault and the non-places defined by Marc Augé in his essay.
By contrast, the main urban space in 'Babel' is Tokyo, which seems to correspond perfectly with the image of places discussed by Juhani Pallasmaa in his works on the reception of architecture through 'pathological senses' in the modern (or, more adequately, postmodern) world. It is portrayed as a city full of buildings that look so surreal that they seem completely unsuitable for humans to move between. Ultimately, the aim of this paper is to demonstrate the coherence of the manner in which González Iñárritu designs urban spaces in his Death Trilogy. In particular, the author examines the imperative role of the cities that form the three specific microcosms in which the protagonists of the Mexican director live out their overwhelming tragedies.

Keywords: cinematographic space, Death Trilogy, film studies, Alejandro González Iñárritu, urban space

Procedia PDF Downloads 326
976 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics

Authors: Said Belaaouad

Abstract:

This study explored the quantitative structure-activity relationship (QSAR) of 1,2,4-triazin-3(2H)-one derivatives as potential anticancer agents against breast cancer. The electronic descriptors were obtained using the Density Functional Theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R²=0.849, R²adj=0.656, MSE=0.056, R²test=0.710, and Q²cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Leveraging the validated QSAR model, new 1,2,4-triazin-3(2H)-one derivatives were then designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The colchicine binding site of tubulin, which plays a crucial role in cancer treatment, was chosen as the target. Over a 100 ns simulation trajectory, the binding affinity was calculated using the MMPBSA script. As a result, fourteen novel tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-triazin-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the tubulin colchicine binding site.
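
The QSAR step is, at its core, a multiple linear regression of activity on the descriptors listed above. A sketch with a hypothetical descriptor matrix (the numbers are illustrative, not DFT output, and the resulting R² is not the study's 0.849):

```python
import numpy as np

# Hypothetical descriptor matrix; columns: electronegativity, total
# energy, H-bond donors, LogS, shape coefficient (illustrative values)
X = np.array([
    [3.1, -120.5, 2, -3.2, 0.80],
    [3.4, -118.2, 1, -2.9, 0.70],
    [2.9, -125.1, 3, -3.8, 0.90],
    [3.6, -116.8, 1, -2.5, 0.60],
    [3.2, -121.9, 2, -3.4, 0.80],
    [3.0, -123.4, 2, -3.6, 0.85],
    [3.3, -119.6, 1, -3.0, 0.75],
    [2.8, -126.0, 3, -4.0, 0.95],
])
y = np.array([5.1, 5.8, 4.6, 6.2, 5.0, 4.8, 5.6, 4.4])  # e.g. pIC50

# Multiple linear regression with an intercept column
A = np.column_stack([X, np.ones(len(y))])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 3))
```

In practice the fit is validated on held-out compounds (R²test) and by cross-validation (Q²cv), as the reported statistics indicate.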

Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamics simulations, MMPBSA calculation

Procedia PDF Downloads 88
975 Clay Hydrogel Nanocomposite for Controlled Small Molecule Release

Authors: Xiaolin Li, Terence Turney, John Forsythe, Bryce Feltis, Paul Wright, Vinh Truong, Will Gates

Abstract:

Clay-hydrogel nanocomposites have attracted great attention recently, mainly because of their enhanced mechanical properties and ease of fabrication. Moreover, the unique platelet structure of clay nanoparticles enables the incorporation of bioactive molecules, such as proteins or drugs, through ion exchange, adsorption or intercalation. This study seeks to improve the mechanical and rheological properties of a novel hydrogel system, copolymerized from a tetrapodal polyethylene glycol (PEG) thiol and a linear, triblock PEG-PPG-PEG (PPG: polypropylene glycol) α,ω-bispropynoate polymer, with the simultaneous incorporation of various amounts of Na-saturated montmorillonite clay (MMT) platelets (av. lateral dimension = 200 nm), to form a bioactive three-dimensional network. Although the parent hydrogel has controlled swelling ability and its PEG groups have good affinity for the clay platelets, it suffers from poor mechanical stability and is currently unsuitable for potential applications. Nanocomposite hydrogels containing 4 wt% MMT showed a twelve-fold enhancement in compressive strength, reaching 0.75 MPa, and also a three-fold acceleration in gelation time compared with the parent hydrogel. Interestingly, incorporation of clay nanoplatelets into the hydrogel slowed its rate of dehydration in air. Preliminary results showed that protein binding by the MMT varied with the nature of the protein, as horseradish peroxidase (HRP) was more strongly bound than bovine serum albumin. The bound HRP was no longer active, presumably as a result of extensive structural refolding. Further work is being undertaken to assess protein binding behaviour within the nanocomposite hydrogel for potential diabetic wound healing applications.

Keywords: hydrogel, nanocomposite, small molecule, wound healing

Procedia PDF Downloads 261