Search results for: generate
444 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized and matched with banks’ loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other to achieve better performance. 
Typically, the traditional types of data banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, such as financial status changes caused by a business crisis; digital footprints, in contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they can by and large resolve the "thin-file" issue, since digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
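The comparative machine-learning setup the abstract describes can be sketched as below. This is a hypothetical illustration only: the paper does not publish its features or data, so the synthetic "footprint" features and the two example classifiers are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of credit-default prediction from digital-footprint
# features; the data below are synthetic, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Assumed features: shopping frequency, spend volatility, social-media score
X = rng.normal(size=(n, 3))
logits = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # default label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    # Compare models by AUC, the usual metric for default prediction
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(type(model).__name__, round(auc, 3))
```

In practice the two footprint datasets would each contribute feature columns, and the complementarity the authors report would show up as a higher AUC for the combined feature set.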
Procedia PDF Downloads 162
443 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada
Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman
Abstract:
Flooding is a severe issue in many places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially since the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit view of the flood extent and the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data together with the high cost of hydrodynamic models, it is not always feasible to rely on these sources of data to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might cause a disturbance in the generated model, and consequently, the model might not be able to predict the flow simulation accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. 
By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
Keywords: HAND, DTM, rapid floodplain, simplified conceptual models
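The core HAND computation (each cell's height above its nearest drainage cell) can be sketched on a toy grid as follows. Note the simplification: a real HAND implementation routes flow along drainage paths, whereas this illustrative sketch, with invented elevations, simply takes the planar (Manhattan) nearest stream cell.

```python
# Toy HAND sketch: height of each DTM cell above its nearest stream cell.
# Elevations and the stream mask are invented; nearest is by Manhattan
# distance here, a simplification of true flow-path routing.
import numpy as np

dtm = np.array([[5.0, 6.0, 7.0],
                [2.0, 3.0, 6.0],
                [1.0, 2.0, 5.0]])
stream = np.array([[0, 0, 0],
                   [0, 0, 0],
                   [1, 1, 0]], dtype=bool)  # bottom-left cells are the river

stream_cells = np.argwhere(stream)
hand = np.empty_like(dtm)
for (i, j), z in np.ndenumerate(dtm):
    d = np.abs(stream_cells - (i, j)).sum(axis=1)   # Manhattan distances
    nearest = stream_cells[d.argmin()]              # nearest drainage cell
    hand[i, j] = z - dtm[tuple(nearest)]            # relative height

print(hand)  # stream cells themselves have HAND = 0
```

Thresholding such a HAND grid (cells below a given height above drainage) is what yields the rapid floodplain maps the abstract refers to.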
Procedia PDF Downloads 151
442 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process
Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian
Abstract:
The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. In order to meet the algal bioenergy opportunity, it appears crucial to promote processes that apply nutrient recovery and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the facts that water is used as a “green” reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the HT aqueous effluent through the growth of microalgae, thereby producing renewable algal biomass. As demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P, and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines, and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50, and 150 ppm). The toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was ca. 75 ppm in all cases. 
Experimental conditions were set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The concentrations of specific NOCs were lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam, and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water cleanup step making use of the existing hydrothermal catalytic facility.
Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass
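One way an IC50 near 75 ppm could be read off a three-point bioassay like the one described (10, 50, 150 ppm) is by interpolating inhibition against log-concentration. The inhibition fractions below are invented for illustration; the paper's measured dose-response values are not given in the abstract.

```python
# Illustrative IC50 estimate from the three test concentrations named in
# the abstract; the inhibition values are hypothetical, not the paper's.
import numpy as np

conc = np.array([10.0, 50.0, 150.0])        # ppm, from the abstract
inhibition = np.array([0.10, 0.35, 0.80])   # assumed fractional inhibition

# Interpolate inhibition vs. log10(concentration), solve for the 50% point
ic50 = 10 ** np.interp(0.5, inhibition, np.log10(conc))
print(round(ic50, 1))  # falls near the ca. 75 ppm reported
```

A proper analysis would fit a four-parameter log-logistic dose-response model rather than linear interpolation, but the principle is the same.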
Procedia PDF Downloads 410
441 The Importance of Clinical Pharmacy and Computer Aided Drug Design
Authors: Peter Edwar Mortada Nasif
Abstract:
The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Using existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli, were collected to serve as a case study of how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but such treatments have their side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to the hepatitis C virus (HCV). Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structures of the proteins and then identified pockets in the proteins. 
Acceptable ligands for the protein pockets have been developed using the de novo drug design method. Pocket geometry is taken into account when designing the ligands. Among the various ligands generated, a new ligand specific to each of the HCV protein domains has been proposed.
Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus, computer aided design, CAD education, education improvement, small-size contractor, automatic pharmacy, PLC, control system, management system, communication
Procedia PDF Downloads 24
440 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is often used to describe the environmental phenomena that reduce land’s original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with “Environmentally Sensitive Areas” (ESA) and the amount of topsoil loss due to erosion. In many studies, it is observed that the assessment of the existing status of land degradation is used to represent vulnerability. Moreover, in most studies the primary emphasis of land degradation vulnerability is its sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to degrade in productive quality permanently or in the long run. It is also important to mention that land degradation vulnerability is not a single-factor outcome. It is a probability assessment of the status of land degradation and needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study has emphasized generating a simplified model to assess land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a “mixed-method” model, termed the land degradation vulnerability index (LDVi), originally inspired by the MEDALUS (Mediterranean Desertification and Land Use) model of 1999 and Farazadeh’s 2007 revised version of it. It follows the guidelines of the Space Applications Centre, Ahmedabad / Indian Space Research Organisation for land degradation vulnerability. 
The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi) by giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily indicates three prominent circumstances: land under continuous population pressure, a high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage, the LDVi model, further aided by regression analysis, also provides a strong grasp of each parameter and of how far each is competent to trigger the land degradation process.
Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
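The equal-weight integration of the six indices can be sketched as a simple average per map cell. The index values below are invented placeholders on a 0-1 scale; the abstract does not state the indices' normalization, so that is an assumption of this sketch.

```python
# Sketch of the equal-weightage LDVi integration for one map cell.
# Index values are hypothetical, normalized to 0-1 for illustration.
def ldvi(ci, vi, ei, li, pi, cmi):
    """Land degradation vulnerability index: equal-weight mean of the
    climatic, vegetation, erosion, land utilization, population pressure,
    and cover management indices."""
    indices = (ci, vi, ei, li, pi, cmi)
    return sum(indices) / len(indices)

print(ldvi(0.6, 0.4, 0.8, 0.5, 0.9, 0.7))  # ~0.65 for this example cell
```

Applied cell by cell over rasters of the six indices, this average produces the vulnerability surface that is then classified into zones such as "very high".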
Procedia PDF Downloads 167
439 Analyzing the Commentator Network within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bilateral network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, how can we use these features to predict an agent's choice to comment on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations? We expect to see that the videos of more popular channels generate higher viewer engagement and thus are more frequently commented on. The interest lies in discovering features which have not classically been considered as markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. 
The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos spread over 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
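The unweighted bipartite commentator-video network described above can be sketched as follows. The toy comment list and node names are invented; the ERGM estimation itself uses dedicated statistical software and is not reproduced here, only the network construction step.

```python
# Sketch of the commentator-video bipartite network (toy data only).
import networkx as nx

comments = [("user_a", "vid_1"), ("user_a", "vid_2"),
            ("user_b", "vid_1"), ("user_c", "vid_3")]

G = nx.Graph()
G.add_nodes_from({u for u, _ in comments}, bipartite="commentator")
G.add_nodes_from({v for _, v in comments}, bipartite="video")
G.add_edges_from(comments)  # one unweighted link per unique comment

# A video's degree = number of distinct commentators, a basic engagement marker
video_degree = {v: G.degree(v) for v in ("vid_1", "vid_2", "vid_3")}
print(video_degree)  # {'vid_1': 2, 'vid_2': 1, 'vid_3': 1}
```

In the ERGM step, edge formation in this graph would be modeled as a function of node covariates such as video duration, likes, and channel popularity.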
Procedia PDF Downloads 68
438 Community Observatory for Territorial Information Control and Management
Authors: A. Olivi, P. Reyes Cabrera
Abstract:
Ageing and urbanization are two of the main trends that characterize the twenty-first century. Both are accelerating especially quickly in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for innovative solutions to maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably; on the contrary, it can be considered another factor of inequality in our cities. Older people are a group that is particularly sensitive and vulnerable with respect to mobility. In this context, based on the ageing-in-place strategy and following a social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for, and validation of, solutions to satisfy the specific mobility and accessibility needs of urban aged people. Specifically, the Observatory intends to: i) promote the direct participation of the aged population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for reporting and updating territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and the application of digital technology, the Observatory designed and developed, together with aged people, a crowdsourcing platform for smartphones, called DIMEapp, for reporting environmental problems affecting mobility and accessibility. 
DIMEapp has been tested at the prototype level in two neighborhoods of the city of Valparaiso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms with the local government and the local community and ii) improve a local governance system that guides and regulates the allocation of goods and services destined to solve those problems.
Keywords: accessibility, ageing, city, digital technology, local governance
Procedia PDF Downloads 131
437 Detecting Potential Geothermal Sites by Using Well Logging, Geophysical and Remote Sensing Data at Siwa Oasis, Western Desert, Egypt
Authors: Amr S. Fahil, Eman Ghoneim
Abstract:
Egypt has made significant efforts during the past few years to discover substantial renewable energy sources. Regions in Egypt that have been identified for geothermal potential investigation include the Gulf of Suez and the Western Desert. One of the most promising sites for the development of Egypt's northern Western Desert is the Siwa Oasis. The geological setting of the oasis, a tectonically generated depression situated in the northernmost region of the Western Desert, supports the potential for substantial geothermal resources. Field data obtained from 27 deep oil wells along the Western Desert, including bottom-hole temperature (BHT) and depth-to-basement measurements as well as geological maps, were utilized in this study. The major lithological units, elevation, surface gradient, lineament density, and remote sensing multispectral and topographic data were mapped together to generate the related physiographic variables. Eleven thematic layers were integrated in a geographic information system (GIS) to create geothermal maps to aid in the detection of significant potential geothermal spots along the Siwa Oasis and its vicinity. The contribution of total magnetic intensity data with reduction to the pole (RTP) to this first investigation of the geothermal potential of the Siwa Oasis is applied in this work. The integration of geospatial data with magnetic field measurements showed a clear correlation between areas of high heat flow and magnetic anomalies. Such anomalies can be interpreted as related to the existence of high geothermal energy and dense rock, which also has high magnetic susceptibility. The outcomes indicated that the study area has a geothermal gradient ranging from 18 to 42 °C/km, a heat flow ranging from 24.7 to 111.3 mW·m⁻², a thermal conductivity of 1.3–2.65 W·m⁻¹·K⁻¹, and a measured maximum temperature of 100.7 °C. 
The southeastern part of the Siwa Oasis and some sporadic locations in the eastern section of the oasis were found to have significant geothermal potential; consequently, these locations are suitable for future geothermal investigation. The adopted method might be applied to identify significant prospective geothermal energy locations in other regions of Egypt and East Africa.
Keywords: magnetic data, SRTM, depth to basement, remote sensing, GIS, geothermal gradient, heat flow, thermal conductivity
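The gradient and heat-flow arithmetic behind these figures can be sketched as follows. The well depth, surface temperature, and conductivity below are assumed illustrative values, not the paper's data; only the BHT of 100.7 °C is taken from the abstract.

```python
# Hedged numeric sketch of geothermal gradient and conductive heat flow.
def geothermal_gradient(bht_c, surface_t_c, depth_km):
    """Gradient in °C/km from a single bottom-hole temperature reading."""
    return (bht_c - surface_t_c) / depth_km

def heat_flow_mw_m2(k_w_mk, gradient_c_per_km):
    """Conductive heat flow q = k * dT/dz; with k in W/(m K) and the
    gradient in °C/km, the unit factors cancel so q comes out in mW/m^2."""
    return k_w_mk * gradient_c_per_km

# Hypothetical well: BHT 100.7 °C (the abstract's maximum) at an assumed
# 2.5 km depth, 25 °C surface temperature, k = 2.0 W/(m K).
grad = geothermal_gradient(bht_c=100.7, surface_t_c=25.0, depth_km=2.5)
print(round(grad, 2), round(heat_flow_mw_m2(2.0, grad), 2))  # 30.28 60.56
```

Both illustrative results fall inside the ranges reported in the abstract (18-42 °C/km and 24.7-111.3 mW·m⁻²).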
Procedia PDF Downloads 117
436 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study
Authors: Thomas Arink, Isam Janajreh
Abstract:
The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450 ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification. Since the gasification process occurs at 900 ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive-flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) to provide the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) 
for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires
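A constrained Gibbs-minimization step of the kind the zero-dimensional model performs can be sketched on a deliberately tiny system. Everything below is a toy: the two-element C-O system, the standard Gibbs energies, and the feed amounts are invented to illustrate the method, not the authors' species set or data.

```python
# Toy Gibbs-energy minimization under element-balance constraints:
# equilibrium split of a C-O feed among CO, CO2, O2 at fixed T.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1200.0  # J/(mol K), K
# Assumed standard Gibbs energies of formation at T (J/mol), illustrative
g0 = np.array([-226_000.0, -396_000.0, 0.0])  # CO, CO2, O2
A = np.array([[1, 1, 0],    # moles of C per mole of each species
              [1, 2, 2]])   # moles of O per mole of each species
b = np.array([1.0, 1.8])    # total moles of C and O in the feed

def gibbs(n):
    """Dimensionless total Gibbs energy G/RT for ideal-gas mixing."""
    n = np.clip(n, 1e-12, None)
    return np.sum(n * (g0 / (R * T) + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([0.5, 0.5, 0.1]), method="SLSQP",
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(("CO", "CO2", "O2"), res.x.round(3))))
```

With the oxygen-deficient feed chosen here, the element balances pin the solution near 0.2 mol CO and 0.8 mol CO2; the full model would carry H, N, and S species and use Lagrange multipliers analytically rather than a generic solver.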
Procedia PDF Downloads 520
435 Safeguarding the Cloud: The Crucial Role of Technical Project Managers in Security Management for Cloud Environments
Authors: Samuel Owoade, Zainab Idowu, Idris Ajibade, Abel Uzoka
Abstract:
Cloud computing adoption continues to soar, with 83% of enterprise workloads estimated to be in the cloud by 2022. However, this rapid migration raises security concerns, necessitating strong security management solutions to safeguard sensitive data and essential applications. This paper investigates the critical role of technical project managers in orchestrating security management initiatives for cloud environments, evaluating their responsibilities, challenges, and best practices for ensuring the resilience and integrity of cloud infrastructures. Drawing from a comprehensive review of industry reports and interviews with cloud security experts, this research highlights the multifaceted landscape of security management in cloud environments. Despite the rapid adoption of cloud services, only 25% of organizations have matured their cloud security practices, indicating a pressing need for effective management strategies. This paper proposes a strategy framework adapted to the demands of technical project managers, outlining the key components of effective cloud security management. Notably, 76% of firms identify misconfiguration as a major source of cloud security incidents, underlining the significance of proactive risk assessment and constant monitoring. Furthermore, the study emphasizes the importance of technical project managers in facilitating cross-functional collaboration, bridging the gap between cybersecurity professionals, cloud architects, compliance officers, and IT operations teams. With 68% of firms reporting difficulties in integrating security policies into their cloud systems, effective communication and collaboration are critical to success. Case studies from industry leaders illustrate the practical application of security management projects in cloud settings. 
These examples demonstrate the importance of technical project managers in using their expertise to overcome obstacles and generate meaningful outcomes, with 92% of firms reporting improved security practices after implementing proactive security management tactics. In conclusion, this research underscores the critical role of technical project managers in safeguarding cloud environments against evolving threats. By embracing their role as guardians of the cloud realm, project managers can mitigate risks, optimize resource utilization, and uphold the trust and integrity of cloud infrastructures in an era of digital transformation.
Keywords: cloud security, security management, technical project management, cybersecurity, cloud infrastructure, risk management, compliance
Procedia PDF Downloads 51
434 Technological Exploitation and User Experience in Product Innovation: The Case Study of the High-Tech Mask
Authors: Venere Ferraro, Silvia Ferraris
Abstract:
We live in a world pervaded by new advanced technologies that have been changing the way we live and experience our surroundings. In addition, new technologies enable product innovation at different levels. Nevertheless, innovation does not lie just in technological development and its hard aspects but also in its meaningful use for the final user. In order to generate innovative products, a new perspective is needed: a shift from an instrument-oriented view of technology towards a broader view that includes aspects like aesthetics, acceptance, comfort, and sociability. In many businesses, the user experience of the product is considered the key battlefield for achieving product innovation (Holland, 2011). The use of new technologies is indeed useless without attention to the user experience. This paper presents a workshop activity conducted at the Design School of Politecnico di Milano in collaboration with Chiba University and aimed at generating innovative design concepts for a high-tech mask. The students were asked to design the user experience of a new mask by exploiting emerging technologies such as wearable sensors and information and communication technology (ICT) for a chosen field of application: safety or sport. When it comes to the user experience, the mask is a very challenging design product because it involves aspects of product interaction and, most importantly, psychological and cultural aspects related to its impact on facial expression. Since the mask strongly affects the facial expression, it can be a barrier to hide behind, or it can be a means to enhance the user's communication with others. The main request for the students was to take a user-centered approach: to go beyond the instrumental aspects of product use and usability and focus on the user experience by shaping the technology in a desirable and meaningful way for the user, reasoning at the metaphorical and cultural level of the product. 
During the one-week workshop, students were asked to approach the design process through (i) the research phase: an in-depth analysis of the user and field of application (safety or sport) to set the design space (brief) and user scenario; (ii) idea generation; and (iii) idea development. This text briefly goes through the meaning of product innovation and the use and application of wearable technologies, and then focuses on user experience design in contrast with the technology-driven approach in the field of product innovation. Finally, the authors describe the workshop activity and the concepts developed by the students, stressing the important role of user experience design in new product development.
Keywords: product innovation, user experience, technological exploitation, wearable technologies
Procedia PDF Downloads 346
433 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach
Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf
Abstract:
The ecosystem services (ES) concept has helped draw attention to the benefits that ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps in undertaking conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision-making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize, and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods, and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES, and for the importance attributed to each service, were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity for provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands, and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES shows that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly for regulating and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies, and wetlands in those zones. 
This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders and the spatial identification of areas with a high capacity to provide services. These findings could be helpful as a reference by planners and policymakers to develop landscape management strategies oriented to preserve the supply of services in both regions.Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization
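As a brief sketch of how the weighted linear combination (WLC) step described above might aggregate expert scores into an overall supply index; the land-cover classes, capacity scores and expert weights below are illustrative placeholders, not the study's survey data:

```python
# Illustrative weighted linear combination (WLC) of expert scores.
# Land-cover classes, per-service capacity scores (0-5) and the
# importance weights are made-up examples, not the study's data.

def wlc_supply_index(capacity_scores, weights):
    """Overall supply index: weighted sum of capacity scores,
    normalized by the total weight of the services considered."""
    total_w = sum(weights[s] for s in capacity_scores)
    return sum(weights[s] * v for s, v in capacity_scores.items()) / total_w

# capacity of one land-use/cover type to supply three ES groups
native_forest = {"provisioning": 3, "regulating": 5, "cultural": 4}
bare_area     = {"provisioning": 1, "regulating": 0, "cultural": 0}

# importance weights elicited from experts (hypothetical)
weights = {"provisioning": 0.2, "regulating": 0.5, "cultural": 0.3}

print(wlc_supply_index(native_forest, weights))  # high supply capacity
print(wlc_supply_index(bare_area, weights))      # very low supply capacity
```

Applying the same index cell-by-cell over a GIS land-cover raster yields the kind of supply maps the study describes.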
Procedia PDF Downloads 126
432 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model of the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the surrounding air quality. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature and other variables. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine and provides an air pollution forecast for the next 12 hours related to local industrial factories. The outcomes of the pollution forecasting are produced hourly with a grid resolution of 1 km * 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspension Particulates, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration. 
These heterogeneous time-series qualitative and quantitative data were successfully combined into a cloud-based causal inference engine that makes factory management control practicable. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operation and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
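A minimal sketch of the particle tracking and random walk step mentioned above: each particle is advected by the wind vector and diffused by a Gaussian random walk. The wind components, diffusion coefficient and time step are assumed placeholder values, not the engine's actual configuration:

```python
import math
import random

def step_particles(particles, wind_u, wind_v, diff_coef, dt, rng):
    """Advance each (x, y) particle one step: advection by the wind
    vector plus a Gaussian random walk representing turbulent diffusion."""
    sigma = math.sqrt(2.0 * diff_coef * dt)  # random-walk step scale
    return [(x + wind_u * dt + rng.gauss(0.0, sigma),
             y + wind_v * dt + rng.gauss(0.0, sigma))
            for x, y in particles]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
# 1,000 particles released at a factory stack placed at the origin
particles = [(0.0, 0.0)] * 1000
for _ in range(12):  # twelve hourly steps, matching the 12-hour forecast
    particles = step_particles(particles, wind_u=2.0, wind_v=0.5,
                               diff_coef=50.0, dt=1.0, rng=rng)

mean_x = sum(p[0] for p in particles) / len(particles)
mean_y = sum(p[1] for p in particles) / len(particles)
print(mean_x, mean_y)  # plume centre drifts downwind of the source
```

Binning the final particle positions onto a 1 km * 1 km grid would give the concentration field attributable to that source.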
Procedia PDF Downloads 87
431 Delivering on Infrastructure Maintenance for Socio-Economic Growth: Exploration of South African Infrastructure for a Sustained Maintenance Strategy
Authors: Deenadayalan Govender
Abstract:
In South Africa, as in nations globally, the prevailing tangible link between people and the state is public infrastructure. Services delivered through infrastructure to the people and to the state form a critical enabler of social development in communities and economic development in the country. In this regard, infrastructure, being the backbone of a nation's prosperity, should ideally be effectively maintained for seamless delivery of services. South African infrastructure is in a state of deterioration, which is leading to infrastructure dysfunction and collapse and is negatively affecting the development of the economy. This deterioration stems from deficiencies in maintenance practices and strategies. Since the birth of South African democracy, government has pursued socio-economic transformation and the delivery of critical basic services to narrow the broadening boundaries of disparity. In this regard, the National Infrastructure Plan, borne from strategies encompassed in the National Development Plan, is given priority by government in delivering strategic catalytic infrastructure projects. The National Infrastructure Plan is perceived to be the key to unlocking opportunities that generate economic growth, curb joblessness, alleviate poverty, create new entrepreneurial prospects, and mitigate the effects of population expansion and rapid urbanisation. However, the socio-economic transformation benefits of new infrastructure spending are not being realised as initially anticipated. South Africa is currently in a state of weakening economic growth, with further amassed levels of joblessness, unremitting poverty and inequality. Due to investor reluctance, solicitation of strategic infrastructure funding is progressively becoming a debilitating challenge in all government institutions. Exacerbating these circumstances further is the substandard functionality of existing infrastructure, a consequence of inadequate maintenance practices. 
This in-depth multi-sectoral study into the state of infrastructure seeks to better understand the principal reasons for the regression in infrastructure functionality; furthermore, it focuses on prioritised investigations into progressive maintenance strategies. The resultant recommendations reveal enhanced maintenance strategies that aim to capitalise on infrastructure design life and give special emphasis to long-term socio-economic development imperatives. The research method is principally based on descriptive methods (survey, historical, content analysis, qualitative).
Keywords: infrastructure, maintenance, socio-economic, strategies
Procedia PDF Downloads 140
430 Value Generation of Construction and Demolition Waste Originated in the Building Rehabilitation to Improve Energy Efficiency; From Waste to Resources
Authors: Mercedes Del Rio Merino, Jaime Santacruz Astorqui, Paola Villoria Saez, Carmen Viñas Arrebola
Abstract:
The lack of treatment of construction and demolition waste (CDW) is a problem that must be solved immediately. It is estimated that, worldwide, the failure to reuse CDW generates an increase in the use of new materials of close to 20% of the total value of the materials used. The problem is even greater when these wastes are considered hazardous, because their final disposal may also generate significant contamination. Therefore, the possibility of including CDW in the manufacturing of building materials represents an interesting alternative to ensure their use and to reduce their possible risk. In this context, in recent years many studies have been carried out to analyze the viability of using CDW as a substitute for traditional raw materials of high environmental impact. Nevertheless, much remains to be done, because these works generally characterize materials without developing the specific applications that would give construction agents the guarantees required by their projects. Therefore, the involvement of all the actors in the life cycle of these new construction materials is necessary, as is the promotion of their use through, for example, the definition of standards, tax advantages or market intervention. This paper presents the main findings reached in the "Waste to Resources (W2R)" project since it began in October 2014. The main goal of the project is to develop new materials, elements and construction systems, manufactured from CDW, to be used in improving the energy efficiency of buildings. Other objectives of the project are: to quantify the CDW generated in energy rehabilitation works, specifically waste from the building envelope; and to study the traceability of the CDW generated and promote its reuse and recycling in order to close the life cycle of buildings, generating zero waste and reducing the ecological footprint of the construction sector. 
This paper determines the most important aspects to consider during the design of new constructive solutions that improve the energy efficiency of buildings, and identifies which materials made with CDW would be the most suitable for them. Also, a survey was carried out to select best practices for achieving "close to zero waste" refurbishment. Finally, several pilot rehabilitation works conforming to the parameters analyzed in the project were selected, in order to apply the results and thus compare theory with reality. Acknowledgements: This research was supported by the Spanish State Secretariat for Research, Development and Innovation of the Ministry of Economy and Competitiveness under the "Waste 2 Resources" Project (BIA2013-43061-R).
Keywords: building waste, construction and demolition waste, recycling, resources
Procedia PDF Downloads 250
429 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to figure out reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channel reservoirs. One solution is to generate multiple equiprobable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme, based on distance-based clustering, for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity, as it is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to project the models onto a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use particle swarm optimization (PSO), a useful global search algorithm, for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes much time due to its use of many particles and iterations. 
In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations.
Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
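The Hausdorff distance used above to compare realizations can be sketched in a few lines. The two binary facies grids below are toy examples, not SNESIM output; 1 marks a sand (channel) cell:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    def directed(p_set, q_set):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in q_set)
                   for px, py in p_set)
    return max(directed(a, b), directed(b, a))

def channel_cells(grid):
    """Coordinates of sand-facies cells (value 1) in a binary facies grid."""
    return [(i, j) for i, row in enumerate(grid)
            for j, v in enumerate(row) if v == 1]

# two toy channel realizations: similar pattern, shifted by one column
model_a = [[1, 1, 0, 0],
           [0, 1, 1, 0],
           [0, 0, 1, 1]]
model_b = [[0, 1, 1, 0],
           [0, 0, 1, 1],
           [0, 0, 0, 1]]

d = hausdorff(channel_cells(model_a), channel_cells(model_b))
print(d)  # small distance: the two models would fall in the same cluster
```

The pairwise distance matrix over all realizations is what feeds the MDS projection and K-means grouping described in the abstract.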
Procedia PDF Downloads 144
428 Cognitive Translation and Conceptual Wine Tasting Metaphors: A Corpus-Based Research
Authors: Christine Demaecker
Abstract:
Many researchers have underlined the importance of metaphors in specialised language. Their use in specific domains helps us understand the conceptualisations used to communicate new ideas or difficult topics. Within the wide area of specialised discourse, wine tasting is a very specific example because it is almost exclusively metaphoric. Wine tasting metaphors express various conceptualisations. They are not linguistic but rather conceptual, as defined by Lakoff & Johnson. They correspond to the linguistic expression of a mental projection from a well-known or more concrete source domain onto the target domain, which is the taste of wine. But unlike most specialised terminologies, the vocabulary is never clearly defined. When metaphorical terms are listed in dictionaries, their definitions remain vague, unclear, and circular. They cannot be replaced by literal linguistic expressions. This makes it impossible to transfer them into another language with traditional linguistic translation methods. This qualitative research investigates whether wine tasting metaphors could instead be translated with the cognitive translation process, as described by Nili Mandelblit (1995). The research is based on a corpus compiled from two high-profile wine guides: Parker's Wine Buyer's Guide and its translation into French, and the Guide Hachette des Vins and its translation into English. In this small corpus, with a total of 68,826 words, 170 metaphoric expressions were identified in the original English text and 180 in the original French text. They were selected with the MIPVU Metaphor Identification Procedure developed at the Vrije Universiteit Amsterdam. The selection demonstrates that both languages use the same set of conceptualisations, which are often combined in wine tasting notes, creating conceptual integrations or blends. The comparison of expressions in the source and target texts also demonstrates the use of the cognitive translation approach. 
In accordance with the principle of relevance, the translation always uses target-language conceptualisations, but compared to the original, the highlighting of the projection is often different. Also, when original metaphors are complex, with a combination of conceptualisations, at least one element of the original metaphor underlies the target expression. This approach integrates well into Lederer's interpretative model of translation (2006). In this triangular model, the transfer of conceptualisation could be included at the level of 'deverbalisation/reverbalisation', the crucial stage of the model, where the extraction of meaning combines with the encyclopedic background to generate the target text.
Keywords: cognitive translation, conceptual integration, conceptual metaphor, interpretative model of translation, wine tasting metaphor
Procedia PDF Downloads 131
427 The Structure and Development of a Wing Tip Vortex under the Effect of Synthetic Jet Actuation
Authors: Marouen Dghim, Mohsen Ferchichi
Abstract:
The effect of synthetic jet actuation on the roll-up and development of a wing tip vortex downstream of a square-tipped rectangular wing was investigated experimentally using hotwire anemometry. The wing is equipped with a hollow cavity designed to generate high-aspect-ratio synthetic jets blowing at an angle with respect to the spanwise direction. The structure of the wing tip vortex under fluidic actuation was examined at a chord Reynolds number Re_c=8×10^4. An extensive qualitative study of the effect of actuation on the spanwise pressure distribution at c⁄4 was carried out using pressure scanner measurements in order to determine the optimal actuation parameters, namely the blowing momentum coefficient, Cμ, and the non-dimensionalized actuation frequency, F^+. This study showed that the optimal actuation frequencies of the synthetic jet lie within the range amplified by both long- and short-wave instabilities, where the spanwise pressure coefficients exhibited a considerable decrease of up to 60%. The vortex appeared larger and more diffuse than the natural vortex. Operating the synthetic jet seemed to introduce unsteadiness and turbulence into the vortex core. Based on the 'a priori' selected optimal parameters, results of the hotwire wake survey indicated that the actuation achieved a reduction and broadening of the axial velocity deficit. A decrease in the peak tangential velocity, associated with an increase in the vortex core radius, was observed as a result of the accelerated radial transport of angular momentum. The peak vorticity level near the core was also found to be largely diffused, a direct result of the increased turbulent mixing by small-scale vortices within the core, leaving the wing tip vortex with reduced strength and a diffused core. 
It is believed that the increased turbulence within the vortex due to synthetic jet control was the main mechanism behind the decreased strength and increased size of the wing tip vortex as it evolves downstream. A comparison with a 'non-optimal' case is included to demonstrate the effectiveness of selecting the appropriate control parameters. The synthetic jet will be operated at various actuation configurations, and an extensive parametric study is planned to determine the optimal actuation parameters.
Keywords: flow control, hotwire anemometry, synthetic jet, wing tip vortex
Procedia PDF Downloads 436
426 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes
Authors: Ipek Kivanc, Demet Ozgur-Unluakin
Abstract:
Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to these improvements, the complexity of systems has increased. Hence, systems are becoming more uncertain as a result of this increased complexity, which leads to higher costs. Coping with this situation is challenging, so efficient planning of maintenance activities in such systems is becoming essential. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems. However, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node with multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. On the other hand, factored representation of POMDPs makes it possible to simplify the state space by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem of a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers enabling approximate solutions. 
We also generate predefined policies based on corrective and proactive maintenance strategies. We execute the policies on the empirical problem over many replications and compare their performances under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no. 117M587.
Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes
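At the heart of any POMDP-based maintenance policy is the belief update over a component's hidden condition. A minimal sketch for one two-state component follows; the transition and observation probabilities are invented for illustration, not taken from the empirical model:

```python
def belief_update(belief, transition, observation, obs):
    """Bayes filter for one hidden component:
    b'(s') is proportional to O(obs | s') * sum_s T(s' | s) * b(s)."""
    n = len(belief)
    predicted = [sum(transition[s][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    unnorm = [observation[s2][obs] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# states: 0 = healthy, 1 = degraded (probabilities are made up)
T = [[0.9, 0.1],   # a healthy component stays healthy with p = 0.9
     [0.0, 1.0]]   # degradation is absorbing without maintenance
O = [[0.8, 0.2],   # a healthy component mostly signals "ok" (obs 0)
     [0.3, 0.7]]   # a degraded component mostly signals "alarm" (obs 1)

b = [1.0, 0.0]                     # start certain the component is healthy
b = belief_update(b, T, O, obs=1)  # an alarm is observed
print(b)  # belief mass shifts toward the degraded state
```

In a factored POMDP this update runs per component over the DBN structure, which is what keeps the representation compact for the four-component system.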
Procedia PDF Downloads 135
425 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts
Authors: Yawen Dai, Ping Cheng, Jian Ru Gong
Abstract:
Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies to address the energy crisis and environmental pollution of modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far lower than the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half reaction on photoanodes is a significant limiting factor of PEC device efficiency, and electrocatalysts (ECs) are often deposited on SCs to accelerate hole injection for the OER. However, an active EC does not guarantee enhanced PEC performance, since the newly emerged SC-EC interface complicates the interfacial charge behavior. Herein, α-Fe2O3 photoanodes coated with Co3O4 and CoO ECs are taken as the model system to glean a fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This is like a double-edged sword: it benefits the multi-hole OER, but it also aggravates SC-EC interfacial charge recombination due to Coulomb attraction, leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co3O4 has a low hole storage capacity, which brings limited interfacial charge recombination, so the increased surface holes can be efficiently utilized for the OER to generate an enhanced photocurrent. 
In contrast, CoO has an overlarge hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe2O3 is improved by Co3O4 but decreased by CoO, despite the similar electrocatalytic activity of the two ECs. A first-principles calculation was conducted to further reveal how the charge storage capacity depends on the EC's intrinsic properties, demonstrating that the larger hole storage capacity of CoO compared to Co3O4 is determined by their Co valence states and original Fermi levels. This study proposes a new strategy to manipulate interfacial charge behavior and the resultant PEC performance through the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.
Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting
Procedia PDF Downloads 141
424 Process Optimization for 2205 Duplex Stainless Steel by Laser Metal Deposition
Authors: Siri Marthe Arbo, Afaf Saai, Sture Sørli, Mette Nedreberg
Abstract:
This work aims to establish a reliable approach for optimizing a Laser Metal Deposition (LMD) process for a critical maritime component, based on the material properties and structural performance required by the maritime industry. The component of interest is a water jet impeller, for which specific requirements for material properties are defined. The developed approach is based on the assessment of the effects of LMD process parameters on the microstructure and material performance of standard AM 2205 duplex stainless steel powder. Duplex stainless steel offers attractive properties for maritime applications, combining high strength, enhanced ductility and excellent corrosion resistance due to its specific amounts of ferrite and austenite. These properties are strongly affected by the microstructural characteristics as well as by microstructural defects such as porosity and welding defects, all strongly influenced by the chosen LMD process parameters. In this study, the influences of deposition speed and heat input were evaluated. First, their effects on the microstructure characteristics, including ferrite/austenite fraction and the amount of porosity and welding defects, were assessed. Then, the achieved mechanical properties were evaluated by standard testing methods, measuring the hardness, tensile strength and elongation, bending force and impact energy. The measured properties were compared to the requirements of the water jet impeller. The results show that the required amounts of ferrite and austenite can be achieved directly by the LMD process without post-weld heat treatments. No intermetallic phases were observed in the material produced with the investigated process parameters. A high deposition speed was found to reduce ductility due to the formation of welding defects. An increased heat input was associated with reduced strength due to coarsening of the ferrite/austenite microstructure. 
The microstructure characterizations and measured mechanical performance demonstrate the great potential of LMD and generate a valuable database for optimizing the process for duplex stainless steels.
Keywords: duplex stainless steel, laser metal deposition, process optimization, microstructure, mechanical properties
Procedia PDF Downloads 218
423 Constitutive Flo1p Expression on Strains Bearing Deletions in Genes Involved in Cell Wall Biogenesis
Authors: Lethukuthula Ngobese, Abin Gupthar, Patrick Govender
Abstract:
The ability of yeast cell wall-derived mannoproteins (glycoproteins) to contribute positively to oenological properties has been a key factor stimulating research initiatives into these industrially important glycoproteins. In addition, from a fundamental research perspective, yeast cell wall glycoproteins are involved in a wide range of biological interactions. To date, and to the best of our knowledge, our understanding of the fine molecular structure of these mannoproteins is fairly limited. Generally, the amino acid sequences of their protein moieties have been established from structural and functional analysis of the genomic sequences of these yeasts, whilst far less information is available on the glycosyl moieties of these mannoproteins. A novel strategy was devised in this study that entails the genetic engineering of yeast strains that over-express and release cell wall-associated glycoproteins into the liquid growth medium. To this end, the Flo1p mannoprotein was overexpressed in Saccharomyces cerevisiae laboratory strains bearing specific deletions in the cell wall biosynthesis genes KNR4 and GPI7, which have previously been shown to extracellularly hyper-secrete cell wall-associated glycoproteins. A polymerase chain reaction (PCR)-based cloning strategy was employed to generate transgenic yeast strains in which the native cell wall FLO1 glycoprotein-encoding gene is brought under the transcriptional control of the constitutive PGK1 promoter. The modified Helm's flocculation assay was employed to assess the flocculation intensities of the Flo1p-overexpressing wild type and deletion mutants as an indirect measure of their abilities to release the desired mannoprotein. The flocculation intensities of the transformed strains were assessed, and all the strains showed similar intensities (>98% flocculation). 
To assess whether mannoproteins were released into the growth medium, the supernatant of each strain was subjected to the BCA protein assay; the transformed Δknr4 strain showed a considerable increase in protein levels. This study has the potential to produce mannoproteins in sufficient quantities to be employed in future investigations of their molecular structures and mechanisms of interaction, to the benefit of both fundamental and industrial applications.
Keywords: glycoproteins, genetic engineering, flocculation, over-expression
Procedia PDF Downloads 415
422 Volume Estimation of Trees: An Exploratory Study on Pterocarpus erinaceus Logging Operations within Forest Transition and Savannah Ecological Zones of Ghana
Authors: Albert Kwabena Osei Konadu
Abstract:
Pterocarpus erinaceus, also known as rosewood, is a tropical wood endemic to the forest-savannah transition zones of the middle and northern portions of Ghana. Its economic viability has made it increasingly popular and in high demand, leading to widespread conservation concerns. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to deficiencies in the monitoring of logging operations and the quantification of harvested tree volumes. The tree information form (TIF), a volume estimation and tracking regime, has proven to be an effective, sustainable management tool for regulating timber resource extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of harvested rosewood within forest-savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets and conveying vessels. These TIFs were field-tested to deduce the most viable option for tracking and estimating harvested rosewood volumes, using Smalian's formula and the cubic volume estimation formula. Overall, four districts were covered, with the individual billet, stacked billet and conveying vessel scenarios registering mean volumes of 25.83 m3, 45.08 m3 and 32.6 m3, respectively. These derived volumes were validated by benchmarking against the assigned volumes of the Forestry Commission of Ghana and the known standard volumes of conveying vessels. The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. 
The research revealed that the conveying vessel route is the most viable volume estimation and tracking regime for the sustainable management of Pterocarpus erinaceus, as it provided a more practical volume estimate and data extraction protocol.
Keywords: convention on international trade in endangered species, cubic volume formula, forest transition savannah zones, pterocarpus erinaceus, Smalian's volume formula, tree information form
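Smalian's formula estimates a log's volume from the cross-sectional areas of its two ends; a brief sketch follows, where the billet dimensions are hypothetical examples rather than field data:

```python
import math

def smalian_volume(d_small, d_large, length):
    """Smalian's formula: V = L * (A1 + A2) / 2, where A1 and A2 are
    the end cross-sectional areas, each computed as pi * d**2 / 4."""
    a1 = math.pi * d_small ** 2 / 4.0
    a2 = math.pi * d_large ** 2 / 4.0
    return length * (a1 + a2) / 2.0

# hypothetical rosewood billet: end diameters 0.30 m and 0.35 m, 2.2 m long
v = smalian_volume(0.30, 0.35, 2.2)
print(round(v, 4))  # volume in cubic metres

# a stack's volume is the sum over its billets, as recorded on a TIF
stack = [(0.30, 0.35, 2.2), (0.25, 0.28, 2.0)]
total = sum(smalian_volume(*billet) for billet in stack)
print(round(total, 4))
```

Summing billet volumes per stack, and stacks per conveying vessel, gives the aggregation chain the three TIF scenarios compare.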
Procedia PDF Downloads 107
421 How Is a Machine-Translated Literary Text Organized in Coherence? An Analysis Based upon Theme-Rheme Structure
Abstract:
With the ultimate goal to automatically generate translated texts with high quality, machine translation has made tremendous improvements. However, its translations of literary works are still plagued with problems in coherence, esp. the translation between distant language pairs. One of the causes of the problems is probably the lack of linguistic knowledge to be incorporated into the training of machine translation systems. In order to enable readers to better understand the problems of machine translation in coherence, to seek out the potential knowledge to be incorporated, and thus to improve the quality of machine translation products, this study applies Theme-Rheme structure to examine how a machine-translated literary text is organized and developed in terms of coherence. Theme-Rheme structure in Systemic Functional Linguistics is a useful tool for analysis of textual coherence. Theme is the departure point of a clause and Rheme is the rest of the clause. In a text, as Themes and Rhemes may be connected with each other in meaning, they form thematic and rhematic progressions throughout the text. Based on this structure, we can look into how a text is organized and developed in terms of coherence. Methodologically, we chose Chinese and English as the language pair to be studied. Specifically, we built a comparable corpus with two modes of English translations, viz. machine translation (MT) and human translation (HT) of one Chinese literary source text. The translated texts were annotated with Themes, Rhemes and their progressions throughout the texts. The annotated texts were analyzed from two respects, the different types of Themes functioning differently in achieving coherence, and the different types of thematic and rhematic progressions functioning differently in constructing texts. 
By analyzing and contrasting the two modes of translation, it is found that, compared with the HT: 1) the MT features "pseudo-coherence", with many ill-connected fragments of information joined by "and"; 2) the MT system produces a static and less interconnected text that reads like a list; these two points, in turn, make the organization and development of the MT less coherent than those of the HT; 3) in a finding novel to traditional and previous studies, Rhemes do contribute to textual connection and coherence, though less than Themes do, and are thus worthy of notice in further studies. Hence, the findings suggest that Theme-Rheme structure be applied to measuring and assessing the coherence of machine translation and be incorporated into the training of machine translation systems, and that Rheme be taken into account when studying the textual coherence of both MT and HT.
Keywords: coherence, corpus-based, literary translation, machine translation, Theme-Rheme structure
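The thematic progressions the study annotates can be illustrated with a toy classifier. The sketch below is not the authors' annotation scheme: it uses a crude content-word-overlap heuristic, where a human annotator would judge semantic links, to label two classic progression types (constant Theme, and simple linear progression in which a new Theme picks up the previous Rheme):

```python
STOPWORDS = {"the", "a", "an"}

def content_words(text):
    """Lowercased words of a clause segment, minus trivial function words."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def classify_progressions(clauses):
    """Label each adjacent pair of (Theme, Rheme) clause annotations as a
    'constant' progression (Theme repeated), a 'simple linear' progression
    (new Theme picked up from the previous Rheme), or 'other'. Word overlap
    is a crude stand-in for annotated semantic links."""
    labels = []
    for (t1, r1), (t2, _r2) in zip(clauses, clauses[1:]):
        cur_theme = content_words(t2)
        if cur_theme & content_words(t1):
            labels.append("constant")
        elif cur_theme & content_words(r1):
            labels.append("simple linear")
        else:
            labels.append("other")
    return labels

# Invented example clauses, annotated as (Theme, Rheme) pairs
clauses = [
    ("The translator", "produced a draft"),
    ("The draft", "contained errors"),
    ("The draft", "was then revised"),
]
labels = classify_progressions(clauses)  # ['simple linear', 'constant']
```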
Procedia PDF Downloads 207
420 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models
Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha
Abstract:
This paper presents a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian tendency toward a unitary regime. Despite the hegemony of the federative form after 1989, Brazil has a historical frame of political centralization that remains under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states retain significant authority. The hypothesis holds that the number of alternative criteria for federalization – which can generate political centralization – and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States vs. Morrison (2000), on gender-motivated violence; Gonzales vs. Raich (2005), on medical use of marijuana; and United States vs. Lopez (1995), on firearm possession in school zones. These most relevant cases on federalism in the recent activity of the Supreme Court indicate a determinant parameter of deliberation: the commerce clause. After observing the criteria used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this sense, it is possible to identify the legal treatment these controversies would likely receive in that country. The decision-making reveals some deliberative parameters that characterize each federative model. At the end of the research, the precedents of the Rehnquist Court promote a broad revival of the federalism debate, establishing the commerce clause as a secure criterion for upholding or rejecting the necessity of centralization – even with decisions considered conservative.
By contrast, Brazilian federalism resolves these controversies in a formalist fashion, through numerous and comprehensive – sometimes casuistic – normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil gives preference to normative mechanisms that start from centralization.
Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority
Procedia PDF Downloads 516
419 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application
Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng
Abstract:
In the past two decades, concern about protecting skin from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays that can reach the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents are one of the most significant tools for protecting the skin from UV irradiation, and they are either organic or inorganic. Developing inorganic UV blockers is essential, as they provide efficient UV protection over a wider spectrum than organic filters. Furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which absorb the UV irradiation. Nowadays, most inorganic UV-blocking filters are based on titanium dioxide (TiO2) and zinc oxide (ZnO). ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation, and this relates to many advantages, such as its modest refractive index (2.0), its absorption of a small fraction of solar radiation in the UV range at wavelengths equal to or less than 385 nm, its high probability of recombination of photogenerated carriers (electrons and holes), its large direct band gap, its high exciton binding energy, its non-hazardous nature, and its high chemical and physical stability, which make it transparent in the visible region while retaining UV-protective activity. A significant issue for ZnO use in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. Therefore, it is essential to make a non-photocatalytic material through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO by using inorganic surface modifiers.
The doping of ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. These doped materials exhibit greater performance than undoped ZnO towards increasing photocatalytic activity in visible light. Metal doping can therefore be an effective technique for modifying the photocatalytic activity of ZnO. In the current work, by contrast, we successfully reduce the photocatalytic activity of ZnO through Na doping, with samples fabricated via sol-gel and hydrothermal methods.
Keywords: photocatalytic, ROS, UVA, ZnO
Procedia PDF Downloads 144
418 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management
Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang
Abstract:
A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforced steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforced steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of reporting the quantities completed each day needs improvement in terms of both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process. The capability of virtual reality (VR) allows a user to view a building from a first-person viewpoint. Thus, this study proposes an innovative system integrating VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the above daily construction management tasks. The use of VR allows a system user to be present in a virtual building and thus to assess the construction progress more objectively from the office. This VR- and BIM-based system is also supported by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can walk through a BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite.
The superintendent then specifies a percentage (e.g., 20%, 50%, or 100%) of completion for each identified building object based on his observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
Keywords: building information model, construction management, quantity takeoffs, virtual reality
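The completed-quantity calculation described in the abstract is a simple multiplication against the BIM takeoff. A minimal sketch follows, with hypothetical object and cost-item data standing in for the integrated BIM/QTO/cost database the system maintains:

```python
from dataclasses import dataclass

@dataclass
class CostItem:
    name: str
    full_quantity: float  # total quantity taken off from the BIM model
    unit: str

# Hypothetical takeoff records for one building object, standing in
# for the Revit-linked database described in the abstract.
TAKEOFF = {
    "column_C12": [
        CostItem("reinforcing steel", 850.0, "kg"),
        CostItem("concrete", 2.4, "m3"),
    ],
}

def completed_quantities(object_id, percent_complete):
    """Daily completed quantity = the superintendent's observed completion
    percentage multiplied by each linked cost item's full BIM quantity."""
    return {
        item.name: round(item.full_quantity * percent_complete / 100.0, 3)
        for item in TAKEOFF[object_id]
    }

daily = completed_quantities("column_C12", 50.0)  # object marked 50% complete
```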
Procedia PDF Downloads 132
417 Logistical Optimization of Nuclear Waste Flows during Decommissioning
Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. Ladier, S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet
Abstract:
A large amount of technological equipment and many highly skilled workers have to be mobilized over long periods of time during nuclear decommissioning processes. The related operations generate complex flows of waste and high inventory levels, associated with information flows of heterogeneous types. Taking into account that more than 10 decommissioning operations are ongoing in France and about 50 are expected by 2025, a big challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the electricity cost and therefore on the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost, and delay efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and to optimizing waste evacuation by means of a logistic approach. The target is to create an easy-to-handle tool capable of i) predicting waste flows and proposing the best decommissioning logistics scenario and ii) managing information during all the steps of the process and following the progress: planning, resources, delays, authorizations, saturation zones, waste volume, etc. In this article, we present our results from the simulation of nuclear waste flows during the decommissioning process, including discrete-event simulation supported by FLEXSIM 3-D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and optimizing it through improvement actions.
This type of simulation, executed before the start of the process operations on the basis of a first conception, allows 'what-if' process evaluation and helps to ensure the quality of the process in an uncertain context. The simulation of nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing the planning and the use of resources, transitional storage, and expensive radioactive waste containers. Additional benefits are expected for the governance system of the waste evacuation, since it will enable a shared responsibility for the waste flows.
Keywords: nuclear decommissioning, logistical optimization, decision-support framework, waste management
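As a hedged illustration of the discrete-event approach (the project itself uses FLEXSIM 3-D, not this code), a few lines of Python with a priority queue can model waste lots arriving in a transitional storage buffer and periodic container evacuations, reporting the peak buffer level as a simple saturation indicator; all figures are invented:

```python
import heapq

def simulate_waste_flows(arrivals, container_capacity, pickup_interval, horizon):
    """Toy discrete-event model of decommissioning waste logistics.
    `arrivals` is a list of (time, tonnes) deposits into a transitional
    storage buffer; every `pickup_interval` a container evacuates up to
    `container_capacity` tonnes. Returns the peak buffer level reached,
    a crude proxy for storage-zone saturation."""
    events = []
    for t, amount in arrivals:
        heapq.heappush(events, (t, 1, amount))   # kind 1: waste arrival
    t = pickup_interval
    while t <= horizon:
        heapq.heappush(events, (t, 0, 0.0))      # kind 0: evacuation (served first on ties)
        t += pickup_interval
    buffer_level = peak = 0.0
    while events:
        _, kind, amount = heapq.heappop(events)
        if kind == 1:
            buffer_level += amount
        else:
            buffer_level = max(0.0, buffer_level - container_capacity)
        peak = max(peak, buffer_level)
    return peak

# Hypothetical scenario: three 4 t waste lots, one 5 t container every 2 days
peak = simulate_waste_flows([(1, 4.0), (2, 4.0), (3, 4.0)], 5.0, 2, 6)
```

Running scenarios like this before operations start is the 'what-if' evaluation the abstract describes: it exposes when the buffer would saturate and lets planners adjust pickup frequency or container sizing.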
Procedia PDF Downloads 323
416 Demographic Determinants of Spatial Patterns of Urban Crime
Authors: Natalia Sypion-Dutkowska
Abstract:
The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups that are in different social situations (property, housing, work) and that represent different generations with different behavior patterns. These are the communities from which both offenders and victims of crime come. The analysis of the literature and of national police reports gives rise to hypotheses about the ability of a given age group to generate crime, both as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect socio-demographic determinants of spatial patterns of urban crime. A multi-feature classification of districts was also carried out, with the specific crime rates as diagnostic features. In this way, areas with a similar structure of socio-demographic determinants of spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland. It has about 400,000 inhabitants, and its area is about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic, and cultural capital of the region. It also has a seaport and an airport. Moreover, according to ESPON 2007, Szczecin is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts, the auxiliary administrative units of the municipal government. The population of each of them in 2015-17 was divided into 8 age groups: infants (0-2 yrs.), children (3-11 yrs.), teens (12-17 yrs.), younger adults (18-30 yrs.), middle-age adults (31-45 yrs.), older adults (46-65 yrs.), early older (66-80 yrs.), and late older (from 81 yrs.).
The crimes reported in 2015-17 in each of the districts were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary into an apartment, break-in into a commercial facility, car break-in, break-in into other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each of the districts. The analysis was carried out on an intra-city scale, which is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942.
Keywords: age groups, determinants of crime, spatial crime pattern, urban crime
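A specific crime rate, as defined in the abstract, divides offences linked to an age group by that group's own population rather than by the district total. A minimal sketch with hypothetical figures:

```python
def specific_crime_rates(crimes_by_group, population_by_group, per=10_000):
    """Specific crime rate: offences attributed to an age group divided by
    that group's own population (not the district total), scaled per 10,000."""
    return {
        group: round(crimes_by_group.get(group, 0) / population_by_group[group] * per, 1)
        for group in population_by_group
    }

# Hypothetical one-district figures; the study computes 80 such rates
# per district (10 crime groups x 8 age groups).
population = {"teens (12-17)": 2_000, "younger adults (18-30)": 8_000}
burglaries = {"teens (12-17)": 6, "younger adults (18-30)": 40}
rates = specific_crime_rates(burglaries, population)
```

Here the teens' rate (30.0 per 10,000) exceeds what a rate against total district population would suggest, which is the point of the group-specific denominator.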
Procedia PDF Downloads 171
415 Comparative Therapeutic Potential of 'Green Synthesized' Antimicrobials against Scalp Infections
Authors: D. Desai, J. Dixon, N. Jain, M. Datta
Abstract:
Microbial infections of the scalp produce symptomatic conditions associated with seborrhoeic dermatitis, folliculitis, furuncles, carbuncles, and ringworm. The main causative organisms in these scalp-based infections are bacteria such as S. aureus and P. aeruginosa and the fungus M. furfur. Allopathic treatment of these infections is available and efficient, but occasionally topical applications have been found to cause side effects. India is known as the botanical garden of the world and is considered the epicentre of the utilization of traditional drugs. Many treatments based on herb extracts are commonly used in India. It has been observed that treatment with ethnomedicines requires a higher dosage and a longer time period. Additionally, repeated applications are required to obtain the full efficacy of the treatment. An attempt has been made to combine traditional knowledge with nanotechnology to generate a proficient therapeutic against scalp infections. We have combined metallic nanoparticles with extracts from traditional medicines and propose to formulate an antimicrobial hair massager. Four herbs commonly used against scalp disorders were taken: Zingiber officinale (ginger), Allium sativum (garlic), Azadirachta indica (neem) leaves, and Citrus limon (lemon) peel. 30 g of dried homogenized powder was obtained and processed to yield aqueous and ethanolic extracts in a Soxhlet apparatus. Each extract was dried and reconstituted to obtain a working solution of 1 mg/ml. Phytochemical analysis of the obtained extracts was performed. Nanoparticle synthesis was mediated by incubating 1 mM silver nitrate with the extracts of the various herbs to obtain silver nanoparticles. The formation of the silver nanoparticles (AgNPs) was monitored using UV-Vis spectroscopy. The AgNPs thus obtained were centrifuged and dried, and were characterized by X-ray diffraction, scanning electron microscopy, and transmission electron microscopy.
The size of the AgNPs varied from 10-20 nm, and they were spherical in shape. P. aeruginosa was plated on nutrient agar, and comparative antibacterial activity was tested. Comparative antimicrobial potential was calculated for the extracts and the corresponding nanoconstructs. It was found that the AgNPs were more efficient than their aqueous and ethanolic counterparts, except in the case of C. limon. Statistical analysis was performed to validate the results obtained.
Keywords: ethnomedicine, nanoconstructs, scalp infections, Zingiber officinale
Procedia PDF Downloads 368