Search results for: soft sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2188

298 Offshore Wind Assessment and Analysis for South Western Mediterranean Sea

Authors: Abdallah Touaibia, Nachida Kasbadji Merzouk, Mustapha Merzouk, Ryma Belarbi

Abstract:

Accurate assessment and a better understanding of the wind resource distribution are the most important tasks for decision making before installing wind energy systems in a given region; hence our interest in the Algerian coastline and its Mediterranean Sea area. Despite its long coastline bordering the Mediterranean Sea, Algeria still has no strategy encouraging the development of offshore wind farms in its waters. The present work aims to estimate the offshore wind fields of the Algerian Mediterranean Sea based on 24 years of wind measurements (1995 to 2018) provided by seven observation stations: two stations in Spain, two in Italy, and three on the Algerian coast, at Annaba in the east, Algiers in the centre, and Oran in the west, with measurement time steps that vary from one station to another (30 min, 60 min, and 180 min). The idea is to use multiple measurement points to characterize this area in terms of wind potential by interpolating the average wind speed values between the available data, so as to approximate values at locations where no measurements are available because of the difficulty of installing masts in deep water. The study is organized as follows: first, a brief description of the studied area and its climatic characteristics is given. Then, the statistical properties of the recorded data are checked by evaluating wind histograms, direction roses, and average speeds using MATLAB programs. Finally, ArcGIS and MapInfo software were used to establish offshore wind maps, in order to better understand the wind resource distribution and to identify windy sites for wind farm installation and power management. 
The study pointed out that Cap Carbonara is the windiest site, with an average wind speed of 7.26 m/s at 10 m, inducing a power density of 902 W/m², followed by the site of Cap Caccia with 4.88 m/s, inducing a power density of 282 W/m². An average wind speed of 4.83 m/s occurs at the site of Oran, inducing a power density of 230 W/m². The results also indicated that the dominant wind direction, where the frequencies are highest, is the west for the site of Cap Carbonara, with 34%, an average wind speed of 9.49 m/s, and a power density of 1722 W/m². Then comes the site of Cap Caccia, where the prevailing wind direction is the north-west, with about 20% and 5.82 m/s, giving a power density of 452 W/m². The site of Oran comes in third place, with a dominant direction of north at 32%, an average wind speed of 4.59 m/s, and a power density of 189 W/m². It is also shown that the proposed method is both crucial for understanding the wind resource distribution, revealing windy sites over a large area, and effective for wind turbine micro-siting.
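The power densities quoted above grow with the cube of the wind speed, which is why site ranking must rest on the full speed distribution rather than the mean alone. As a hedged illustration (the abstract does not give the authors' exact procedure, and the Weibull parameters below are invented, not the station data), the mean power density of a measured speed series can be sketched as:

```python
import numpy as np

RHO = 1.225  # assumed air density at sea level, kg/m^3

def power_density(speeds):
    """Mean wind power density (W/m^2) of a measured speed series.

    Averages the cubed speeds rather than cubing the average speed:
    for variable winds the two differ by a large factor.
    """
    v = np.asarray(speeds, dtype=float)
    return 0.5 * RHO * np.mean(v ** 3)

# Hypothetical hourly speeds drawn from a Weibull distribution,
# a common model for wind regimes (shape k = 2, scale c = 8 m/s)
rng = np.random.default_rng(0)
v = rng.weibull(2.0, size=8760) * 8.0
pd_series = power_density(v)            # mean of cubes
pd_naive = 0.5 * RHO * np.mean(v) ** 3  # cube of mean: underestimates the resource
```

The cubic dependence explains why Cap Carbonara's 7.26 m/s mean corresponds to a far higher power density than sites only 2 to 3 m/s slower.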

Keywords: wind resources, Mediterranean Sea, offshore, ArcGIS, MapInfo, wind maps, wind farms

Procedia PDF Downloads 140
297 Life Cycle Datasets for the Ornamental Stone Sector

Authors: Isabella Bianco, Gian Andrea Blengini

Abstract:

The environmental impact related to ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements in machinery led to a higher exploitation of this natural resource and to greater international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or of limited significance because of the high proportion of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss). 
In particular, data about energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. The data were then processed with appropriate software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the compilation of accurate Life Cycle Inventory data aims to make ILCD-compliant datasets of the most significant processes and technologies of the ornamental stone sector available to researchers and stone experts.

Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact

Procedia PDF Downloads 228
296 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast

Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef

Abstract:

This paper presents a study to provide sufficient and reliable information for constructing a photovoltaic energy profile of the Birzeit University (BZU) campus based on the weather forecast. The developed photovoltaic energy profile helps to predict the energy yield of the photovoltaic systems from the weather forecast and hence helps in planning energy production and consumption. Two models are developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation model, to predict the irradiance for a clear day and a cloudy day, respectively. The adopted procedure for developing these models takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building at the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial photovoltaic system installed on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building are used to validate the output of a simulation model and to help refine its structure. Based on a comparison between a mathematical model, which calculates the Clear Sky Irradiance for the University's location, and two sets of accumulated measured data, it is found that the simulation system closely reproduces the output of the installed PV power station on clear days. However, these comparisons show a divergence between the expected and actual energy yields in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate irradiance prediction model was developed, the Cloud-Cover Radiation Model (CRM), which takes into consideration the weather factors that affect irradiance, such as relative humidity and cloudiness. Its mathematical formulas implement corrections that provide more accurate inputs to the simulation system. 
The results of the CRM show a very good match with the actual measured irradiance during a cloudy day. The developed photovoltaic profile helps in predicting the output energy yield of the photovoltaic system installed at the University campus based on the predicted weather conditions. For both models, the simulation and practical results are in very good agreement.
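The two-model approach above can be hedged into a minimal sketch. The abstract does not give the authors' equations, so the snippet substitutes two published formulas of the same type: the Haurwitz clear-sky model for global horizontal irradiance and the Kasten-Czeplak cloud-cover correction. The coefficients come from those sources, not from the BZU study:

```python
import math

def haurwitz_ghi(zenith_deg):
    """Clear-sky global horizontal irradiance, W/m^2 (Haurwitz model)."""
    cz = math.cos(math.radians(zenith_deg))
    if cz <= 0:
        return 0.0  # sun at or below the horizon
    return 1098.0 * cz * math.exp(-0.059 / cz)

def cloudy_ghi(zenith_deg, okta):
    """Cloud-cover correction after Kasten and Czeplak.

    okta: total cloud cover on the 0 (clear) to 8 (overcast) scale,
    the kind of quantity a weather forecast supplies.
    """
    return haurwitz_ghi(zenith_deg) * (1.0 - 0.75 * (okta / 8.0) ** 3.4)
```

Multiplying the predicted irradiance by the array's rated efficiency and area (with a temperature derating) would then give a forecast energy profile of the kind described.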

Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast

Procedia PDF Downloads 127
295 Applicability of Polyisobutylene-Based Polyurethane Structures in Biomedical Disciplines: Some Calcification and Protein Adsorption Studies

Authors: Nihan Nugay, Nur Cicek Kekec, Kalman Toth, Turgut Nugay, Joseph P. Kennedy

Abstract:

In recent years, polyurethane structures have been paving the way for elastomer usage in biology, human medicine, and biomedical application areas. Polyurethanes, which combine high oxidative and hydrolytic stability with excellent mechanical properties, are a particular focus, especially for implantable medical device applications such as cardiac assist devices. Recently, unique polyurethanes consisting of polyisobutylene soft segments and conventional hard segments, referred to as PIB-based PUs, have been developed with precise NCO/OH stoichiometry (∼1.05) to obtain PIB-based PUs with enhanced properties (i.e., tensile stress increased from ∼11 to ∼26 MPa and elongation from ∼350 to ∼500%). Static and dynamic mechanical properties were optimized by examining stress-strain graphs, self-organization and crystallinity (XRD) traces, rheological (DMA, creep) profiles, and thermal (TGA, DSC) responses. An annealing procedure was applied to the PIB-based PUs. Annealed PIB-based PU shows ∼26 MPa tensile strength, ∼500% elongation, and ∼77 Microshore hardness with excellent hydrolytic and oxidative stability. The surface character was examined with AFM and contact angle measurements. Annealed PIB-based PU exhibits greater segregation of the individual segments and higher surface hydrophobicity; annealing thus significantly enhances hydrolytic and oxidative stability by shielding the carbamate bonds with inert PIB chains. Given these improved surface and microstructure characteristics, greater effort was focused on analyzing protein adsorption and calcification profiles. In biomedical applications, especially cardiological implantations, protein adsorption onto polymeric heart valves is undesirable, since protein adsorption from blood serum is followed by platelet adhesion and subsequent thrombus formation. The protein adsorption character of PIB-based PU is examined by applying the Bradford assay in fibrinogen and bovine serum albumin solutions. 
Like protein adsorption, calcium deposition on heart valves is very harmful, because vascular calcification has been proposed to involve activation of osteogenic mechanisms in the vascular wall, loss of inhibitory factors, enhanced bone turnover, and irregularities in mineral metabolism. Calcium deposition on the films is characterized by incubating samples in simulated body fluid solution and examining SEM images and XPS profiles. PIB-based PUs are significantly more resistant to hydrolytic-oxidative degradation, protein adsorption, and calcium deposition than ElastEon™ E2A, a commercially available PDMS-based PU widely used for biomedical applications.

Keywords: biomedical application, calcification, polyisobutylene, polyurethane, protein adsorption

Procedia PDF Downloads 256
294 Modelling Flood Events in Botswana (Palapye) for Protecting Road Structures against Floods

Authors: Thabo M. Bafitlhile, Adewole Oladele

Abstract:

Botswana has long been affected by floods and is still experiencing this tragic event. Flooding occurs mostly in the North-West, North-East, and parts of the Central district due to the heavy rainfalls experienced in these areas. Torrential rains have destroyed homes and roads, flooded dams and fields, and destroyed livestock and livelihoods. Palapye, in the Central district, is one area that has been experiencing floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation, which has been exacerbated by poor or absent drainage structures. Since floods are a part of nature, they have existed and will continue to exist, causing further destruction. Furthermore, floods play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches, and other drainage facilities lack the capacity to deal with the current frequency of extreme flows. Future changes in the pattern of hydro-climatic events will have implications for the design and maintenance costs of roads, and increases in rainfall and severe weather events can raise the demand for emergency responses. Flood forecasting and warning are therefore prerequisites for the successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce the possible adverse effects of floods on the environment, including road structures. This paper therefore attempts to estimate the return periods associated with storms of different magnitudes from recorded historical rainfall depths using statistical methods. The method of annual maxima was used to select the data sets for the rainfall analysis. The Type 1 extreme value (Gumbel), Log Normal, and Log Pearson Type 3 distributions were all applied to the annual maximum series for the Palapye area to produce IDF curves. 
The Kolmogorov-Smirnov and Chi-Squared tests were used to confirm the appropriateness of the fitted distributions for the location, and the data do fit the distributions used to predict expected frequencies. This will be a beneficial tool for urgent flood forecasting and water resource administration, as drainage will be designed on the basis of the estimated flood events, helping to reclaim and protect road structures from the adverse impacts of floods.
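The annual-maxima procedure above can be illustrated for the Gumbel (Type 1 extreme value) case. The sketch below uses a method-of-moments fit and hypothetical rainfall depths, not the Palapye record, so the numbers are purely illustrative:

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of the Gumbel (EV Type 1) distribution.

    Returns (mu, beta): the location and scale parameters.
    """
    m, s = mean(annual_maxima), stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi
    mu = m - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, T):
    """Depth expected to be exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual maximum daily rainfall depths in mm (not the Palapye record)
maxima = [62, 75, 81, 58, 95, 70, 66, 110, 88, 73]
mu, beta = gumbel_fit(maxima)
design_depth_50yr = return_level(mu, beta, 50)  # feeds an IDF curve or culvert design
```

Repeating the calculation for several durations yields the depth-duration points from which the IDF curves mentioned above are drawn.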

Keywords: drainage, estimate, evaluation, floods, flood forecasting

Procedia PDF Downloads 366
293 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence

Authors: Garry Gorman, Nigel McKelvey, James Connolly

Abstract:

This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in the manner best suited to each individual’s learning preferences, while incentivising and rewarding the growth mindset behaviours of persistence, a mastery response to challenge, and challenge seeking. The research endeavours to measure mindset with before-and-after surveys (conducted nationally) and by recording growth mindset behaviour whilst a digital game is played. The study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can affect the mindset of a broad range of students. A particular focus will be to determine how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom when tasks are presented in the manner best suited to the individual. Whole Brain Learning will underpin this research and will be used as a framework to guide it in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. The research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are as follows: (i) before-and-after survey data to determine the grit scores and mindsets of the participants; (ii) the growth mindset data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge, and use of strategy; (iii) the AI data to guide PL. 
This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of CS. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.

Keywords: computer science education, artificial intelligence, growth mindset, pedagogy

Procedia PDF Downloads 83
292 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the semi-structured interview and open-ended questions for FFM-based personality assessments, designed with experts in the fields of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions together with self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions, the FFM personality constructs, and psychological distress, to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. 
Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data and depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and their level of psychological distress (phase 2).
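As a purely illustrative sketch of predicting trait scores from interview text (not the authors' pipeline, whose features and models the abstract does not describe), a bag-of-words representation can be regressed on self-report scores. The corpus, vocabulary, and scores below are invented:

```python
import numpy as np

def bow_features(texts, vocab):
    """Bag-of-words count matrix: one row per text, one column per vocabulary word."""
    index = {w: j for j, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            if word in index:
                X[i, index[word]] += 1
    return X

def fit_ridge(X, y, lam=1.0):
    """Ridge regression weights: solve (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Invented toy corpus: short interview answers paired with hypothetical
# self-report extraversion scores (1-5); the real study has 425 interviews
texts = ["i love parties and meeting people",
         "i prefer quiet evenings reading alone",
         "meeting new people energizes me"]
scores = np.array([4.5, 1.5, 4.0])
vocab = ["parties", "people", "quiet", "alone", "meeting"]
w = fit_ridge(bow_features(texts, vocab), scores)
```

Validating such predictions against the questionnaire scores is exactly the relationship the study sets out to examine.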

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 76
291 The Role of Interest Groups in Foreign Policy: Assessing the Influence of the 'Pro-Jakarta Lobby' in Australia and Indonesia's Bilateral Relations

Authors: Bec Strating

Abstract:

This paper examines the ways in which domestic politics and pressure, generated through lobbying, public diplomacy campaigns, and other tools of soft power, contribute to the formation of the short-term and long-term national interests, priorities, and strategies of states in their international relations. It primarily addresses the conceptual problems regarding the kinds of influence that lobby groups wield in foreign policy and how this influence might be assessed. Scholarly attention has been paid to influential foreign policy lobbies and interest groups, particularly in the area of US foreign policy. Less attention has been paid to how lobby groups might influence the foreign policy of a middle power such as Australia. This paper examines some of the methodological complexities in developing and conducting a research project that can measure the nature and influence of lobbies on foreign affairs priorities and activities. It uses Australian foreign policy, in the context of the historical bilateral relationship with Indonesia, as a case study for considering the broader issue of domestic influences on foreign policy. Specifically, this paper takes the so-called ‘pro-Jakarta lobby’ as an example of an interest group. The term ‘pro-Jakarta lobby’ is used in media commentary and scholarship to describe an amorphous collection of individuals who have sought to influence Australian foreign policy in favour of Indonesia. It was originally applied to a group of Indonesia experts at the Australian National University in the 1980s but expanded to include journalists, think tanks, and key diplomats. The concept of the ‘pro-Jakarta lobby’ was developed largely through criticism of Australia’s support for Indonesian sovereignty over East Timor and West Papua. Pro-independence supporters were integral in creating the ‘lobby’ through their rhetoric and criticisms about influence on Australian foreign policy. 
In these critical narratives, the ‘pro-Jakarta lobby’ supported a realist approach to relations with Indonesia during the years of President Suharto’s regime, one that placed the appeasement of Indonesia above values of democracy and human rights. The lobby was viewed as integral in embedding a form of ‘foreign policy exceptionalism’ towards Indonesia in Australian policy-making circles. However, little critical and scholarly attention has been paid to the nature, aims, strategies, and activities of the ‘pro-Jakarta lobby’. This paper engages with methodological issues of foreign policy analysis: what was the ‘pro-Jakarta lobby’? Why was it considered more successful than other activist groups in shaping policy? And how can its influence on Australia’s approach to Indonesia be tested against the other contingent factors shaping policy? In addressing these questions, the case study assists in addressing a broader scholarly concern about the capacity of collectives or individuals to shape and direct the foreign policies of states.

Keywords: foreign policy, interest groups, Australia, Indonesia

Procedia PDF Downloads 339
290 Changing Employment Relations Practices in Hong Kong: Cases of Two Multinational Retail Banks since 1997

Authors: Teresa Shuk-Ching Poon

Abstract:

This paper sets out to examine the changing employment relations practices in Hong Kong’s retail banking sector over a period of more than ten years. The major objective of the research is to examine whether, and to what extent, local institutional influences have overshadowed global market forces in shaping strategic management decisions and employment relations practices in Hong Kong, with a view to drawing implications for comparative employment relations studies. In examining the changing pattern of employment relations, this paper finds the industrial relations strategic choice model (Kochan, McKersie and Cappelli, 1984) an appropriate framework for the study. Four broad aspects of employment relations are examined: work organisation and job design; staffing and labour adjustment; performance appraisal, compensation and employee development; and labour unions and employment relations. Changes in the employment relations practices of two multinational retail banks operating in Hong Kong are examined in detail. The retail banking sector in Hong Kong is chosen as a case because it is a highly competitive segment of the financial services industry, very much susceptible to global market influences. This is well illustrated by the fact that Hong Kong was hit hard by both the Asian and the Global Financial Crises. The sector is also subject to increasing institutional influences, especially since the return of Hong Kong’s sovereignty to the People’s Republic of China (PRC) in 1997. The case study method is used as it is a research design able to capture the complex institutional and environmental context which is the subject matter examined in the paper. The paper concludes that the operation of retail banks in Hong Kong has been subject to both institutional and global market changes at different points in time. 
Information obtained from the two cases examined tends to support the conclusion that the relative significance of institutional as against global market factors in influencing retail banks’ operations and their employment relations practices depends very much on the time at which these influences emerged and on their scale and intensity. The case study highlights the importance of placing comparative employment relations studies within a context where employment relations practices in different countries, or in different regions or cities within the same country, can be examined and compared over a longer period of time to make the comparison more meaningful.

Keywords: employment relations, institutional influences, global market forces, strategic management decisions, retail banks, Hong Kong

Procedia PDF Downloads 397
289 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are the few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies, in terms of the tradeoff between performance and latency, by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this multiplication is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. 
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
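The straggler-coding idea described above (recover the product once any sufficiently large subset of workers finishes) can be sketched with a plain MDS-style code over the reals. This is a hedged illustration of the general technique only, not the paper's PSGPD/SGPD construction, and it ignores the privacy and security layers:

```python
import numpy as np

def encode_blocks(X, n, k, rng):
    """Combine k row blocks of X into n coded blocks via a random generator matrix.

    A random real matrix is MDS almost surely, so any k of the n coded
    results suffice to recover X @ Y: up to n - k stragglers can be ignored.
    """
    blocks = np.split(X, k)           # X's row count must be divisible by k
    G = rng.standard_normal((n, k))   # generator matrix
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return G, coded

def decode(G, results, idx):
    """Recover the k uncoded block products from any k worker results."""
    stacked = np.stack([results[i] for i in idx])  # k coded products
    flat = np.linalg.solve(G[idx, :], stacked.reshape(len(idx), -1))
    return np.vstack(list(flat.reshape(len(idx), *results[idx[0]].shape)))

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((6, 4)), rng.standard_normal((4, 5))
G, coded = encode_blocks(X, n=5, k=3, rng=rng)
results = [c @ Y for c in coded]       # each "worker" multiplies its coded block
W = decode(G, results, idx=[0, 2, 4])  # workers 1 and 3 straggled
assert np.allclose(W, X @ Y)
```

Here the recovery threshold is k = 3 out of n = 5 workers; structured constructions such as polynomial codes achieve the same effect with better thresholds under storage constraints.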

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 119
288 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

In order to build a matching pattern for each class correspondence of an ontology, it is necessary to specify a set of attribute correspondences across the two corresponding classes by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This problem makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of the potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared with respect to their attribute values, so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence. 
This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the corresponding clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used for comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered as potential attributes for a matching. Using clustering reduces the size of the potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, every element of each class in the source ontology would have to be compared with every element of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
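The K-medoids step can be sketched over a precomputed distance matrix. The value features below (mean string length, share of numeric values) are invented for illustration; the abstract does not specify the paper's actual feature set:

```python
import numpy as np

def k_medoids(D, k, iters=50):
    """Alternating k-medoids on a precomputed distance matrix D.

    Deterministic first-k initialisation keeps the sketch reproducible;
    a production version would use PAM BUILD or random restarts.
    """
    medoids = np.arange(k)
    labels = np.zeros(D.shape[0], dtype=int)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)    # assign points to nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:                          # keep old medoid if cluster empties
                within = D[np.ix_(members, members)].sum(axis=1)
                new[j] = members[np.argmin(within)]   # most central member becomes medoid
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Hypothetical value features for five attributes: (mean string length, numeric share)
feats = np.array([[4.0, 0.9], [5.0, 1.0], [20.0, 0.0], [22.0, 0.1], [21.0, 0.0]])
D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)  # pairwise distances
medoids, labels = k_medoids(D, k=2)
```

Only attribute pairs falling in corresponding clusters would then be compared, which is the workload reduction the approach targets.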

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 244
287 Shear Strength Envelope Characteristics of Lime-Treated Clays

Authors: Mohammad Moridzadeh, Gholamreza Mesri

Abstract:

The effectiveness of lime treatment of soils has been commonly evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of clay particles must be realized to increase drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue with time under the highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Also, fully softened and residual strength measurements are used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The lime effect on frictional resistance is examined using samples of clays with different mineralogy and characteristics, which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties have been performed to measure the fully softened shear strength. To measure residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted. This way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide minimum frictional resistance, a condition applicable to reactivated landslides and to part of first-time landslides. The Brenna clay, the highly plastic lacustrine clay of Lake Agassiz causing slope instability along the banks of the Red River, is one of the soil samples used in this study.
The Brenna Formation, characterized as a uniform, soft to firm, dark grey, glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study, one of the main sources of slope instability in the Harris County Flood Control District (HCFCD), is the Beaumont clay. The shear strengths of untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.

Keywords: Brenna clay, friction resistance, lime treatment, residual

Procedia PDF Downloads 155
286 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. A nonlinear time history analysis is performed. Artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. For the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
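The combination of component fragility curves into system fragility curves admits a compact sketch. The abstract does not state the combination rule, so a series-system assumption is used here as an illustration; the function names and PGA grid are ours.

```python
# A PGA grid like the study's input set (0.1 g to 1.0 g in 0.05 g steps).
pgas = [round(0.1 + 0.05 * i, 2) for i in range(19)]

def exceedance_prob(demand_capacity_ratios):
    """Empirical P(demand/capacity >= 1) over ground-motion runs at one PGA level."""
    return sum(r >= 1.0 for r in demand_capacity_ratios) / len(demand_capacity_ratios)

def system_fragility(component_probs):
    """Series-system combination: the system is damaged if any component is."""
    survive = 1.0
    for p in component_probs:
        survive *= 1.0 - p
    return 1.0 - survive
```

Evaluating `system_fragility` over the component probabilities at each PGA level yields the system fragility curve; under the series assumption the system curve always lies at or above every component curve.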

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 433
285 Position of the Constitutional Court of the Russian Federation on the Matter of Restricting Constitutional Rights of Citizens Concerning Banking Secrecy

Authors: A. V. Shashkova

Abstract:

The aim of the present article is to analyze the position of the Constitutional Court of the Russian Federation on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy when control activities are carried out. The methodological basis of the article is the dialectical scientific method applied to socio-political, legal and organizational processes, guided by the principles of development, integrity and consistency. The consistency analysis method is used in researching the object of the analysis. Several specific research methods are also used: the formal-logical method and the comparative legal method serve to compare understandings of the 'secrecy' concept. The article seeks to identify the root of the problem and to give recommendations for its solution. The result of the present research is the author's conclusion on the necessity of political will to improve Russian legislation so that it complies with the provisions of the Constitution. It is also necessary to establish a clear balance between the constitutional rights of the individual and the limits of these rights when various control activities are carried out by public authorities. Attempts by banks to 'overdo' compliance with anti-money laundering law under the threat of severe sanctions by the regulators have actually led to failures in normal economic activity. As a result, individuals face huge problems with clearing-based payments, in addition to problems with cash withdrawals. The Bank of Russia sets the requirements for banks' execution of Federal Law No. 115-FZ too high. Political will is badly needed here. Likewise, recent changes in Russian legislation, e.g. allowing banks to unilaterally refuse to open accounts, have simplified banking activities in the country. The article focuses on different theoretical approaches towards the concept of 'secrecy'.
The author gives an overview of the practices of Spain, Switzerland and the United States of America on the matter of restricting the constitutional rights of citizens to inviolability of professional and banking secrecy when control activities are carried out. The Constitutional Court of the Russian Federation, relying on the Constitution of the Russian Federation, has its own understanding of the issue, which should be supported by further legislative development in the Russian Federation.

Keywords: constitutional court, restriction of constitutional rights, bank secrecy, control measures, money laundering, financial control, banking information

Procedia PDF Downloads 182
284 The Regulation of Alternative Dispute Resolution Institutions in Consumer Redress and Enforcement: A South African Perspective

Authors: Jacolien Barnard, Corlia Van Heerden

Abstract:

Effective and accessible consensual dispute resolution and in particular alternative dispute resolution, are central to consumer protection legislation. In this regard, the Consumer Protection Act 68 of 2008 (CPA) of South Africa is no exception. Due to the nature of consumer disputes, alternative dispute resolution (in theory) is an effective vehicle for the adjudication of disputes in a timely manner avoiding overburdening of the courts. The CPA sets down as one of its core purposes the provision of ‘an accessible, consistent, harmonized, effective and efficient system of redress for consumers’ (section 3(1)(h) of the CPA). Section 69 of the Act provides for the enforcement of consumer rights and provides for the National Consumer Commission to be the Central Authority which streamlines, adjudicates and channels disputes to the appropriate forums which include Alternative Dispute Resolution Agents (ADR-agents). The purpose of this paper is to analyze the regulation of these enforcement and redress mechanisms with particular focus on the Central Authority as well as the ADR-agents and their crucial role in successful and efficient adjudication of disputes in South Africa. The South African position will be discussed comparatively with the European Union (EU) position. In this regard, the European Union (EU) Directive on Alternative Dispute Resolution for Consumer Disputes (2013/11/EU) will be discussed (The ADR Directive). The aim of the ADR Directive is to solve contractual disputes between consumers and traders (suppliers or businesses) regardless of whether the agreement was concluded offline or online or whether or not the trader is situated in another member state (Recitals 4-6). The ADR Directive provides for a set of quality requirements that an ADR body or entity tasked with resolving consumer disputes should adhere to in member states which include regulatory mechanisms for control. 
Transparency, effectiveness, fairness, liberty and legality are all requirements for a successful ADR body and are discussed within Chapter III of the Directive. Chapters III and IV govern the importance of information and co-operation. This includes information exchanged between ADR bodies and the European Commission (EC), but also between ADR bodies or entities and the national authorities enforcing legal acts on consumer protection, and traders. (In South Africa, the National Consumer Tribunal, Provincial Consumer Protectors and industry ombuds come to mind.) All of these have a responsibility to keep consumers informed. Ultimately, the paper aims to provide recommendations on the success of the current South African position in light of the comparative position in Europe, and to highlight the importance of proper regulation of these redress and enforcement institutions.

Keywords: alternative dispute resolution, consumer protection law, enforcement, redress

Procedia PDF Downloads 226
283 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results. The acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test comparisons of the mean values of the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
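The two automated steps can be illustrated with generic complexometric titration arithmetic. This is a hedged sketch, not the USP monograph procedure or the validated spreadsheets themselves; the tolerance value, function names, and figures are invented for the example.

```python
ZN_MW = 65.38   # g/mol, atomic weight of zinc

def edta_molarity(zinc_mass_g, titrant_volume_ml):
    """Standardise EDTA against a known mass of zinc (1:1 complexation)."""
    return (zinc_mass_g / ZN_MW) / (titrant_volume_ml / 1000.0)

def zinc_content_mg(molarity_edta, titrant_volume_ml):
    """Zinc found in a sample aliquot from the EDTA volume consumed."""
    return molarity_edta * (titrant_volume_ml / 1000.0) * ZN_MW * 1000.0

def replicate_check(results, tolerance_pct=2.0):
    """Flag replicate sets whose spread from the mean exceeds the tolerance,
    mirroring the validity checks built into the spreadsheets."""
    mean = sum(results) / len(results)
    return all(abs(r - mean) / mean * 100.0 <= tolerance_pct for r in results)
```

Encoding the formulae once, with a built-in replicate check, is what removes the per-analysis transcription and calculation errors the abstract describes.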

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 167
282 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e. free text. We therefore adopted an unsupervised approach, in which no annotated training data are necessary, and developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. A two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. We applied unsupervised learning, the machine learning task of inferring a function to describe hidden structure from unlabeled data, to the extracted datasets to develop classifiers, using the WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467 packages, which automatically classify the text into the mentioned sets.
The developed classifiers were tested on a large data set of biofilm literature which showed that the unsupervised approach proposed is promising as well as suited for a semi-automatic labeling of the extracted relations. The entire information was stored in the relational database which was hosted locally on the server. The generated biofilm vocabulary and genes relations will be significant for researchers dealing with biofilm research, making their search easy and efficient as the keywords and genes could be directly mapped with the documents used for database development.
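A toy version of unsupervised, vocabulary-driven labelling conveys the idea of the classification step. The category keyword sets below are invented for the example; the real system derives its vocabulary from a metathesaurus and NLP tooling.

```python
# Invented category vocabularies standing in for the metathesaurus keywords.
CATEGORIES = {
    "growth and development": {"growth", "development", "formation", "matrix"},
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation effects": {"radiation", "uv", "irradiation"},
}

def classify(text):
    """Assign a document to the category with the largest keyword overlap,
    with no annotated training data involved."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & vocab) for cat, vocab in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

Because the labels come from vocabulary overlap rather than labelled examples, the approach scales to a large corpus and supports the semi-automatic labelling the abstract reports.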

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 165
281 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and to control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For a more efficient application of control charts, estimators robust against contaminations, which may exist in Phase I, are required. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions for the process location parameter.
Phase I efficiency of proposed estimators and Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics both under normality and against diffuse-localized and symmetric-asymmetric contaminations using 50,000 Monte Carlo simulations on MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combination of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
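One of the proposed dispersion estimators, the average distance to the median (ADM), can be sketched in a minimal robust Xbar construction. This is an illustration of the idea only: the unbiasing constant `c` and the subgroup data are assumptions, not the paper's calibrated values.

```python
import statistics

def adm(sample):
    """Average absolute distance to the sample median."""
    m = statistics.median(sample)
    return sum(abs(x - m) for x in sample) / len(sample)

def robust_limits(phase1_subgroups, c=1.25, L=3.0):
    """Center line and L-sigma limits for Phase II subgroup means, built from
    subgroup medians (location) and ADM (dispersion) instead of mean and s.
    The constant c is an illustrative stand-in for a tabulated unbiasing factor."""
    n = len(phase1_subgroups[0])
    center = statistics.median([statistics.median(g) for g in phase1_subgroups])
    sigma = c * statistics.median([adm(g) for g in phase1_subgroups])
    half = L * sigma / n ** 0.5
    return center - half, center, center + half
```

Because both the location and scale summaries are medians of subgroup statistics, a single outlying Phase I subgroup barely moves the limits, which is exactly the robustness property being compared against the conventional mean/standard-deviation chart.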

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 186
280 Mechanical Properties and Antibiotic Release Characteristics of Poly(methyl methacrylate)-based Bone Cement Formulated with Mesoporous Silica Nanoparticles

Authors: Kumaran Letchmanan, Shou-Cang Shen, Wai Kiong Ng

Abstract:

Postoperative implant-associated infections in soft tissues and bones remain a serious complication in orthopaedic surgery, leading to impaired healing, re-implantation, prolonged hospital stays and increased cost. Drug-loaded implants with sustained release of antibiotics at the local site are a current research interest, aiming to reduce the risk of post-operative infections and osteomyelitis and thus to minimize the need for follow-up care and increase patient comfort. However, the improved drug release of drug-loaded bone cements is usually accompanied by a loss in mechanical strength, which is critical for weight-bearing bone cement. Recently, more attempts have been undertaken to develop techniques that enhance antibiotic elution while preserving the mechanical properties of the bone cements. The present study investigates the potential influence of the addition of mesoporous silica nanoparticles (MSN) on the in vitro drug release kinetics of gentamicin (GTMC), along with the mechanical properties of the bone cements. Simplex P was formulated with MSN and loaded with GTMC by direct impregnation. Meanwhile, Simplex P with a water-soluble porogen (xylitol), Simplex P with a high loading of GTMC, and the commercial bone cement CMW Smartset GHV were used as controls. MSN-formulated bone cements increase the release of GTMC by 3-fold, with a cumulative release of more than 46%, compared with the other control groups. Furthermore, a sustained release could be achieved for two months. The loaded nano-sized MSN with uniform pore channels build up an effective nano-network path in the bone cement that facilitates the diffusion and extended release of GTMC. Compared with the formulations using xylitol and high GTMC loading, the incorporation of MSN shows no detrimental effect on biomechanical properties, with no significant changes in the mechanical properties compared with the original bone cement.
After drug release for two months, the bending modulus of the MSN-formulated bone cements is 4.49 ± 0.75 GPa and the compression strength is 92.7 ± 2.1 MPa (similar to the compression strength of Simplex P: 93.0 ± 1.2 MPa). The unaffected mechanical properties of the MSN-formulated bone cements were due to the unchanged microstructure of the bone cement, whereby more than 98% of the MSN remains in the matrix and supports the bone cement structure. In contrast, large portions of extra voids can be observed for the formulations using xylitol and high drug loading after the drug release study, which caused compressive strengths below the ASTM F451 and ISO 5833 minimum of 70 MPa. These results demonstrate the potential applicability of MSN-functionalized poly(methyl methacrylate)-based bone cement as a highly efficient, sustained and local drug delivery system with good mechanical properties.

Keywords: antibiotics, biomechanical properties, bone cement, sustained release

Procedia PDF Downloads 254
279 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels

Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi

Abstract:

Introduction: Exercise has contradictory effects on the immune system. Glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and as a substrate in the synthesis of nucleotides and amino acids, and is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. The literature suggests that physical training can increase cytokines in the athlete's body; glutamine may counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation at 0.3 g/kg/day after practice. The resistance-training program consisted of nine exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press-down) performed four times per week. Participants performed 3 sets of 10 repetitions at 60–75% of 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol, and levels of immunoglobulins (IgA, IgG, IgM) were evaluated pre- and post-test. Results: The results showed that four weeks of resistance training, with and without glutamine, caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in body fat (BF).
VO2max also increased in both the exercise group (P < 0.05) and the exercise-with-glutamine group (P < 0.001), and in both groups a significant reduction in IgG (P < 0.05) was observed. However, no significant differences were observed in the levels of cortisol, IgA or IgM in any of the groups. No significant change was observed in either parameter in the control group, and no significant differences were observed between the groups. Discussion: Alterations in hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of power exercise on the immune system.

Keywords: glutamine, resistance training, immunoglobulins, cortisol

Procedia PDF Downloads 475
278 Study of Open Spaces in Urban Residential Clusters in India

Authors: Renuka G. Oka

Abstract:

From chowks to streets to verandahs to courtyards, residential open spaces are very significantly placed in traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, and multi-functionality, reflect and respond to the patterns of human interaction. These spaces also tend to be quite well utilized. On the other hand, it is common to see an imbalanced utilization of open spaces in recently planned residential clusters. This may be due to a lack of activity generators around them, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity of a systematic study of current residential open spaces. This exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces, in order to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wider sense, incorporating all the un-built fabric around; they thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are studied. A threefold inquiry is attempted. The first relates to identifying and determining the effects of various physical functions, like space organization, size, hierarchy, and thermal and optical comfort, on the performance of residential open spaces. The second part sets out to understand socio-cultural variations in the values, lifestyles, and beliefs which determine the activity choices and behavioral preferences of users of the respective residential open spaces. The third inquiry further observes the application of these research findings to the design process, to derive meaningful and qualitative design advice.
The study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study, supported by an in-depth search of the available literature. The findings are worked into design solutions which integrate the open space systems with the overall design process for residential clusters. Open spaces in residential areas present great complexities, both in terms of their use patterns and in the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.

Keywords: open spaces, physical and social determinants, residential clusters, use patterns

Procedia PDF Downloads 147
277 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
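The GLUE idea, uniform sampling of source proportions and retention of the "behavioural" sets whose predicted tracer signature fits the target sediment, can be sketched for a three-source linear mixing model. The tracer concentrations, tolerance, and sample counts below are invented for illustration, not the Mehran River data.

```python
import random

random.seed(0)

# Mean concentrations of two tracers in three sub-basin sources (invented).
sources = [(10.0, 4.0), (6.0, 9.0), (2.0, 1.0)]   # western, central, eastern
target = (5.0, 4.0)                               # measured sediment mixture

def predict(props):
    """Linear mixing model: tracer signature of a source-proportion vector."""
    return tuple(sum(p * s[i] for p, s in zip(props, sources)) for i in range(2))

def glue(n=100_000, tol=0.5):
    """Retain 'behavioural' proportion sets whose prediction fits the target."""
    behavioural = []
    for _ in range(n):
        a, b = sorted((random.random(), random.random()))
        props = (a, b - a, 1.0 - b)               # uniform draw on the simplex
        if all(abs(predict(props)[i] - target[i]) <= tol for i in range(2)):
            behavioural.append(props)
    return behavioural
```

Taking the per-source minimum and maximum over the behavioural sets yields uncertainty intervals of the same form as the 1-42% / 0.5-30% / 55-84% ranges reported above.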

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 73
276 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from statistical downscaling. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 hydro-meteorological stations for this study, which included characterization of the watersheds, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime; a Genetic Algorithm and Powell (GAP) optimization approach and subsequent calibration were used to obtain parameter sets that satisfactorily reproduced the observed streamflow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both scenarios. These warming trends are higher for maximum than for minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon.
Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region), whereas summer discharge increases in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. Changes in the flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
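The bias-correction step mentioned above is often implemented as simple linear scaling before the GCM output drives a conceptual rainfall-runoff model: multiplicative scaling for precipitation and an additive offset for temperature. A minimal sketch, with hypothetical function and variable names, is:

```python
import numpy as np

def bias_correct(gcm_hist, obs, gcm_future, variable="precip"):
    """Linear-scaling bias correction of GCM output.

    Multiplicative for precipitation (preserves zeros and non-negativity),
    additive for temperature. In practice the factor/offset is usually
    computed per calendar month; a single period is used here for brevity.
    """
    corrected = np.asarray(gcm_future, dtype=float).copy()
    if variable == "precip":
        factor = np.mean(obs) / np.mean(gcm_hist)
        corrected *= factor          # scale future precipitation
    else:
        offset = np.mean(obs) - np.mean(gcm_hist)
        corrected += offset          # shift future temperature
    return corrected

# Example: GCM underestimates historical precipitation by 20%.
future_p = bias_correct(np.array([4.0, 4.0]),   # GCM, historical period
                        np.array([5.0, 5.0]),   # observations, same period
                        np.array([4.0, 8.0]))   # GCM, future period
```

The corrected series can then be fed to the rainfall-runoff model exactly as observed forcing would be.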

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 341
275 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids

Authors: Ayalew Yimam Ali

Abstract:

T-shaped microchannels are used to mix both miscible and immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction can be poor because of micro-scale laminar flow, particularly for two miscible, high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel when a low-frequency acoustic transducer oscillates and induces an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure used in this study was fabricated with CNC machine cutting tools, which were used to create a microchannel mold with a 3D trapezoidal triangular spine along the T-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with 30° sharp-edge tip angles and a 0.3 mm sharp-edge tip depth, were machined from PMMA (polymethyl methacrylate) glass on an advanced CNC machine, and the channel was fabricated in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the junction microchannel, using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study the flow patterns and mixing enhancement of the high-viscosity miscible fluids for different structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes.
The streaming velocity and vorticity fields show vorticity magnitudes up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using grayscale pixel intensity in MATLAB software. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the T-channel; the degree of mixing improved greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible, high-viscosity fluids, otherwise limited by laminar transport phenomena, was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates around the entrance junction mixing zone.
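The grayscale-based degree of mixing can be quantified in several ways; one common intensity-variance definition, given here as an illustrative sketch rather than the exact MATLAB procedure used in the study, compares the pixel-intensity standard deviation of the mixed image against a fully segregated reference:

```python
import numpy as np

def mixing_index(mixed_img, unmixed_img):
    """Degree of mixing M = 1 - sigma / sigma_0.

    sigma   : std. dev. of grayscale intensity in the mixed image
    sigma_0 : std. dev. in a fully segregated (unmixed) reference image
    M -> 1 for perfect mixing, M -> 0 for no mixing.
    """
    sigma = np.std(np.asarray(mixed_img, dtype=float))
    sigma0 = np.std(np.asarray(unmixed_img, dtype=float))
    return 1.0 - sigma / sigma0

# Example: segregated reference (half dye, half clear) vs. uniform image.
unmixed = np.concatenate([np.zeros(100), np.ones(100)])
mixed = np.full(200, 0.5)
```

In practice the index is computed over a cropped region of interest downstream of the junction, at each amplitude, frequency and flow rate tested.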

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 10
274 Antenatal Monitoring of Pre-Eclampsia in a Low Resource Setting

Authors: Alina Rahim, Joanne Moffatt, Jessica Taylor, Joseph Hartland, Tamer Abdelrazik

Abstract:

Background: In 2011, 15% of maternal deaths in Uganda were due to hypertensive disorders (pre-eclampsia and eclampsia). The majority of these deaths are avoidable with optimum antenatal care. The aim of the study was to evaluate how antenatal monitoring of pre-eclampsia was carried out in a low resource setting and to identify barriers to best practice as recommended by the World Health Organisation (WHO), as part of a 4th-year medical student External Student Selected Component field trip. Method: Women admitted to hospital with pre-eclampsia in rural Uganda (Villa Maria and Kitovu Hospitals) over a year-long period were identified using the maternity register and antenatal record book. It was not possible to obtain notes for all cases identified on the maternity register; therefore, a total of thirty sets of notes were reviewed. The management was recorded and compared with Ugandan national guidelines and WHO recommendations. Additional qualitative information on routine practice was gathered by interviewing staff members from the obstetric and midwifery teams. Results: From the records available, all patients in this sample were managed according to WHO recommendations during labour. The rate of Caesarean section as a mode of delivery was noted to be high in this group of patients: 56% at Villa Maria and 46% at Kitovu. Antenatally, two WHO recommendations were not routinely met: aspirin prophylaxis and calcium supplementation. This was due to a lack of resources and poor attendance at antenatal clinic, leading to poor detection of high-risk patients. Medical management of pre-eclampsia varied between individual patients; overall, 93.3% of cases complied with Ugandan national guidelines. Two patients were treated with diuretics, which is against WHO guidance. Discussion: Antenatal monitoring of pre-eclampsia is important in reducing severe morbidity, long-term disability and mortality amongst mothers and their babies.
Poor attendance at antenatal clinic is a barrier to healthcare in low-income countries, and raising awareness among women of the importance of these visits should be encouraged. The majority of cases reviewed in this sample were treated according to Ugandan national guidelines. It is recommended to commence aspirin prophylaxis for women at high risk of developing pre-eclampsia and to create detailed guidelines for Uganda, which would allow standardisation of care country-wide.

Keywords: antenatal monitoring, low resource setting, pre-eclampsia, Uganda

Procedia PDF Downloads 226
273 Probing Mechanical Mechanism of Three-Hinge Formation on a Growing Brain: A Numerical and Experimental Study

Authors: Mir Jalil Razavi, Tianming Liu, Xianqiao Wang

Abstract:

Cortical folding, characterized by convex gyri and concave sulci, has an intrinsic relationship to the brain’s functional organization. Understanding the mechanism behind the brain’s convoluted patterns can provide useful clues into normal and pathological brain function. During development, the cerebral cortex experiences a noticeable expansion in volume and surface area accompanied by tremendous tissue folding, which may be attributed to many possible factors. Despite decades of endeavors, the fundamental mechanism and key regulators of this crucial process remain incompletely understood. Therefore, to take even a small step toward unraveling the mystery of brain folding, we present a mechanical model of 3-hinge formation in a growing brain, a problem that has not been addressed before. A 3-hinge is defined as a gyral region where three gyral crests (hinge-lines) join. How and why the brain prefers to develop 3-hinges has not been well explained. We therefore offer a theoretical and computational explanation of the mechanism of 3-hinge formation in a growing brain and validate it with experimental observations. In the theoretical approach, the dynamic behavior of brain tissue is examined and described with the aid of a large-strain, nonlinear constitutive model. The derived constitutive model is used in the computational model to define the material behavior. Since the theoretical approach cannot predict the evolution of complex cortical convolution after instability, nonlinear finite element models are employed to study 3-hinge formation and the secondary morphological folds of the developing brain. Three-dimensional (3D) finite element analyses of a multi-layer soft tissue model, which mimics a small piece of the brain, are performed to investigate the fundamental mechanism of consistent hinge formation in cortical folding.
Results show that after a certain amount of cortical growth, the mechanical model becomes unstable and then, through the formation of creases, enters a new configuration with lower strain energy. With further growth of the model, the shallow creases develop into convoluted patterns and then into 3-hinge patterns. Simulation results for 3-hinges show good agreement with experimental observations from macaque, chimpanzee and human brain images. These results have great potential to reveal fundamental principles of brain architecture and to produce a unified theoretical framework that convincingly explains the intrinsic relationship between cortical folding and 3-hinge formation. Such a fundamental understanding would potentially shed new insights into the diagnosis of many brain disorders such as schizophrenia, autism, lissencephaly and polymicrogyria.
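The large-strain kinematics behind such growth-driven buckling models is commonly expressed through a multiplicative decomposition of the deformation gradient. A standard neo-Hookean form from the differential-growth literature, given here for orientation and not necessarily the authors' exact constitutive law, is:

```latex
F = F_e F_g, \qquad F_g = g\,I \quad (\text{isotropic tangential growth},\; g \ge 1),
```

```latex
W = \frac{\mu}{2}\left(J_e^{-2/3}\,\operatorname{tr}\!\left(F_e F_e^{\mathsf{T}}\right) - 3\right)
    + \frac{K}{2}\left(J_e - 1\right)^2, \qquad J_e = \det F_e,
```

where $\mu$ and $K$ are the shear and bulk moduli of the layer. Only the elastic part $F_e$ stores energy; instability (crease or hinge initiation) sets in once the growth-induced compressive strain in the stiffer cortical layer exceeds a critical value controlled by the cortex-to-core stiffness ratio.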

Keywords: brain, cortical folding, finite element, three hinge

Procedia PDF Downloads 233
272 Collaboration versus Cooperation: Grassroots Activism in Divided Cities and Communication Networks

Authors: R. Barbour

Abstract:

Peace-building organisations act as a network of information for communities. Fieldwork highlighted that grassroots organisations and activists may cooperate with each other in their peace-building actions; however, they would not collaborate. Within two divided societies, Nicosia in Cyprus and Jerusalem in Israel, organisations and activists distinguish between activities that are more ‘cooperative’ than ‘collaborative’. This theme became apparent during informal conversations and semi-structured interviews with various members of the activist communities. The idea needs further exploration, as these distinctions could affect the efficiency of peace-building activities within divided societies. Civil societies within landscapes that are divided both physically and socially play an important role in conflict resolution. How organisations and activists interact with each other can be highly influential with regard to peace-building activities, and working together sets a positive example for divided communities. Cooperation may be considered a primary level of interaction between civil society organisations (CSOs): at the beginning of a working relationship, organisations cooperate over basic agendas, parallel power structures and a shared focus, which lead to the same objective. Over time, in some instances, owing to factors such as funding and greater trust and understanding within the relationship, processes progressed in more collaborative ways. NGOs and activist groups are evidently highly independent and focus on their own agendas before coming together over shared issues. At this time, there appears to be more collaboration among CSOs and activists in Nicosia than in Jerusalem. The aims and objectives of agendas also influence how organisations work together.
In recent years, Nicosia, and Cyprus in general, have perhaps shifted their focus from peace-building initiatives to environmental issues, which have become new-age reconciliation topics. Civil society does not automatically indicate like-minded organisations; however, solidarity within social groups can create ties that bring people and resources together. In unequal societies, such as those in Nicosia and Jerusalem, it is these ties that cut across groups and are essential for social cohesion. Societies are collections of social groups: individuals who have come together over common beliefs. These groups in turn shape identities and determine the values and structures within societies. At many different levels and stages, social groups work together through cooperation and collaboration. These structures in turn can open up networks to less powerful or excluded groups, with the aim of producing the social cohesion that may contribute to social stability and economic welfare over an extended period.

Keywords: collaboration, cooperation, grassroots activism, networks of communication

Procedia PDF Downloads 154
271 The Essential but Uncertain Role of the Vietnamese Association of Cities of Vietnam in Promoting Community-Based Housing Upgrading

Authors: T. Nguyen, H. Rennie, S. Vallance, M. Mackay

Abstract:

Municipal associations, also called unions, leagues or federations of municipalities, have been established worldwide to represent the interests and needs of urban governments in the face of increasing urban issues. In 2008, the Association of Cities of Vietnam (ACVN) joined the Asian Coalition for Community Action (ACCA) program and introduced the community-based upgrading approach to help Vietnamese cities address urban upgrading issues. While this approach has been implemented in only a small number of Vietnamese cities and its replication has faced certain challenges, it is worth exploring how the Association of Cities of Vietnam played its role in implementing some reportedly successful projects. This paper responds to this inquiry and presents results extracted from the author’s PhD study, whose general objective is to critically examine how social capital dimensions (i.e., bonding, bridging and linking) were formed, mobilized and maintained in a local, collective, community-based upgrading process. Methodologically, the study used this categorization of bonding, bridging and linking capital to explore and confirm how social capital operated in the real context of a community-based upgrading process, particularly in Vietnam. To do this, the study conducted two exploratory, qualitative case studies of housing projects in the Friendship neighbourhood (Vinh city) and the Binh Dong neighbourhood (Tan An city). This paper presents the findings of the Friendship neighbourhood case study, focusing on the role of the Vietnamese municipal association in forming, mobilizing and maintaining bonding, bridging and linking capital in a community-based upgrading process. The findings highlight the essential but uncertain role of ACVN, an organization with a hybrid legitimacy status, in such a process. The results improve our understanding both practically and theoretically.
Practically, the results offer insights into the performance of a municipal association operating in the transitioning socio-political context of Vietnam. Theoretically, the paper questions the necessity of categorizing social capital dimensions (i.e., bonding, bridging and linking) by suggesting a holistic approach to social capital for urban governance issues within the Vietnamese context and perhaps elsewhere.

Keywords: bonding capital, bridging capital, municipal association, linking capital, social capital, housing upgrading

Procedia PDF Downloads 146
270 Numerical Simulation on Two Components Particles Flow in Fluidized Bed

Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi

Abstract:

The flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance are always fluidized together with a fluidization medium. The material and the medium differ in many properties, such as density, size and shape; these factors make the dynamic process more complex and limit experimental research. Numerical simulation is an efficient way to describe gas-solid flow in a fluidized bed, and one of the most popular methods is CFD-DEM, i.e., computational fluid dynamics coupled with the discrete element method. Particle shapes are simplified as spheres in most studies. Although sphere-shaped particles simplify the particle calculations, the effects of other shapes are then disregarded. In practical applications, however, two-component systems in fluidized beds contain both spherical and non-spherical particles, so the two-component flow of spherical and non-spherical particles needs to be studied. In this paper, the mixing flow of molded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach, improved to suit non-spherical particles. Cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning that the small fictitious particles are gathered but not combined with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β via the volume fractions of the cylinder-shaped particles, α_cld, and the sphere-shaped particles, α_sph.
In a computational cell, the void fraction ε can be expressed as ε = 1 - α_cld·α_fic - α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM method, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements merge with each other. A soft-sphere model was used to obtain the contact forces between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere particles. The model (size = 1 × 0.15 × 0.032 m³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The cell lengths of the CFD and DEM grids are 1 mm and 2 mm, respectively. The superficial gas velocity was varied between models: 1.0 m/s, 1.5 m/s and 2.0 m/s. The simulation results were compared with experimental results. The particles moved in a regular fountain-like pattern, and the effect of superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results prove that the present work provides an effective approach to simulating the flow of two-component particles.
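The drag-coefficient switch described above, the Ergun equation for dense regions and the Wen and Yu correlation for dilute ones (the standard Gidaspow treatment), can be sketched as follows; the 0.8 void-fraction crossover and all parameter names are the usual textbook conventions, not values taken from this study:

```python
def drag_coefficient_beta(eps, rho_g, mu_g, u_slip, d_p):
    """Interphase momentum-exchange coefficient beta (Gidaspow switch).

    eps    : void fraction of the cell, eps = 1 - a_cld*a_fic - a_sph
    rho_g  : gas density [kg/m^3]
    mu_g   : gas dynamic viscosity [Pa s]
    u_slip : gas-particle slip speed [m/s]
    d_p    : particle diameter [m]
    """
    if eps < 0.8:
        # Ergun (1952): dense regime.
        return (150.0 * (1 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1 - eps) * rho_g * abs(u_slip) / d_p)
    # Wen & Yu (1966): dilute regime.
    re_p = eps * rho_g * abs(u_slip) * d_p / mu_g
    if re_p < 1000.0:
        cd = 24.0 / re_p * (1 + 0.15 * re_p ** 0.687)
    else:
        cd = 0.44
    return 0.75 * cd * eps * (1 - eps) * rho_g * abs(u_slip) / d_p * eps ** (-2.65)
```

With air-like gas properties, the dense-regime branch yields a much larger β than the dilute branch for the same slip velocity, which is the intended behavior of the switch.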

Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow

Procedia PDF Downloads 323
269 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Every year, on average, Japan experiences more than 10 tropical cyclones that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models are procedures composed of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that address the repair need for local buildings exposed to the hazard; and 4) a financial module, addressing policy conditions, that estimates the resulting losses. The events module comprises events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc.
The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions, followed by vulnerability models, lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.
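The four-module chain described above (events, hazard, vulnerability, financial) can be illustrated with a deliberately simplified toy pipeline; every functional form below is a hypothetical placeholder for exposition, not KCC's actual model:

```python
def hazard_intensity(magnitude, distance_km):
    """Toy attenuation: intensity decays with distance from the event."""
    return magnitude * 10.0 / (1.0 + distance_km)

def damage_ratio(intensity):
    """Toy vulnerability curve: repair need as a fraction of replacement value."""
    return min(1.0, max(0.0, (intensity - 2.0) / 8.0))

def financial_loss(gross_loss, deductible, limit):
    """Financial module: apply simple policy conditions."""
    return min(max(gross_loss - deductible, 0.0), limit)

def expected_annual_loss(events, sites, deductible=0.0, limit=float("inf")):
    """events: (annual_rate, magnitude) pairs from the stochastic event set.
    sites: (distance_km, replacement_value) pairs from the exposure model."""
    eal = 0.0
    for rate, magnitude in events:
        event_loss = sum(damage_ratio(hazard_intensity(magnitude, dist)) * value
                         for dist, value in sites)
        eal += rate * financial_loss(event_loss, deductible, limit)
    return eal

# One event with a 1% annual rate hitting a single 1 M exposure 9 km away.
eal = expected_annual_loss([(0.01, 7.0)], [(9.0, 1_000_000.0)])
```

A production model replaces each placeholder with calibrated region-specific attenuation, building-class damage functions, and full policy terms, but the modular event-to-loss flow is the same.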

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 265