Search results for: large mammals
6191 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM 3-meter-diameter tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The well-studied excavation sequences made the analysis possible for this study on a very soft clay; otherwise, obtaining convergence in the numerical analysis would be impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
Procedia PDF Downloads 80
6190 High Aspect Ratio SiO2 Capillary Based on Silicon Etching and Thermal Oxidation Process for Optical Modulator
Authors: Nguyen Van Toan, Suguru Sangu, Tetsuro Saito, Naoki Inomata, Takahito Ono
Abstract:
This paper presents the design and fabrication of an optical window for an optical modulator toward image sensing applications. The optical window consists of micrometer-order SiO2 capillaries (porous solid) that can modulate transmitted light intensity by moving liquid in and out of the porous solid. A high optical transmittance of the optical window can be achieved due to refractive index matching when the liquid penetrates into the porous solid. Otherwise, its light transmittance is lower because of light reflection and scattering by air holes and capillary walls. Silicon capillaries fabricated by a deep reactive ion etching (DRIE) process are completely oxidized to form the SiO2 capillaries. Therefore, high aspect ratio SiO2 capillaries can be achieved based on silicon capillaries formed by the DRIE technique. Large compressive stress of the oxide causes bending of the capillary structure, which is reduced by optimizing the design of the device structure. The large stress of the optical window can be released via thin supporting beams. A 7.2 mm x 9.6 mm optical window area, toward full integration with the image sensor format, is successfully fabricated, and its optical transmittance is evaluated with and without inserted liquids (ethanol and matching oil). The achieved modulation range is approximately 20% to 35% with and without liquid penetration in the visible region (wavelength range from 450 nm to 650 nm).
Keywords: thermal oxidation process, SiO2 capillaries, optical window, light transmittance, image sensor, liquid penetration
Procedia PDF Downloads 489
6189 Assessing Suitability and Acceptability of Development Plans and Town Planning Scheme in Small and Medium Town: A Case of Gujarat
Authors: Priyanshu Sharma
Abstract:
Urban development mechanisms have evolved over the years in India, and various planning models and tools have been adopted by different states. Large cities have been able to make and implement plans with varying degrees of success. However, it has been observed that these mechanisms face challenges in gaining momentum in small and medium towns. Gujarat has very robust legislation that empowers planning authorities to prepare development plans (DP) and town planning schemes (TPS). The DP-TPS planning methods are quite popular for large cities in Gujarat. However, it has been observed that in smaller towns these methods of plan preparation are facing severe agitation. Recently, development authorities of many small towns like Himmatnagar, Nadiad, and Junagadh have faced serious protests from local residents. This is because of the large amount of land deduction under the provisions of DP and TPS, and such opposition has been increasing in Gujarat since 2012. This study aims to understand in detail the reasons for agitation against the plans prepared by smaller towns. It will further try to see whether the current framework of urban planning (DP and TPS) is really suitable for these towns. After understanding the development concerns and background, the aim and objectives of the study were outlined. Aim: To evaluate the suitability and acceptability of the current urban development mechanism for small and medium towns. Objectives: (i) To review the GTPUD Act and identify the provisions related to small and medium towns (ii) To understand the preparation process of the development plan and town planning scheme and the issues related to it (iii) To understand the issues raised by different stakeholders with respect to the plans, because of which the plans and authorities were agitated against (iv) To find out possible options through which these plans can be made suitable and acceptable to the stakeholders. The approach of this study is mainly qualitative, with the intention of understanding the time frame and process of preparation of the development plan and town planning scheme and the issues related to it. On the basis of the literature study, three towns were selected, and a detailed questionnaire was prepared for the stakeholders (development authorities and local residents), covering the time taken in the preparation of the DP and TPS, the issues faced during the process, and the parties involved. Lastly, the study looks into aspects of the land value of original plots and readjusted plots, concluding the argument on whether the TP scheme model has really worked in small and medium towns, because land deduction under the TP scheme is allowed up to 50% as per the act and there is no distinct provision for small and medium towns under the act, so it is hard to justify this to smaller towns where market values have not changed over the years. After analyzing the issues and reasons behind the agitation against the DP and TPS in these small and medium towns, broader recommendations are given which can make these plans acceptable and suitable for the stakeholders.
Keywords: development plans, medium towns, small towns, town planning schemes
Procedia PDF Downloads 155
6188 Injection of Bradykinin in Femoral Artery Elicits Cardiorespiratory Reflexes Involving Perivascular Afferents in Rat Models
Authors: Sanjeev K. Singh, Maloy B. Mandal, Revand R.
Abstract:
The physiology of baroreceptors and chemoreceptors present in the large blood vessels of the heart is well known in the regulation of cardiorespiratory functions. Since large blood vessels and peripheral blood vessels are of the same mesodermal origin, involvement of the latter in the regulation of the cardiorespiratory system is expected. The role of perivascular nerves in mediating cardiorespiratory alterations produced after intra-arterial injection of a nociceptive agent (bradykinin) was examined in urethane-anesthetized male rats. Respiratory frequency, blood pressure, and heart rate were recorded for 30 min after the retrograde injection of bradykinin/saline in the femoral artery. In addition, paw edema was determined and water content was expressed as a percentage of wet weight. Injection of bradykinin produced immediate tachypnoeic, hypotensive and bradycardiac responses of short latency (5-8 s), favoring the neural mechanisms involved. Injection of an equal volume of saline did not produce any responses and served as a time-matched control. Paw edema was observed in the ipsilateral hind limb. Pretreatment with diclofenac sodium significantly attenuated the bradykinin-induced responses and also blocked the paw edema. Ipsilateral femoral and sciatic nerve sectioning attenuated the bradykinin-induced responses significantly, indicating the origin of the responses from the local vascular bed. Administration of bradykinin in the segment of an artery produced reflex cardiorespiratory changes by stimulating the perivascular nociceptors, involving prostaglandins. This is a novel study exhibiting the role of peripheral blood vessels in the regulation of the cardiorespiratory system.
Keywords: vasosensory reflex, cardiorespiratory changes, nociceptive agent, bradykinin, VR1 receptors
Procedia PDF Downloads 145
6187 Evaluation of Current Methods in Modelling and Analysis of Track with Jointed Rails
Authors: Hossein Askarinejad, Manicka Dhanasekar
Abstract:
In railway tracks, two adjacent rails are either welded or connected using bolted jointbars. In recent years, the number of bolted rail joints has been reduced by the introduction of longer rail sections and by welding the rails at the location of some joints. However, a significant number of bolted rail joints remain in railways around the world, as they are required to allow for rail thermal expansion or to provide electrical insulation in some sections of track. Regardless of the quality and integrity of the jointbar and bolt connections, the bending stiffness of jointbars is much lower than that of the rail, generating large deflections under the train wheels. In addition, the gap or surface discontinuity on the rail running surface leads to the generation of high wheel-rail impact forces at the joint gap. These fundamental weaknesses have caused a high rate of failure in track components at the location of rail joints, resulting in significant economic and safety issues in railways. The mechanical behavior of railway track at the location of joints has not been fully understood due to various structural and material complexities. Although there have been some improvements in the methods for analysis of track at jointed rails in recent years, there are still uncertainties concerning the accuracy and reliability of the current methods. In this paper, the current methods for analysis of track with a rail joint are critically evaluated, and the new advances and recent research outcomes in this area are discussed. This research is part of a large granted project on rail joints which was defined by the Cooperative Research Centre (CRC) for Rail Innovation with support from the Australian Rail Track Corporation (ARTC) and Queensland Rail (QR).
Keywords: jointed rails, railway mechanics, track dynamics, wheel-rail interaction
Procedia PDF Downloads 348
6186 Brazil's Olympian Tragedy: Searching for Citizenship in Vila Autodromo
Authors: Rachel K. Cremona
Abstract:
Forty years ago, Vila Autodromo was a small fishing settlement in southwest Rio de Janeiro. By 2012, Vila Autodromo had established itself as a working class neighborhood – certainly not a slum, but nonetheless designated as a 'favela' as a consequence of its history as an illegal settlement that was thus never provided with public services. Vila Autodromo sits on a large lagoon, adjacent to the Olympic Park being constructed for the 2016 Olympic Games to be held in Rio, and looks out over the expensive high-rise condominiums that have sprouted across the water. In 2009, when Rio submitted its bid for the Olympic Games, there were approximately 900 families that called Vila Autodromo home, and the original plans for the Games clearly show their homes remaining in place. Today, only a handful of these homes remain. This paper will utilize the case study of Vila Autodromo to examine the broader issue of favelas in 21st century Rio de Janeiro. While race and poverty have become synonymous with Brazil's inegalitarian social order – and personified by the thousands of favelas scattered in and around large cities like Rio and Sao Paulo – much less attention has been given to the political status of the nation's invisible majority. In particular, this research will examine the question of citizenship and argue that the most fundamental problem of inequality in Brazil is not simply a product of history, race and social order, but more specifically a problem of 'personhood'. The political marginalization of Brazil's poor does not simply reinforce their social marginalization, it institutionalizes it in a way that makes it almost impossible to escape. The story of Vila Autodromo captures this problem in a way that not only illustrates the clear (though ambiguous) role of the state in the perpetuation of Brazil's underclass, but also the human resilience that it has fostered.
Keywords: citizenship, poverty, displacement, favela
Procedia PDF Downloads 435
6185 Roll Forming Process and Die Design for a Large Size Square Tube
Authors: Jinn-Jong Sheu, Cang-Fu Liang, Cheng-Hsien Yu
Abstract:
This paper proposes the cold roll forming process and the die design methods for a 400 mm by 400 mm square tube with 16 mm thickness. The tubular blank made by cold roll forming is 508 mm in diameter. The square tube roll forming process was designed considering the layout of rolls and the compression ratio distribution for each stand. The final tube corner radius and the edge straightness at the front end of the tube are to be controlled according to the tube specification. A five-stand forming design using four rolls at each stand was proposed to establish the base reference of square tube roll forming quality. Different numbers of passes and roll designs were proposed and compared to the base design in order to assess the feasibility of increasing the pass number to improve the square tube quality. The proposed roll forming processes were simulated using FEM analysis. The thickness variations of the corner and the edge areas were examined. The maximum loads and the torques of each stand were calculated to study the power consumption of the roll forming machine. The simulation results showed that the square tube thickness variations and concavity of the edge are acceptable against the JIS tube specifications for the base design, but the maximum loads and torques are very high. By changing the layout and the number of rolls, it was possible to obtain better tube geometry and decrease the maximum load and torque of each stand. This paper has shown the feasibility of designing the roll forming process and the layout of dies using FEM simulation. The obtained information is helpful for roll forming machine design for large size square tube making.
Keywords: cold roll forming, FEM analysis, roll forming die design, tube roll forming
Procedia PDF Downloads 309
6184 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 linolenic essential fatty acids in the ratio of 3:1, which is a rare and most desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40 - 80) °C, pressure (200 - 350) bar, flow rate (5 - 15) g/min, particle size (0.430 - 1.015) mm and amount of co-solvent (0 - 10) % of solvent flow rate, through central composite design (CCD). CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resampled dataset to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze these datasets to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), which is repeated 32 times. Bootstrap is a frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, which was repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For ω-6 linoleic acid concentration, the mean value was approx. 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for ω-3 linolenic acid concentration, the mean was observed as 22.5 through both resampling methods. Variance exhibits the spread of the data from its mean; a greater value of variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 linolenic acid (ranging from 16.71 to 26.2 %). Further, the low value of standard deviation (approx. 1 %), low standard error of the mean (< 0.8) and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All the estimator values of the variation coefficient, standard deviation and standard error of the mean are found within the 95 % confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
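As an illustration of the jackknife and bootstrap scheme described above, the following is a minimal Python sketch. The 32 sample values generated here are placeholders standing in for the CCD concentration measurements (they are not the study data), and the helper name is ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-in for the 32 omega-6 linoleic acid concentrations (%)
# from the CCD experiments; real values would come from the SFE runs.
sample = rng.normal(58.5, 1.0, size=32)

def summarize(estimates):
    """Mean, standard deviation, variation coefficient and
    standard error of the mean over the resampled estimates."""
    est = np.asarray(estimates)
    return {
        "mean": est.mean(),
        "std": est.std(ddof=1),
        "cv": est.std(ddof=1) / est.mean(),
        "sem": est.std(ddof=1) / np.sqrt(len(est)),
    }

# Jackknife: drop one observation at a time (sample size 31, repeated 32 times).
jackknife_means = [np.delete(sample, i).mean() for i in range(len(sample))]

# Bootstrap: resample with replacement (sample size 32, repeated 100 times).
bootstrap_means = [rng.choice(sample, size=len(sample), replace=True).mean()
                   for _ in range(100)]

print("jackknife:", summarize(jackknife_means))
print("bootstrap:", summarize(bootstrap_means))
```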
Procedia PDF Downloads 139
6183 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample-high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights ( > 350 m) are exceeding the scale ( ≤ 120 m) for which reliable design information exists, and that modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity, such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlights several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicates that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved and that the shape of the failure envelope is ultimately determined by the strength of rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
Procedia PDF Downloads 159
6182 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program
Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun
Abstract:
A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a large loss of life, or to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on an accident. This study aims to analyze the contribution of individual accident variables to accident results, based on the accurate reconstruction of traffic accidents using PC-Crash's Multi-Body, which is an accident reconstruction program, and the simulation of each scenario. The Multi-Body (MB) system of the PC-Crash accident reconstruction program is used for multi-body accident reconstruction, which shows motions in diverse directions that were not approached previously. The MB system designs and reproduces a body form, which shows realistic motions, using several bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' that happened in November 2017, this study conducted a simulation of the freight truck cargo drop accident and analyzed the contribution of individual major accident variables. Then, on the basis of the driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg) and was not fully equipped with anchoring equipment that could prevent a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were the major accident variables, and the accident contribution analysis results of the individual variables are as follows. When the freight truck only obeyed the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%. When the freight truck only obeyed the cargo load limit, the scattering distance of the oil containers decreased by 5%, and the number of dropped oil containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance of the oil containers fell by 38%, and the number of dropped oil containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the dispersion of accident damage; in the case of a truck that did not allow a fall of cargo, a different type of accident occurred when it was driven too fast and carrying an excessive cargo load, and when the freight truck obeyed the speed limit and cargo load limit, there was the lowest possibility of causing an accident.
Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system
Procedia PDF Downloads 198
6181 A Clinical Study of Tracheobronchopathia Osteochondroplastica: Findings from a Large Chinese Cohort
Authors: Ying Zhu, Ning Wu, Hai-Dong Huang, Yu-Chao Dong, Qin-Ying Sun, Wei Zhang, Qin Wang, Qiang Li
Abstract:
Background and study aims: Tracheobronchopathia osteochondroplastica (TO) is an uncommon disease of the tracheobronchial system that leads to narrowing of the airway lumen by cartilaginous and/or osseous submucosal nodules. The aim of this study is to perform a detailed review of this rare disease in a large cohort of patients from China with TO proven by fiberoptic bronchoscopy. Patients and Methods: A retrospective chart review was performed on 41,600 patients who underwent bronchoscopy in the Department of Respiratory Medicine of Changhai Hospital between January 2005 and December 2012. Cases of TO were identified based on characteristic features during bronchoscopic examination. Results: 22 cases of bronchoscopic TO were identified, among whom one-half were male, and the mean age was 47.45 ± 10.91 years. The most frequent symptoms at presentation were chronic cough (n=14) and increased sputum production (n=10). Radiographic abnormalities were observed in 3/18 patients, and findings on computed tomography consistent with TO, such as beaded intraluminal calcifications and/or increased luminal thickenings, were observed in 18/22 patients. Patients were classified into the following categories based on the severity of bronchoscopic findings: Stage I (n=2), Stage II (n=6) and Stage III (n=14). Bronchoscopic improvement observed in 2 patients administered inhaled corticosteroids suggested that resolution of this disease is possible. Conclusions: TO is a benign disease with slow progression, which could be roughly divided into 3 stages on the basis of the characteristic endoscopic features and histopathologic findings. Chronic inflammation was thought to be more important than the other existing plausible hypotheses in the course of TO. Inhaled corticosteroids might have some impact on patients at Stage I/II.
Keywords: airway obstruction, bronchoscopy, etiology, Tracheobronchopathia osteochondroplastica (TO), treatment
Procedia PDF Downloads 462
6180 Influence of Genetic Counseling in Family Dynamics in Patients with Deafness in Merida, Yucatán, Mexico
Authors: Damaris Estrella Castillo, Zacil ha Vilchis Zapata, Leydi Peraza Gómez
Abstract:
Hearing loss is an etiologically heterogeneous condition, where almost 60% is genetic in origin, 20% is due to environmental factors, and 20% has unknown causes. However, it is now known that the gene GJB2, which encodes the connexin 26 protein, accounts for a large percentage of non-syndromic genetic hearing loss, and variants in this gene have been identified as a common cause of hereditary hearing loss in many populations. The literature reports that knowing the etiology of deafness helps improve family functioning, but in low-income countries this is difficult. It is therefore difficult to uphold the right of families to know about the genetic risk in future pregnancies, as well as to determine with certainty whether family members are carriers or affected. In order to assess the impact of genetic counseling and family functionality, 100 families with at least one child with profound hearing loss were evaluated by specialists in audiology, clinical genetics and psychology. Targeted mutation analysis for one of the two known large deletions upstream of the GJB2/GJB6 genes (35delG; and including GJB2 regulatory sequences and GJB6) was performed in patients with a diagnosis of non-syndromic hearing loss. Genetic counseling was given to all parents and primary caregivers, and the family APGAR test was applied before and after the counseling. We analyzed a total of 300 members (children, parents) to determine the presence of the GJB2 gene mutation. Twelve patients (carriers and affected) from 5 different families were positive for the mutation. After genetic counseling, the subsequent family APGAR testing showed that 14% perceived their families as functional, 62% as moderately functional and 24% as dysfunctional. This shows the importance of genetic counseling in the perception of family function, which can directly impact the quality of life of these families.
Keywords: family dynamics, deafness, APGAR, counseling
Procedia PDF Downloads 642
6179 Microwave Single Photon Source Using Landau-Zener Transitions
Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri
Abstract:
As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a 'sweet-spot' where the two energy levels have minimal separation. Moreover, these qubits have fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet-spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition and multiple photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation as well as quantum communication.
Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing
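For context, the standard textbook Landau-Zener expression (not quoted in the abstract; the notation and conventions here are ours) gives the probability of the non-adiabatic transition that excites the qubit during a gate-voltage sweep through the sweet spot:

```latex
P_{\mathrm{LZ}} \;=\; \exp\!\left(-\,\frac{2\pi\,\Delta^{2}}{\hbar\,\bigl|\tfrac{d}{dt}\bigl(\varepsilon_{1}(t)-\varepsilon_{2}(t)\bigr)\bigr|}\right)
```

where 2Δ is the minimum level splitting at the sweet spot and the denominator is the sweep rate of the diabatic (charge-state) energy difference, set by how quickly the gate voltage is ramped. A fast ramp makes the exponent small and drives P_LZ toward unity, which is the near-deterministic excitation the abstract describes.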
Procedia PDF Downloads 96
6178 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example
Authors: Xiaoqiang Fu, Baihao Li
Abstract:
In 1895, the failure in the Sino-Japanese War prompted the trend of comprehensive and systematic study of Western patterns in China. In urban planning and construction, an urban reform movement sprang up slowly, which aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During the movement, the Chinese traditional city initiated a process of modern urban planning for its modernization. Meanwhile, the traditional planning morphology and system started to disintegrate; on the contrary, Western forms and technology became the paradigm. Therefore, the improvement of existing cities became the prototype of urban planning in early modern China. Currently, research on the movement mainly concentrates on large cities, concessions, railway hub cities and some special cities resembling those. However, systematic research on the large number of traditional small and medium-sized cities has remained blank up to now. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as the research object. First of all, the criteria for small and medium-sized cities are based on the administrative levels of general office and cities at the county level. Secondly, the suitability of taking Southern Jiangsu as the research object is considered. The southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu, and also one of the most economically developed and most urbanized regions in China. As the most developed agricultural area in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, geographical advantage and a powerful economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry. Furthermore, the strong business atmosphere promoted widespread urban improvement practices, which were incomparable to those of other regions. Meanwhile, the demonstration of Shanghai, Zhenjiang, Suzhou and other port cities became the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including the subjects, objects, laws, technologies and the influencing political and social factors, etc. At last, this paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, with a fusion of introduction and endophytism.
Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China
Procedia PDF Downloads 259
6177 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project lies in the steady growth of deviant behavior statistics among Moscow citizens. During recent years, the socioeconomic health, wealth and life expectancy of Moscow residents have been regularly growing, but crime and drug addiction have also grown seriously. Another serious problem in Moscow has been the economic stratification of the population: the cost of identical residential areas differs by up to 2.5 times. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and reasons for the growth of deviant behavior in Moscow. The main project objective is finding the links between urban environment quality and the dynamics of citizens' deviant behavior in regional and municipal aspects using statistical research methods and GIS modeling. The conducted research allowed: 1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; 2) to describe the reasons for increasing crime, drug addiction, alcoholism and suicide tendencies among the city population; 3) to develop a classification of city districts based on the crime rate; 4) to create a statistical database containing the main indicators of Moscow population deviant behavior in 2010-2015, including information regarding crime level, alcoholism, drug addiction and suicides; 5) to present statistical indicators that characterize the dynamics of Moscow population deviant behavior in conditions of expansion of the city territory; 6) to analyze the main sociological theories and factors of deviant behavior for concretization of the deviation types; 7) to consider the main theoretical statements of city sociology devoted to the reasons for deviant behavior in megalopolis conditions. To explore the level of differentiation of deviant behavior factors, a questionnaire was worked out, and a sociological survey involving more than 1000 people from different districts of the city was conducted. The sociological survey allowed the study of socio-economic and psychological factors of deviant behavior. It also included Moscow residents' open-ended answers regarding the most pressing problems in their districts and the reasons for wishing to leave their place of residence. The results of the sociological survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, a large number of illegal migrants and homeless people, the nearness of large transport hubs and stations on the territory, ineffective work of the police, alcohol availability and drug accessibility, a low level of psychological comfort for Moscow citizens, and a large number of building projects.
Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 191
6176 Validation of an Acuity Measurement Tool for Maternity Services
Authors: Cherrie Lowe
Abstract:
The TrendCare Patient Dependency System is currently utilized by a large number of Maternity Services across Australia, New Zealand and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for Women in Labour, and Postnatal Mothers and Babies. This paper will present the findings of the validation study. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare Acuity System were an accurate reflection of the care required by Women and Babies, and to obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of Maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian Maternity services and a large tertiary Maternity service in Singapore. Standardized data collection forms and timing devices were used to collect Midwife contact times with Women and Babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between actual timed Midwife/Mother/Baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of TrendCare acuity category timings matched the actual timings recorded for Midwifery care. 12.5% (3) of TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of Labour Ward TrendCare categories matched actual timings for Midwifery care. The actual times recorded for assistance given to New Zealand independent Midwives in the Labour Ward showed a significant deviation from previous studies, demonstrating the need for additional time allocations in TrendCare. Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variations in models of care and length of stay in Maternity units have increased Midwifery workloads on the night shift. The level of assistance provided by the core Labour Ward staff to the independent Midwife has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare Maternity categories, additional acuity indicators were developed, and times for assisting independent Midwives were increased. The updated TrendCare version was delivered to Maternity services in 2014.
Keywords: maternity, acuity, research, nursing workloads
Procedia PDF Downloads 377
6175 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g. supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution that is based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e. importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
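To make the data-model argument concrete, the sketch below is a purely hypothetical illustration (it is not SPARK's or The Ark's actual schema). A document-oriented NoSQL store can hold one subject's genotypes in a single record, whereas a fully normalized relational layout needs one row per subject-SNP pair, which is where the costly bulk inserts come from:

```python
# Hypothetical document-style record for one subject's GWAS genotypes,
# as a document store (e.g. a NoSQL database) might hold it. All names
# and values are illustrative placeholders, not SPARK's schema.
subject_document = {
    "subject_id": "S000123",
    "chip": "example_snp_chip_v1",      # assumed chip identifier
    "genotypes": {                      # SNP id -> called genotype
        "rs0000001": "AA",
        "rs0000002": "AG",
        "rs0000003": "GG",
        # ... hundreds of thousands of SNPs imported as one document
    },
}

# The equivalent fully normalized relational layout needs one row per genotype:
normalized_rows = [
    (subject_document["subject_id"], snp, call)
    for snp, call in subject_document["genotypes"].items()
]
# Importing a whole GWAS cohort this way means hundreds of millions of
# individual inserts, the bottleneck the abstract attributes to
# normalized relational structures.
```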
Procedia PDF Downloads 269
6174 In Situ Volume Imaging of Cleared Mice Seminiferous Tubules Opens New Window to Study Spermatogenic Process in 3D
Authors: Lukas Ded
Abstract:
Studying tissue structure and histogenesis in the natural, 3D context is a challenging but highly beneficial process. In contrast to the classical approach of physical tissue sectioning and subsequent imaging, it enables the study of the relationships of individual cellular and histological structures in their native context. Recent developments in tissue clearing approaches and microscopic volume imaging/data processing enable the application of these methods also in the areas of developmental and reproductive biology. Here, using the CLARITY tissue procedure and 3D confocal volume imaging, we optimized the protocol for clearing, staining and imaging of mouse seminiferous tubules isolated from the testes without a cardiac perfusion procedure. Our approach enables high magnification and fine resolution axial imaging of the whole diameter of the seminiferous tubules, with potentially unlimited lateral length of imaging. Hence, large continuous pieces of the seminiferous tubule can be scanned and digitally reconstructed for the study of single tubule seminiferous stages using nuclear dyes. Furthermore, antibodies and various molecular dyes can be applied for molecular labeling of individual cellular and subcellular structures, and the resulting 3D images can greatly increase our understanding of the spatiotemporal aspects of seminiferous tubule development and sperm ultrastructure formation. Finally, our newly developed algorithms for 3D data processing enable massively parallel processing of large numbers of individual cell and tissue fluorescent signatures and the building of robust spermatogenic models under physiological and pathological conditions.
Keywords: CLARITY, spermatogenesis, testis, tissue clearing, volume imaging
Procedia PDF Downloads 135
6173 A Descriptive Study on Micro Living and Its Importance over Large Houses by Understanding Various Scenarios and Case Studies
Authors: Belal Neazi
Abstract:
'Larger Houses Consume More Resources' – both in construction and during operation. The most important aspect of smaller homes is that they use less electricity and fuel for construction and maintenance. Here, an urban interpretation of the contemporary minimal existence movement is explained. In an attempt to restrict urban decay and to encourage inner-city renewal, the Tiny House principles are interpreted as alternative ways of dwelling in urban neighbourhoods. These tiny houses are usually quite different from each other in interior planning, but almost similar in size. The disadvantage of large homes became apparent when people were asked to vacate because they were not able to pay their massive mortgages. This made them reconsider their housing situation and discover the ideas of minimalism and the general rising inclination towards environmental awareness that serve as the basis for the tiny house movement. One of the largest benefits of inhabiting a tiny house is the decrease in carbon footprint; it also increases social behaviour and freedom, and it is better with respect to environmental concerns, financial concerns, and the desire for more time and freedom. Examples of tiny house villages which are sustaining homeless populations, and the use of different reclaimed materials for the construction of these tiny houses, are explained in the paper. It is proposed in the paper that these houses will reflect diversity while offering an alternative model for the rehabilitation of decaying row-homes and the renewal of fading communities. The core objective is to design small or micro spaces for the economically backward people of the place and increase their social behaviour and freedom, which is also better with respect to environmental concerns, financial concerns, and the desire for more time and freedom.
Keywords: city renewal, environmental concern, micro-living, tiny house
Procedia PDF Downloads 182
6172 Large-scale GWAS Investigating Genetic Contributions to Queerness Will Decrease Stigma Against LGBTQ+ Communities
Authors: Paul J. McKay
Abstract:
Large-scale genome-wide association studies (GWAS) investigating genetic contributions to sexual orientation and gender identity are largely lacking and may reduce stigma experienced in the LGBTQ+ community by providing an underlying biological explanation for queerness. While there is a growing consensus within the scientific community that genetic makeup contributes – at least in part – to sexual orientation and gender identity, there is a marked lack of genomics research exploring polygenic contributions to queerness. Based on recent (2019) findings from a large-scale GWAS investigating the genetic architecture of same-sex sexual behavior, and various additional peer-reviewed publications detailing novel insights into the molecular mechanisms of sexual orientation and gender identity, we hypothesize that sexual orientation and gender identity are complex, multifactorial, and polygenic, meaning that many genetic factors contribute to these phenomena, and environmental factors play a possible role through epigenetic modulation. In recent years, large-scale GWAS studies have been paramount to our modern understanding of many other complex human traits, such as in the case of autism spectrum disorder (ASD). Despite possible benefits of such research, including reduced stigma towards queer people, improved outcomes for LGBTQ+ people in familial, socio-cultural, and political contexts, and improved access to healthcare (particularly for trans populations), important risks and considerations remain surrounding this type of research. To mitigate possibilities such as invalidation of the queer identities of existing LGBTQ+ individuals, genetic discrimination, or the possibility of euthanasia of embryos with a genetic predisposition to queerness (through reproductive technologies like IVF and/or gene-editing in utero), we propose a community-engaged research (CER) framework which emphasizes the privacy and confidentiality of research participants. Importantly, the historical legacy of scientific research attempting to pathologize queerness (in particular, falsely equating gender variance to mental illness) must be acknowledged to ensure any future research conducted in this realm does not propagate notions of homophobia, transphobia or stigma against queer people. Ultimately, in a world where same-sex sexual activity is criminalized in 69 UN member states, with 67 of these states imposing imprisonment, 8 imposing public flogging, 6 (Brunei, Iran, Mauritania, Nigeria, Saudi Arabia, Yemen) invoking the death penalty, and another 5 (Afghanistan, Pakistan, Qatar, Somalia, United Arab Emirates) possibly invoking the death penalty, the importance of this research cannot be overstated, as finding a biological basis for queerness would directly oppose the harmful rhetoric that "being LGBTQ+ is a choice." Anti-trans legislation is similarly widespread: In the United States in 2022 alone (as of Oct. 13), 155 anti-trans bills have been introduced preventing trans girls and women from playing on female sports teams, barring trans youth from using bathrooms and locker rooms that align with their gender identity, banning access to gender affirming medical care (e.g., hormone-replacement therapy, gender-affirming surgeries), and imposing legal restrictions on name changes.
Understanding that a general lack of knowledge about the biological basis of queerness may be a contributing factor to the societal stigma faced by gender and sexual orientation minorities, we propose the initiation of large-scale GWAS studies investigating the genetic basis of gender identity and sexual orientation.
Keywords: genome-wide association studies (GWAS), sexual and gender minorities (SGM), polygenicity, community-engaged research (CER)
Procedia PDF Downloads 69
6171 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121 or RA 10121), as the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered to be the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans, and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements are needed in RA 10121 in order to meet the challenges of large-scale disasters. The current structure, where government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and recovery and response, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM. The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan because the agency responsible for coordination does not have decision-making authority to mobilize the action and resources of other agencies which are members of the council. Resources have been devolved to agencies responsible for each thematic area, and there is no clear command and direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, typhoon haiyan response and recovery
Procedia PDF Downloads 286
6170 Recession Rate of Gangotri and Its Tributary Glacier, Garhwal Himalaya, India through Kinematic GPS Survey and Satellite Data
Authors: Harish Bisht, Bahadur Singh Kotlia, Kireet Kumar
Abstract:
In order to reconstruct past retreat rates, the total area loss, volume change and shift in snout position were measured through multi-temporal satellite data from 1989 to 2016 and a kinematic GPS survey from 2015 to 2016. The results obtained from satellite data indicate that in the last 27 years, the Chaturangi glacier snout has retreated 1172.57 ± 38.3 m (average 45.07 ± 4.31 m/year) with a total area and volume loss of 0.626 ± 0.001 sq. km and 0.139 km³, respectively. The field measurements through differential global positioning system survey revealed that the annual retreat rate was 22.84 ± 0.05 m/year. The large variation in results derived from the two methods is probably because of the difference in their accuracy. Snout monitoring of the Gangotri glacier during the ablation season (May to September) in the years 2005 and 2015 reveals that the retreat rate has declined compared with that shown by earlier studies. The GPS dataset shows that the average recession rate is 10.26 ± 0.05 m/year. In order to determine the possible causes of the decreased retreat rate, a relationship between debris thickness and melt rate was also established by using ablation stakes. The present study concludes that the remote sensing method is suitable for large-area and long-term studies, while kinematic GPS is more appropriate for the annual monitoring of the retreat rate of the glacier snout. The present study also emphasizes the mapping of all the tributary glaciers in order to assess the overall changes in the main glacier system and its health.
Keywords: Chaturangi glacier, Gangotri glacier, glacier snout, kinematic global positioning system, retreat rate
Procedia PDF Downloads 143
6169 Improvement of Water Quality of Al Asfar Lake Using Constructed Wetland System
Authors: Jamal Radaideh
Abstract:
Al-Asfar Lake is located about 14 km east of Al-Ahsa and is one of the most important wetland lakes in the Al Ahsa/Eastern Province of Saudi Arabia. Al-Ahsa is perhaps the largest oasis in the world, with an area of 20,000 hectares; in addition, it is one of the largest and oldest agricultural centers in the region. The surplus farm irrigation water, together with additional water supplied by treated wastewater from the Al-Hofuf sewage station, is collected by a drainage network and discharged into Al-Asfar Lake. The lake has good wetlands and sand dunes as well as large expanses of open and shallow water. Salt-tolerant vegetation is present in some of the shallow areas around the lake, and huge stands of Phragmites reeds occur around the lake. The lake presents an important habitat for wildlife and birds, something one would not expect to find in a large desert. Although high evaporation rates in the range of 3250 mm are common, the water that remains in the evaporation lakes during all seasons of the year is used to supply cattle with drinking water and for aquifer recharge. Investigations showed that high concentrations of nitrogen (N), phosphorus (P), biological oxygen demand (BOD), chemical oxygen demand (COD) and salinity are discharged to Al Asfar Lake from the D2 drain. It is expected that the majority of the BOD, COD and N originates from wastewater discharge and from leachate from surplus irrigation water, which also contributes the majority of the P and salinity. The significant content of nutrients and biological oxygen demand reduces the available oxygen in the water. The present project aims to improve the water quality of the lake using constructed wetland trains which will be built around the lake. Phragmites reeds, which already occur around the lake, will be used.
Keywords: Al Asfar lake, constructed wetland, water quality, water treatment
Procedia PDF Downloads 446
6168 Barriers and Enablers to Climate and Health Adaptation Planning in Small Urban Areas in the Great Lakes Region
Authors: Elena Cangelosi, Wayne Beyea
Abstract:
This research expands the resilience planning literature by exploring the barriers and enablers to climate and health adaptation planning for small urban, coastal Great Lakes communities. With funding from the United States Centers for Disease Control and Prevention (CDC) Climate Ready City and States Initiative, this research took place during a 3-year pilot intervention project that integrates urban planning and public health. The project used the CDC’s Building Resilience Against Climate Effects (BRACE) framework to prevent or reduce the human health impacts from climate change in Marquette County, Michigan. Using a deliberation-with-analysis planning process, interviews, focus groups, and community meetings with over 25 stakeholder groups and over 100 participants identified the area’s climate-related health concerns and the adaptation interventions to address those concerns. Marquette County, on the shores of Lake Superior, the largest of the Great Lakes, was selected for the project based on its existing adaptive capacity and proactive approach to climate adaptation planning. With Marquette County as the context, this study fills a gap in the adaptation literature, which currently heavily emphasizes large-urban or agriculturally-based rural areas and largely neglects small urban areas. This research builds on the qualitative case-study, survey, and interview approach established by previous researchers on contextual barriers and enablers for adaptation planning. It uses a case study approach, including surveys and interviews of public officials, to identify the barriers and enablers for climate and health adaptation planning for small-urban areas within a large, non-agricultural, Great Lakes county. The researchers hypothesize that the barriers and enablers will, in some cases, overlap with those found in other contexts but, in many cases, will be unique to a rural setting. The study reveals that funding, staff capacity, and communication across a large, rural geography act as the main barriers, while strong networks and collaboration, interested leaders, and community interest rooted in a strong human-land connection act as the primary enablers. Challenges unique to rural areas are revealed, including weak opportunities for grant funding, large geographical distances, communication challenges with an aging and remote population, and the out-migration of educated residents. Enablers that may be unique to rural contexts include strong collaborative relationships across jurisdictions for regional work and strong connections between residents and the land. As the factors that enable and prevent climate change planning are highly contextual, understanding and appropriately addressing the unique factors at play for small-urban communities is key to effective planning in those areas. By identifying and addressing the barriers and enablers to climate and health adaptation planning for small-urban, coastal areas, this study can help Great Lakes communities appropriately build resilience to the adverse impacts of climate change. In addition, this research expands the breadth of research and understanding of the challenges and opportunities planners confront in the face of climate change. Keywords: climate adaptation and resilience, climate change adaptation, climate change and urban resilience, governance and urban resilience
Procedia PDF Downloads 120
6167 A Conceptual Framework of Impact of Lean on the Performance of Construction Industry
Authors: Jaber Shurrab, Matloub Hussain
Abstract:
The rapid pace of change in the construction industry, technological advancements, and rising costs present tremendous challenges for project managers. Project managers are under severe pressure to minimize waste and improve the efficiency of their entire operations, and the philosophy of ‘lean thinking’, in which ‘more could be achieved with less’, is becoming very popular. Lean management has strong roots in the manufacturing industry, and over the last decade the lean philosophy has started gaining attention in the service industry as well. However, little is known about waste minimization and lean implementation in the construction industry, and this paper addresses this important issue. The primary objective of this paper is to propose a conceptual framework for exploring the lean techniques applicable to medium and large construction companies and to measure their impact on the competitiveness and economic performance of construction companies in the United Arab Emirates (UAE). To this end, a comprehensive literature review and interviews with eight project managers of medium and large UAE construction companies have been conducted. It has been found that competitiveness, waste reduction and cost reduction are critical to the construction industry. This is ongoing research in lean management, giving project managers a practical framework for improving the efficiency of their projects through various lean techniques. Originality/value: The significance of this research lies in increasing the effectiveness of the construction industry and in informing the development of a lean construction framework that improves lean construction practices through lean techniques. This contributes to the effort of applying lean techniques in the construction industry, where few publications exist, particularly in the United Arab Emirates (UAE), compared with lean manufacturing. The research recommends a systematic approach for implementing the anticipated framework within a cyclical look-ahead period and emphasizes the practical implications of the proposed approach. Keywords: construction, lean, lean manufacturing, waste
Procedia PDF Downloads 283
6166 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices
Authors: Mohammed Saleh Rezk, Reza Kashani
Abstract:
Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) or reactive damping systems (such as tuned mass dampers) are the two types of damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through relatively small movements between the structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools for the modeling and design of multilayer viscoelastic damping devices to be used in damping the vibration of large structures are developed. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½-inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated numerically by incorporating one such damper, as a chevron brace, into the model of a massive frame subjected to an abrupt lateral load. A comparison of the responses of the frame to the aforementioned load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration. Keywords: viscoelastic, damper, distributed damping, tuned mass damper
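As a minimal illustration of the final comparison described above, the sketch below integrates a single-degree-of-freedom frame under an abrupt (step) lateral load, with and without an added Kelvin-Voigt (spring-plus-dashpot) viscoelastic brace. The frame mass, stiffness, and damper properties are assumed values for illustration, not the parameters of the study or its finite element model.

```python
import numpy as np

def frame_response(m, k, c_struct, k_ve=0.0, c_ve=0.0, f0=1.0e5, dt=1e-3, t_end=10.0):
    """Displacement history of a SDOF frame under a step lateral load, with an
    optional Kelvin-Voigt viscoelastic brace adding stiffness k_ve and damping c_ve."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0
    hist = np.empty(n)
    for i in range(n):
        a = (f0 - (c_struct + c_ve) * v - (k + k_ve) * x) / m  # equation of motion
        v += a * dt                                            # semi-implicit Euler step
        x += v * dt
        hist[i] = x
    return hist

# Assumed properties: 200-tonne frame, 2% inherent damping, stiff VE chevron brace.
m, k = 2.0e5, 8.0e6
c_struct = 2 * 0.02 * np.sqrt(k * m)
bare = frame_response(m, k, c_struct)
damped = frame_response(m, k, c_struct, k_ve=1.0e6, c_ve=2.0e5)
print(f"peak drift without VE damper: {bare.max() * 1e3:.1f} mm")
print(f"peak drift with VE damper:    {damped.max() * 1e3:.1f} mm")
```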
Procedia PDF Downloads 106
6165 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct cost and indirect cost. The direct cost is easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, despite several research projects on this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change is introduced. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster for 30-activity projects than for 120-activity projects. The moment of occurrence of a change also has a significant impact on productivity; indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also affects the productivity of a project when a change is implemented, with a higher loss of productivity when resources are restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
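A toy version of the kind of simulation described above is sketched below: a change injects a fixed amount of rework that the remaining activities absorb through overtime, and the efficiency of overtime work erodes with the overtime load each activity carries. The erosion rule, rework quantity, and activity sizes are illustrative assumptions, not the model actually used in the study; even so, the toy reproduces the qualitative trends reported (larger projects and later changes lose less productivity).

```python
def average_productivity(n_activities, change_at_fraction, rework_hours=200.0, hours_per_activity=40.0):
    """Toy model of productivity loss: activities completed before the change run at
    baseline productivity (1.0); the remaining activities split a fixed amount of
    rework as overtime, and overtime efficiency erodes as 1 / (1 + overtime ratio)."""
    change_index = int(n_activities * change_at_fraction)
    remaining = max(1, n_activities - change_index)
    overtime_ratio = (rework_hours / remaining) / hours_per_activity
    overtime_productivity = 1.0 / (1.0 + overtime_ratio)
    return (change_index + remaining * overtime_productivity) / n_activities

for n in (30, 120):
    for when in (0.1, 0.5, 0.9):
        p = average_productivity(n, when)
        print(f"{n:>3} activities, change at {int(when * 100):>2}% complete -> mean productivity {p:.2f}")
```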
Procedia PDF Downloads 237
6164 Developing a Cultural Policy Framework for Small Towns and Cities
Authors: Raymond Ndhlovu, Jen Snowball
Abstract:
It has long been known that the Cultural and Creative Industries (CCIs) have the potential to aid in the physical, social and economic renewal and regeneration of towns and cities, hence their importance for regional development. The CCIs can act as a catalyst for activity and investment in an area because the ‘consumption’ of cultural activities leads to the use of other, non-cultural activities and services, for example hospitality development, including restaurants and bars, as well as public transport. ‘Consumption’ of cultural activities also leads to employment creation and diversification. However, CCIs tend to be clustered, especially around large cities. There is, moreover, a case for developing CCIs around smaller towns and cities, because they do not rely on high-technology inputs or long supply chains, and their direct link to rural and isolated places makes them vital in regional development. Yet there is currently little research on how to craft cultural policy for regions of smaller towns and cities. Using the Sarah Baartman District (SBDM) in South Africa as an example, this paper describes the process of developing cultural policy for a region that has potential and existing cultural clusters, but currently no single, coherent policy relating to CCI development. The SBDM was chosen as a case study because it has no large cities, but has some CCI clusters and has identified them as potential drivers of local economic development. The process of developing cultural policy is discussed in stages: identification of the resources present, including human resources and soft and hard infrastructure; identification of clusters; analysis of CCI labour markets and ownership patterns; opportunities and challenges from the point of view of CCIs and other key stakeholders; alignment of regional policy aims with provincial and national policy objectives; and, finally, design and implementation of a regional cultural policy. Keywords: cultural and creative industries, economic impact, intrinsic value, regional development
Procedia PDF Downloads 232
6163 Synthesis of Electrospun Polydimethylsiloxane (PDMS)/Polyvinylidene Fluoride (PVDF) Nanofibrous Membranes for CO₂ Capture
Authors: Wen-Wen Wang, Qian Ye, Yi-Feng Lin
Abstract:
Carbon dioxide emissions are expected to increase continuously, resulting in climate change and global warming. As a result, CO₂ capture has attracted a large amount of research attention. Among the various CO₂ capture methods, membrane technology has proven highly efficient in capturing CO₂ because it can be scaled up and has low energy consumption and small footprint requirements for gas separation. Various nanofibrous membranes were successfully prepared by a simple electrospinning process. A membrane contactor, which combines chemical absorption with a membrane process, is used in this study for post-combustion CO₂ capture. In the membrane contactor system, the highly porous and water-repellent nanofibrous membranes serve as the gas-liquid interface for CO₂ absorption. In this work, we successfully prepared porous polyvinylidene fluoride (PVDF) membranes by electrospinning, and the as-prepared water-repellent PVDF porous membranes were then used for CO₂ capture. However, the pristine PVDF nanofibrous membranes were wetted by the amine absorbents, resulting in a decrease in the CO₂ absorption flux; therefore, hydrophobic polydimethylsiloxane (PDMS) was added to the PVDF nanofibrous membranes to improve their solvent resistance. To further increase the hydrophobicity and the CO₂ absorption flux, more hydrophobic surfaces of the PDMS/PVDF nanofibrous membranes were obtained by grafting fluoroalkylsilane (FAS) onto the membrane surface. The highest CO₂ absorption flux of the PDMS/PVDF nanofibrous membranes is reached after four FAS modification cycles. The PDMS/PVDF nanofibrous membranes with 60 wt% PDMS addition withstand long, continuous CO₂ absorption and regeneration experiments. This demonstrates that the as-prepared PDMS/PVDF nanofibrous membranes could potentially be used for large-scale CO₂ absorption in the post-combustion process in power plants. Keywords: CO₂ capture, electrospinning process, membrane contactor, nanofibrous membranes, PDMS/PVDF
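For context on how the CO₂ absorption flux of such a membrane contactor is commonly evaluated, the sketch below computes a flux from a liquid-side mass balance over the module; the liquid flow rate, CO₂ concentration gain, and membrane area are assumed, illustrative values, not measurements from this work.

```python
def co2_absorption_flux(liquid_flow_l_per_min, c_in_mol_per_l, c_out_mol_per_l, membrane_area_m2):
    """CO2 absorption flux (mol m^-2 s^-1) from a liquid-side mass balance:
    flux = Q_L * (C_out - C_in) / A."""
    q_l_per_s = liquid_flow_l_per_min / 60.0
    return q_l_per_s * (c_out_mol_per_l - c_in_mol_per_l) / membrane_area_m2

# Illustrative values only: 0.2 L/min of amine solution picking up 8 mmol/L of CO2
# across 0.005 m^2 of nanofibrous membrane area.
flux = co2_absorption_flux(0.2, 0.0, 0.008, 0.005)
print(f"CO2 absorption flux: {flux * 1e3:.2f} x 10^-3 mol m^-2 s^-1")
```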
Procedia PDF Downloads 272
6162 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter-scale dynamics and filter anisotropy affect SFS modeling accuracy is still lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. First, the accuracy of direct deconvolution models (DDM) is evaluated with respect to sub-filter-scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of the FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When the FGR is set to 1, the DDM models struggle to faithfully reconstruct the SFS stress due to inadequate resolution of SFS dynamics. Prediction accuracy improves notably when the FGR is set to 2, leading to accurate reconstruction of the SFS stress, except for cases involving the Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Second, the study examines filter anisotropy and its impact on SFS dynamics and LES accuracy. Using the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, filter aspect ratios (AR) ranging from 1 to 16 are examined. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients, exceeding 90%, are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models; however, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research. Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
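To illustrate the basic idea behind direct deconvolution with an invertible filter, the sketch below filters a one-dimensional periodic velocity field with a Gaussian kernel in Fourier space, inverts the filter (zeroing wavenumbers where the transfer function becomes too small to invert stably), and compares the exact SFS stress with the stress reconstructed from the deconvolved field. This is a minimal 1D illustration under assumed parameters, not the authors' implementation or their filter set.

```python
import numpy as np

# Synthetic 1D periodic "velocity" field with a smooth, decaying spectrum.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi
rng = np.random.default_rng(0)
u_hat = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / (1.0 + np.abs(k)) ** 2
u = np.real(np.fft.ifft(u_hat))

# Gaussian filter, invertible in spectral space; filter width of 8 grid spacings (assumed).
delta = 8.0 * (x[1] - x[0])
g_hat = np.exp(-(k * delta) ** 2 / 24.0)

def gfilter(f):
    return np.real(np.fft.ifft(np.fft.fft(f) * g_hat))

def deconvolve(f_bar, eps=1e-6):
    """Direct spectral inversion of the Gaussian filter, zeroing wavenumbers
    where the transfer function is too small to invert stably."""
    inv = np.where(g_hat > eps, 1.0 / np.maximum(g_hat, eps), 0.0)
    return np.real(np.fft.ifft(np.fft.fft(f_bar) * inv))

u_bar = gfilter(u)
u_star = deconvolve(u_bar)                           # deconvolved (reconstructed) field

tau_exact = gfilter(u * u) - u_bar * u_bar           # exact SFS stress
tau_ddm = gfilter(u_star * u_star) - u_bar * u_bar   # SFS stress from the deconvolved field

corr = np.corrcoef(tau_exact, tau_ddm)[0, 1]
print(f"correlation between exact and reconstructed SFS stress: {corr:.3f}")
```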
Procedia PDF Downloads 74