Search results for: real rational matrix transfer functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11921


2171 Construction of a Dynamic Model of Cerebral Blood Circulation for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki

Abstract:

Brain resuscitation has become increasingly important as various clinical guidelines pertinent to emergency care are revised. In brain resuscitation, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required to stabilize the physiological state of the brain, and it is described as an essential treatment point in many guidelines for disorders and diseases such as brain injury, stroke, and encephalopathy. Thus, an integrated control system for BT, ICP, and CBF would greatly alleviate the burden on medical staff and improve treatment outcomes in brain resuscitation. Developing such a control system requires models of BT, ICP, and CBF for control simulation, because trial-and-error experiments on patients are not ethically permissible. A static model of cerebral blood circulation, from the intracranial arteries and the vertebral artery to the jugular veins, has already been constructed and verified. However, that model cannot represent the pooling of blood in vessels, which is one cause of intracranial hypertension, nor the pulsation of vessels caused by blood pressure changes, which can affect cerebral tissue pressure. Thus, a dynamic model of cerebral blood circulation was constructed that accounts for the elasticity of the blood vessels and the inertia of the vessel walls. The constructed dynamic model was numerically analyzed using normal data: each arterial blood flow in the cerebral circulation, the distribution of blood pressure in the Circle of Willis, and the change of blood pressure along the flow path were calculated and verified against physiological knowledge. Because each calculated value fell within the generally known normal range, the model adequately represents at least the normal physiological state of the brain.
The next task is to verify the accuracy of the present model in cases of disease or disorder. Currently, a migration model of extracellular fluid and a model of heat transfer in cerebral tissue are under construction, to be incorporated into an integrated model of the brain's physiological state, which is necessary for developing a future integrated control system for BT, ICP, and CBF. United with these models, the present model can serve as part of an integrated model representing at least the normal physiological state of the brain.
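Vessel elasticity and wall inertia of the kind described above are commonly captured with an electrical analogue: resistance for viscous loss, inertance for blood and wall inertia, compliance for elasticity. The following is a minimal single-compartment sketch of that idea, not the authors' model; all parameter values are illustrative:

```python
def simulate_vessel(p_in=100.0, p_out=5.0, R=1.0, R_out=1.0,
                    Lw=0.01, C=0.05, dt=1e-4, t_end=2.0):
    """One elastic vessel compartment (illustrative RLC analogue).

    dQ/dt = (p_in - p - R*Q) / Lw      # inflow with inertia
    dV/dt = Q - (p - p_out) / R_out    # volume balance
    p     = V / C                      # compliance relation
    """
    q = 0.0          # inflow
    v = C * p_out    # start at venous pressure
    t = 0.0
    while t < t_end:
        p = v / C
        q += dt * (p_in - p - R * q) / Lw
        v += dt * (q - (p - p_out) / R_out)
        t += dt
    return q, v / C

# The flow settles to the steady value (p_in - p_out) / (R + R_out),
# here (100 - 5) / 2 = 47.5, with compartment pressure 52.5.
q_ss, p_ss = simulate_vessel()
```

The transient before this steady state is what a static model cannot represent; the compliance term is what allows blood pooling to appear in the dynamics.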

Keywords: dynamic model, cerebral blood circulation, brain resuscitation, automatic control

Procedia PDF Downloads 132
2170 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System

Authors: Deyu Zhou, Xiao Xue, Lizhen Cui

Abstract:

With the increasing collaboration among various services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges for the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. Therefore, this paper proposes an integrated artificial society model for the cloud manufacturing service economic system that considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts, the Agent model, the environment model, and the rules model (Agent-Environment-Rules, AER): (1) the Agent model captures important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational experiments.
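The Agent-Environment-Rules decomposition can be sketched as a minimal agent-based simulation. The code below is a generic illustration of the pattern, not the authors' model: agents perceive service qualities with noise (bounded rationality), the environment holds the services, and a rule couples demand back to quality. All names and numbers are hypothetical.

```python
import random

class Agent:
    """Boundedly rational consumer: perceives quality with noise."""
    def __init__(self, noise):
        self.noise = noise

    def choose(self, qualities):
        perceived = [q + random.gauss(0, self.noise) for q in qualities]
        return perceived.index(max(perceived))

class Environment:
    """Holds the services (here reduced to their quality levels)."""
    def __init__(self, qualities):
        self.qualities = list(qualities)

def demand_rule(env, choices):
    """Rules model (illustrative): popular services improve slightly."""
    for j in range(len(env.qualities)):
        env.qualities[j] += 0.01 * choices.count(j)

def run(n_agents=100, rounds=10, noise=0.05, seed=1):
    random.seed(seed)
    env = Environment([0.5, 0.8])     # service 1 starts with higher quality
    agents = [Agent(noise) for _ in range(n_agents)]
    for _ in range(rounds):
        choices = [a.choose(env.qualities) for a in agents]
        demand_rule(env, choices)
    return choices, env.qualities
```

With zero perception noise the agents act as perfectly rational and all select the better service; raising `noise` spreads the choices, which is the bounded-rationality effect the Agent model is meant to capture.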

Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks

Procedia PDF Downloads 49
2169 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

The complex oblique shock phenomenon can be simplified to a normal shock at the constant-area section to simulate the sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, yet most researchers choose an arbitrary location without justifying it. This study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different experimental ejector test benches are considered, a constant area-mixing ejector (CAM) and a constant pressure-mixing ejector (CPM), with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies of the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for both types of ejectors. These reference data are then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. Good agreement was found between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, the normal shock location is shown to affect only the constant-area length, and assuming the shock at the inlet of the constant-area duct yields the more accurate length. In light of previous 1-D models, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
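The jump conditions applied at the assumed shock location are the standard normal shock relations. The sketch below uses the textbook perfect-gas form (the authors' model for refrigerants such as R245fa presumably uses real-gas properties); it only illustrates the jump across the shock:

```python
import math

def normal_shock(M1, gamma=1.4):
    """Normal shock relations for a calorically perfect gas.

    Returns the downstream Mach number and the static pressure and
    density ratios across the shock (valid for M1 > 1).
    """
    if M1 <= 1.0:
        raise ValueError("normal shock requires supersonic inflow")
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2)
                   / (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)      # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2/rho1
    return M2, p_ratio, rho_ratio
```

For air at M1 = 2 this gives the familiar values M2 ≈ 0.577 and p2/p1 = 4.5, i.e. exactly the sharp pressure rise and velocity drop the 1-D model imposes wherever the shock is assumed to sit.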

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions

Procedia PDF Downloads 169
2168 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, the thermo-fluid dynamics of mixed convection (natural and forced) and the principles of turbulent flow around complex geometries are studied. In these applications, it is necessary to analyze the interaction between the flow field and a heated immersed body held at constant surface temperature. The paper presents a study of two-dimensional incompressible Newtonian flow around isothermal geometries using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code computes temperature with Dirichlet boundary conditions. Important dimensionless quantities are calculated: the Strouhal number (obtained via the Fast Fourier Transform, FFT), the Nusselt number, and the drag and lift coefficients, together with the velocity and pressure fields. Streamlines and isotherms are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method coupling the calculation of pressure, velocity, and temperature. Turbulence was simulated with the Smagorinsky and Spalart-Allmaras models. The first is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical reference data, validating the treatment of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries, and the results showed good numerical convergence with respect to the adopted references.
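The Smagorinsky closure mentioned above computes a local eddy viscosity nu_t = (Cs·Delta)^2·|S| from the resolved strain rate, with |S| = sqrt(2 S_ij S_ij). A minimal 2-D sketch on a uniform grid (the Smagorinsky constant and the test field are illustrative, not taken from the paper):

```python
import numpy as np

def smagorinsky_nu_t(u, v, h, Cs=0.17):
    """Eddy viscosity nu_t = (Cs*h)**2 * |S| on a uniform 2-D grid,
    with |S| = sqrt(2 * S_ij S_ij) from the resolved strain rate."""
    du_dx, du_dy = np.gradient(u, h, h)   # axis 0 is x, axis 1 is y
    dv_dx, dv_dy = np.gradient(v, h, h)
    S11, S22 = du_dx, dv_dy
    S12 = 0.5 * (du_dy + dv_dx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (Cs * h) ** 2 * S_mag

# Pure shear u = (gamma*y, 0) has |S| = gamma everywhere,
# so nu_t should be the constant (Cs*h)**2 * gamma.
h, gamma = 0.1, 2.0
x = np.arange(0.0, 1.0, h)
X, Y = np.meshgrid(x, x, indexing="ij")
nu_t = smagorinsky_nu_t(gamma * Y, np.zeros_like(Y), h)
```

Here the filter width is taken equal to the grid spacing h, the usual choice on a uniform mesh; the uniform-shear field serves as a check because its strain-rate magnitude is known analytically.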

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 91
2167 Official Secrecy and Confidentiality in Tax Administration and Its Impact on Right to Access Information: Nigerian Perspectives

Authors: Kareem Adedokun

Abstract:

Official secrecy is one of the colonial vestiges which upholds the non-disclosure of essential information for public consumption. Information, though an indispensable tool in tax administration, is not to be divulged by any person in the official duty of the revenue agency. As a matter of fact, the Federal Inland Revenue Service (Establishment) Act, 2007 emphasizes secrecy and confidentiality in dealing with taxpayers' documents, information, returns and assessments, in a manner reminiscent of protecting taxpayers' privacy in all situations; it is so serious that any violation attracts criminal sanction. However, Nigeria, being a democratic and egalitarian state, recently enacted the Freedom of Information Act, which ushered in openness in governance and took away the confidentiality associated with official secrets laws. Official secrecy no doubt contradicts the philosophy of freedom of information, but maintaining a proper balance between the protected rights of taxpayers and the public interest which the revenue agency upholds is an uphill task. Adopting the doctrinal method, the author therefore probes the real nature of the relationship between taxpayers and revenue agencies, interfaces official secrecy with the doctrine of freedom of information, and consequently queries the retention of the non-disclosure clause under the Federal Inland Revenue Service (Establishment) Act (FIRSEA) 2007. The paper finds, among others, that the non-disclosure provision in tax statutes, particularly as provided for in FIRSEA, is not absolute; neither are the constitutional rights and freedom of information. Unless the non-disclosure clause finds justification under a recognized exemption provided under the Freedom of Information Act, its retention is antithetical to democratic ethos and beliefs, as it may hinder public interest and public order.

Keywords: confidentiality, information, official secrecy, tax administration

Procedia PDF Downloads 291
2166 Effect of Starch and Plasticizer Types and Fiber Content on Properties of Polylactic Acid/Thermoplastic Starch Blend

Authors: Rangrong Yoksan, Amporn Sane, Nattaporn Khanoonkon, Chanakorn Yokesahachart, Narumol Noivoil, Khanh Minh Dang

Abstract:

Polylactic acid (PLA) is the most commercially available bio-based and biodegradable plastic at present. PLA has been used in plastics-related industries, including single-use containers and disposable, environmentally friendly packaging, owing to its renewability, compostability, biodegradability, and safety. Although PLA demonstrates reasonably good optical, physical, mechanical, and barrier properties, comparable to existing petroleum-based plastics, its brittleness, mold shrinkage, and price are points of concern for the production of rigid and semi-rigid packaging. Blending PLA with other bio-based polymers, including thermoplastic starch (TPS), is an alternative, not only to achieve a completely bio-based plastic, but also to reduce the brittleness, molding shrinkage, and production cost of PLA-based products. TPS is a material produced mainly from starch, which is cheap, renewable, biodegradable, compostable, and non-toxic. It is commonly prepared by plasticization of starch under heat and shear. Although glycerol has been reported as one of the most commonly used plasticizers for preparing TPS, its migration causes surface stickiness of TPS products. In some cases, mixed plasticizers or natural fibers have been applied to impede the retrogradation of starch or reduce the migration of glycerol; the introduction of fibers can also reinforce the polymer matrix. Therefore, the objective of the present research is to study the effect of starch type (native starch and phosphate starch), plasticizer type (glycerol and xylitol at glycerol-to-xylitol weight ratios of 100:0, 75:25, 50:50, 25:75, and 0:100), and fiber content (1-25 wt%) on the properties of PLA/TPS blends and composites. The blends and composites were prepared using a twin-screw extruder and then converted into dumbbell-shaped specimens using an injection molding machine.
The PLA/TPS blends prepared with phosphate starch showed higher tensile strength and stiffness than those prepared with the native starch. In contrast, the blends from native starch exhibited higher extensibility and heat distortion temperature (HDT) than those from the modified starch. Increasing the xylitol content enhanced tensile strength, stiffness, and water resistance, but decreased extensibility and HDT of the PLA/TPS blend. Tensile properties and hydrophobicity of the blend could be improved by incorporating silane-treated jute fibers.

Keywords: polylactic acid, thermoplastic starch, jute fiber, composite, blend

Procedia PDF Downloads 394
2165 Colorimetric Measurement of Dipeptidyl Peptidase IV (DPP IV) Activity via Peptide Capped Gold Nanoparticles

Authors: H. Aldewachi, M. Hines, M. McCulloch, N. Woodroofe, P. Gardiner

Abstract:

DPP-IV is an enzyme whose expression is altered in a variety of diseases and has therefore been identified as a possible diagnostic or prognostic marker for various tumours and for immunological, inflammatory, neuroendocrine, and viral diseases. Recently, the DPP-IV enzyme has also been identified as a novel target for type II diabetes treatment. There is, therefore, a need to develop sensitive and specific methods that can be easily deployed for screening the enzyme, either as a tool for drug screening or as a disease marker in biological samples. A variety of assays for determining DPP-IV activity using chromogenic and fluorogenic substrates have been introduced; nevertheless, these assays either lack the required sensitivity, especially in inhibited enzyme samples, or display low water solubility, making them difficult to use with in vivo samples, in addition to requiring labour- and time-intensive sample preparation. In this study, novel strategies exploiting the high extinction coefficient of gold nanoparticles (GNPs) are investigated in order to develop a fast, specific and reliable enzymatic assay, by designing synthetic peptide sequences containing a DPP-IV cleavage site and coupling them to GNPs. DPP-IV could be detected through the colorimetric response of the peptide-capped GNPs (P-GNPs), monitored by a UV-visible spectrophotometer or even the naked eye, with a detection limit of 0.01 unit/mL. When subjected to DPP-IV, the P-GNPs showed excellent selectivity compared with other proteins (thrombin and human serum albumin), which led to a prominent colour change. This provides a simple and effective colorimetric sensor for on-site, real-time detection of DPP-IV.

Keywords: gold nanoparticles, synthetic peptides, colorimetric detection, DPP-IV enzyme

Procedia PDF Downloads 277
2164 Archaic Ontologies Nowadays: Music of Rituals

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

Many of the interrogations and dilemmas of the contemporary world have found their answer in what is generically called the appeal to the matrix. This genuine spiritual exercise of re-connecting the present to its origins, to the primary source, revealed the ontological condition of timeless, ahistorical, immutable (epi)phenomena, of those pure essences concentrated in the archetypal-referential layer of human existence. Musical creation was no exception to this trend: the impasse generated by the deterministic excesses of total serialism or, conversely, by some questionable results of the extreme indeterminism proper to the avant-garde movements, stimulated many composers to rediscover a universal grammar, as the emanation of a new 'collective' order (the reverse of utopian individualism). In this context, the music of oral tradition, and therefore the world of the ancient modes, represented a true revelation for the composers of the twentieth century, who suddenly found themselves before unsuspected (re)sources with a major impact on all levels of the edification of the musical work: morphology, syntax, timbre, semantics, etc. For contemporary Romanian creators, the music of rituals, alive in the local archaic culture, opened unsuspected perspectives, calling for a synthetic, inclusive and recuperative vision in which the primary (archetypal) elements merge with the latest achievements of language of the European composers. Thus, anchored in a strong and genuine modal source, the compositions analysed in this paper evoke, in a manner as modern as possible, the atmosphere of ancestral rituals such as: the invocation of rain during drought (Paparudele, Scaloianul), the funeral ceremony (Bocetul), and traditions specific to the winter holidays and the new year (Colinda, Cântecul de stea, Sorcova, traditional folklore dances).
The reactivation of those rituals in the sound context of the twentieth century meant potentiating or resizing the archaic spirit of the primordial symbolic entities, at levels of complexity generated by the techniques of chordal-layer harmony, of complex aggregates (gravitational or non-gravitational, geometric), of mixture polyphonies and polyphonies with global effect (group, mass), and by the techniques of heterophony, texture and cluster, leading to processes of collective improvisation and instrumental theatre.

Keywords: archetype, improvisation, polyphony, ritual, instrumental theatre

Procedia PDF Downloads 274
2163 Antioxidant, Hypoglycemic and Hypotensive Effects Affected by Various Molecular Weights of Cold Water Extract from Pleurotus Citrinopileatus

Authors: Pao-Huei Chen, Shu-Mei Lin, Yih-Ming Weng, Zer-Ran Yu, Be-Jen Wang

Abstract:

Pancreatic α-amylase and intestinal α-glucosidase are the critical enzymes for the breakdown of complex carbohydrates into di- or mono-saccharides, and they play an important role in modulating postprandial blood sugar. Angiotensin-converting enzyme (ACE) converts inactive angiotensin-I into active angiotensin-II, which subsequently increases blood pressure by triggering vasoconstriction and aldosterone secretion. Thus, inhibition of the carbohydrate-digesting enzymes and of ACE helps the management of blood glucose and blood pressure, respectively. Studies have shown that Pleurotus citrinopileatus (PC), an edible mushroom commonly cultivated in East Asian countries, exerts anticancer, immune-improving, antioxidative, hypoglycemic, and hypolipidemic effects. Previous studies have also shown that fractions of different molecular weights (MW) from extracts may differ in biological activity owing to varying contents of bioactive components. Thus, the objective of this study was to investigate the in vitro antioxidant, hypoglycemic and hypotensive effects, and the distribution of active compounds, of various MW fractions of the cold water extract of P. citrinopileatus (CWEPC). CWEPC was fractionated into four MW fractions, PC-I (<1 kDa), PC-II (1-3.5 kDa), PC-III (3.5-10 kDa), and PC-IV (>10 kDa), using an ultrafiltration system. The physiological activities, including antioxidant activities and the inhibition of pancreatic α-amylase, intestinal α-glucosidase, and hypertension-linked ACE, and the active components, including polysaccharide, protein, and phenolic contents, of CWEPC and the four fractions were determined. The results showed that fractions with lower MW exerted higher antioxidant activity (p<0.05), which was positively correlated with the level of total phenols.
In contrast, the inhibitory effects of the PC-IV fraction on the activities of α-amylase, α-glucosidase, and ACE were significantly higher than those of CWEPC and the three low-MW fractions (<10 kDa), and were more closely related to protein contents. The inhibition capability of CWEPC and PC-IV on α-amylase activity was 1/13.4 and 1/2.7, respectively, relative to that of acarbose (positive control). However, the inhibitory ability of PC-IV on α-glucosidase (IC50 = 0.5 mg/mL) was significantly higher than that of acarbose (IC50 = 1.7 mg/mL). Kinetic data revealed that the PC-IV fraction exhibited non-competitive inhibition of α-glucosidase activity. In conclusion, the distribution of various bioactive components contributes to the functions of the different MW fractions in oxidative stress prevention and in blood pressure and glucose modulation.
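For the non-competitive inhibition reported above, the standard rate law is v = Vmax·S / ((Km + S)(1 + I/Ki)), so the fractional activity 1/(1 + I/Ki) does not depend on substrate concentration and the IC50 equals Ki. A minimal numerical sketch (the kinetic parameter values are illustrative, not the paper's data):

```python
def noncompetitive_rate(S, I, Vmax=1.0, Km=0.3, Ki=0.5):
    """Michaelis-Menten rate with a non-competitive inhibitor:
    v = Vmax*S / ((Km + S) * (1 + I/Ki)). Illustrative parameters."""
    return Vmax * S / ((Km + S) * (1.0 + I / Ki))

def ic50(S, lo=0.0, hi=100.0):
    """Bisect for the inhibitor concentration that halves the rate."""
    v0 = noncompetitive_rate(S, 0.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if noncompetitive_rate(S, mid) > 0.5 * v0:
            lo = mid   # not yet half-inhibited: need more inhibitor
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Running `ic50` at very different substrate concentrations returns the same value, Ki; observing that substrate-independence experimentally is one way kinetic data identify the inhibition as non-competitive.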

Keywords: α-amylase, angiotensin converting enzyme, α-glucosidase, Pleurotus citrinopileatus

Procedia PDF Downloads 439
2162 Experimental Investigation on the Lithium-Ion Battery Thermal Management System Based on Micro Heat Pipe Array in High Temperature Environment

Authors: Ruyang Ren, Yaohua Zhao, Yanhua Diao

Abstract:

The intermittent and unstable character of renewable energy sources such as solar energy can be effectively addressed through battery energy storage systems. Lithium-ion batteries are widely used in battery energy storage because of their high energy density, small internal resistance, low self-discharge rate, lack of memory effect, and long service life. However, the performance and service life of a lithium-ion battery are seriously affected by its operating temperature, so the safe operation of a lithium-ion battery module is inseparable from an effective thermal management system (TMS). In this study, a new type of TMS based on a micro heat pipe array (MHPA) is established for lithium-ion batteries and applied to a battery energy storage box that must operate in a high-temperature environment of 40 °C all year round. The MHPA is a flat metal body with high thermal conductivity and excellent temperature uniformity. The battery energy storage box is composed of four battery modules, with a nominal voltage of 51.2 V and a nominal capacity of 400 Ah. Through the excellent heat transfer characteristics of the MHPA, the heat generated during charge and discharge can be quickly transferred out of the battery module. In addition, if the MHPA alone cannot meet the heat dissipation requirements of the battery module, the TMS automatically controls a fan outside the battery module according to the battery temperature, further enhancing heat dissipation. The thermal management performance of the MHPA-based TMS is studied experimentally under different ambient temperatures, with and without the fan.
Results show that at an ambient temperature of 40 °C with the fan off throughout charge and discharge, the maximum battery temperature in the energy storage box is 53.1 °C and the maximum temperature difference within a battery module is 2.4 °C. With the fan on throughout charge and discharge, the maximum temperature is reduced to 50.1 °C and the maximum temperature difference to 1.7 °C. The MHPA-based TMS thus not only keeps the maximum battery temperature below 55 °C but also ensures excellent temperature uniformity within the battery module, allowing the battery energy storage box to operate safely and stably in a high-temperature environment.
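The qualitative effect of the fan can be illustrated with a lumped-capacitance energy balance, m·c·dT/dt = Q_gen - hA·(T - T_amb): turning the fan on raises the effective conductance hA and so lowers the steady-state temperature T_amb + Q_gen/hA. The sketch below uses illustrative parameter values, not the measured ones from the experiment:

```python
def battery_temperature(hA, Q_gen=50.0, T_amb=40.0, m_c=2000.0,
                        dt=1.0, t_end=20000.0):
    """Lumped-capacitance battery: m*c*dT/dt = Q_gen - hA*(T - T_amb).
    hA in W/K, Q_gen in W, m_c in J/K; returns T near steady state."""
    T, t = T_amb, 0.0
    while t < t_end:
        T += dt * (Q_gen - hA * (T - T_amb)) / m_c
        t += dt
    return T

T_no_fan = battery_temperature(hA=3.8)   # MHPA with natural convection only
T_fan = battery_temperature(hA=5.0)      # fan raises the effective hA
```

With these made-up numbers the fan lowers the steady temperature from about 53 °C to 50 °C, mirroring the direction (not the exact values) of the reported measurements.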

Keywords: heat dissipation, lithium-ion battery thermal management, micro heat pipe array, temperature uniformity

Procedia PDF Downloads 139
2161 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), that produce a prescribed solution (a freeform surface). This inverse problem is well understood for conventional machining, because the cutting-tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the removal vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will also influence the actual geometry of the generated surface. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Although they are considered distinct technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, in which the etched material depends on the feed speed of the jet at each instant. PLA processes, on the other hand, are usually described as discrete systems, in which the total removed material is the sum of the individual pulses shot during the process; the overlap of these shots depends on the feed speed and the frequency between consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour approaches that of a continuous process. Using this approximation, a generic continuous model can describe both processes.
The inverse problem is usually solved for this kind of process by simply setting the dwell time proportional to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are valid only when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm; moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the machine dynamics on the actual movement of the jet is also important and should be taken into account; when the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
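The linear model referred to above treats the etched depth as a convolution of the beam footprint with the dwell time, d = A t, so the dwell times follow from a linear solve. The sketch below uses a hypothetical Gaussian footprint on a 1-D pixel row; the discrete adjoint machinery and the nonlinear surface dependence that motivate the paper are deliberately omitted:

```python
import numpy as np

def footprint_matrix(n, sigma=1.0):
    """A[i, j]: depth etched at pixel i per unit dwell time at pixel j,
    here a hypothetical Gaussian beam footprint (sigma in pixels)."""
    idx = np.arange(n)
    return np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * sigma**2))

def solve_dwell_times(target_depth, sigma=1.0):
    """Linear inverse problem: least-squares dwell times for a target depth."""
    A = footprint_matrix(len(target_depth), sigma)
    t, *_ = np.linalg.lstsq(A, target_depth, rcond=None)
    return t

# Round trip: forward-model a known dwell profile, then recover it.
n = 20
t_true = np.linspace(0.5, 2.0, n)
depth = footprint_matrix(n) @ t_true
t_rec = solve_dwell_times(depth)
```

The round trip succeeds here only because the forward model really is linear; once the removal rate depends on the evolving surface, this solve becomes the inner step of an iterative (e.g. adjoint-based) optimization instead.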

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 254
2160 Yield Level, Variability and Yield Gap of Maize (Zea mays L.) under Variable Climate Conditions of the Semi-Arid Central Rift Valley of Ethiopia

Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke

Abstract:

Soil moisture and nutrient availability are the two key edaphic factors that affect crop yields and are directly or indirectly affected by climate variability and change. The study examined climate-induced yield levels, yield variability and the yield gap of maize during the 1981-2010 main growing seasons in the Central Rift Valley (CRV) of Ethiopia. Pearson correlation tests were employed to examine the relationship between climate variables and yield, and the coefficient of variation (CV) was used to analyze annual yield variability. The Decision Support System for Agrotechnology Transfer cropping system model (DSSAT-CSM) was used to simulate the growth and yield of maize over the study period. The results indicated that maize grain yield was strongly (P<0.01) and positively correlated with seasonal rainfall (r = 0.67 at Melkassa and r = 0.69 at Ziway) in the CRV, while daytime temperature affected grain yield negatively (r = -0.44) at Ziway (P<0.05) during the simulation period. Variation in total seasonal rainfall at Melkassa and Ziway explained 44.9 and 48.5% of the variation in yield, respectively, under optimum nutrition. Following the variation in rainfall, higher yield variability (CV = 23.5% at Melkassa and 25.3% at Ziway) was observed for the optimum-nutrient simulation than for the corresponding nutrient-limited simulation (CV = 16% at Melkassa and 24.1% at Ziway). The observed farmers' yield was 72, 52 and 43% of the researcher-managed, water-limited and potential yields of the crop, respectively, indicating a wide maize yield gap in the region. The study revealed that rainfed crop production in the CRV is prone to yield variability owing to its high dependence on seasonal rainfall and nutrient levels. Moreover, the high coefficient of variation in the yield gap over the 30-year period underscores the need for a dependable water supply at both locations.
Given the wide yield gap, especially in lower-rainfall years across the simulation period, a more dependable application of irrigation water and a potential shift to irrigated agriculture are indicated; hence, adopting options that improve water availability and nutrient use efficiency would be crucial for crop production in the area.
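The two statistics used above are the Pearson correlation coefficient, r = cov(x, y)/(sigma_x·sigma_y), and the coefficient of variation, CV = 100·sigma/mu. A minimal sketch with made-up rainfall and yield numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cv_percent(x):
    """Coefficient of variation (%), using the sample standard deviation."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((a - m) ** 2 for a in x) / (n - 1))
    return 100.0 * sd / m

rain = [450, 520, 390, 610, 480]   # hypothetical seasonal rainfall (mm)
yld = [2.1, 2.6, 1.8, 3.1, 2.4]    # hypothetical maize yield (t/ha)
r = pearson_r(rain, yld)
```

A significance test (the P-values quoted above) would additionally convert r to a t-statistic with n-2 degrees of freedom; that step is omitted here.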

Keywords: climate variability, crop model, water availability, yield gap, yield variability

Procedia PDF Downloads 37
2159 Traditional and New Residential Architecture in the Approach of Sustainability in the Countryside after the Earthquake

Authors: Zeynep Tanriverdi̇

Abstract:

Sustainable architecture is a design approach that provides healthy, comfortable, safe, clean space production as well as utilizes minimum resources for efficient and economical use of natural resources and energy. Traditional houses located in rural areas are sustainable structures built at the design and implementation stage in accordance with the climatic environmental data of the region and also effectively using natural energy resources. The fact that these structures are located in an earthquake geography like Türkiye brings their earthquake resistance to the agenda. Since the construction of these structures, which contain the architectural and technological cultural knowledge of the past, is shaped according to the characteristics of the regions where they are located, their resistance to earthquakes also differs. Analyses in rural areas after the earthquake show that there are light-damaged structures that can survive, severely damaged structures, and completely destroyed structures. In this regard, experts can implement repair, consolidation, and reconstruction applications, respectively. While simple repair interventions are carried out in accordance with the original data in traditional houses that have shown great resistance to earthquakes, reinforcement work blended with new technologies can be applied in damaged structures. In reconstruction work, a wide variety of applications can be seen with the possibilities of modern technologies. In rural areas experiencing earthquakes around the world, there are experimental new housing applications that are renewable, environmentally friendly, and sustainable with modern construction techniques in the light of scientific data. With these new residences, it is aimed to create earthquake-resistant, economical, healthy, and pain-relieving therapy spaces for people whose daily lives have been interrupted by disasters. 
In this study, the preservation of highly earthquake-prone rural areas is discussed through the knowledge transfer of traditional architecture and through permanent housing practices that use new sustainable technologies to improve such areas. In this way, it will be possible to keep losses to a minimum with sustainable, reliable applications prepared for the worst aspects of a disaster situation and to establish a link between the knowledge of the past and the new technologies of the future.

Keywords: sustainability, conservation, traditional construction systems and materials, new technologies, earthquake resistance

Procedia PDF Downloads 32
2158 Urban River as Living Infrastructure: Tidal Flooding and Sea Level Rise in a Working Waterway in Hampton Roads, Virginia

Authors: William Luke Hamel

Abstract:

Existing conceptions of urban flooding caused by tidal fluctuations and sea-level rise have been framed inadequately by metrics of resilience and methods of flow modeling. While a great deal of research has been devoted to the effects of urbanization on pluvial flooding, the kind of tidal flooding experienced by locations like Hampton Roads, Virginia, has not been adequately understood as a result of human factors such as urbanization and gray infrastructure. Resilience to sea-level rise and its associated flooding has been pioneered in the region with the 2015 Norfolk Resilience Plan from 100 Resilient Cities as well as the 2016 Norfolk Vision 2100 plan, which envisions different patterns of land use for the city. Urban resilience still conceptualizes the city as having the ability to maintain an equilibrium in the face of disruptions. This economic and social equilibrium relies on the Elizabeth River, narrowly conceptualized. Intentionally or accidentally, the river was made into a piece of infrastructure. Its development was meant to serve the docks, shipyards, naval yards, and port infrastructure that give the region so much of its economic life. Inasmuch as it functions to permit the movement of cargo; the raising and lowering of ships to be repaired, commissioned, or decommissioned; or the provisioning of military vessels, the river as infrastructure is functioning properly. The idea that the infrastructure is malfunctioning when high tides and sea-level rise create flooding is predicated on the idea that the infrastructure is truly a human creation and can be controlled. The natural flooding cycles of an urban river, combined with the action of climate change and sea-level rise, are only abnormal insofar as they encroach on the development that first encroached on the river. 
The urban political ecology of water provides the ability to view the river as an infrastructural extension of urban networks while also calling for its emancipation from stationarity and human control. Understanding the river and city as a hydrosocial territory or as a socio-natural system liberates both actors from the duality of the natural and the social while repositioning river flooding as a normal part of coexistence on a floodplain. This paper argues for the adoption of an urban political ecology lens in the analysis and governance of urban rivers like the Elizabeth River as a departure from the equilibrium-seeking and stability metrics of urban resilience.

Keywords: urban flooding, political ecology, Elizabeth River, Hampton Roads

Procedia PDF Downloads 135
2157 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. To enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone suffice to accomplish precoding, MIMO technology at mmWave is different because of digital precoding limitations. Moreover, fully digital precoding requires a large number of radio frequency (RF) chains, along with the corresponding signal mixers and analog-to-digital converters. As RF chain cost and power consumption are high, another alternative is needed. Although the hybrid precoding architecture, which combines a baseband precoder with an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategy from RF chains to antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected structure, a hybrid precoding sub-array architecture, reduces hardware complexity by using fewer phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. 
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach reduces significantly the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 293
2156 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration in logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th-percentile latency from 30 to 4 seconds.
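The accuracy-annotation idea can be illustrated with a small sketch: estimate per-group counts from a uniform row sample and flag an estimate as accurate when the relative half-width of its confidence interval is small. The sampling rate, threshold, and binomial approximation below are illustrative assumptions, not PowerDrill's implementation:

```python
import math
import random
from collections import Counter

def estimate_group_counts(rows, key, rate, z=1.96, max_rel_err=0.05):
    """Estimate per-group counts from a uniform sample and mark each
    estimate 'accurate' when the relative half-width of its normal-
    approximation confidence interval is below max_rel_err."""
    sample = [r for r in rows if random.random() < rate]
    counts = Counter(key(r) for r in sample)
    annotated = {}
    for group, c in counts.items():
        estimate = c / rate
        half_width = z * math.sqrt(c * (1 - rate)) / rate  # binomial approx.
        annotated[group] = (estimate, half_width / estimate <= max_rel_err)
    return annotated

random.seed(1)
rows = ["a"] * 100_000 + ["b"] * 1_000
estimates = estimate_group_counts(rows, key=lambda r: r, rate=0.1)
```

Large groups come back flagged accurate while rare groups are flagged uncertain, which matches the intent of annotating intermediate results before the full scan completes.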

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 229
2155 Other End of the Leash: The Volunteer Handlers' Perspective of Animal-Assisted Interventions

Authors: Julie A. Carberry, Victor Maddalena

Abstract:

Animal-Assisted Interventions (AAIs) have existed in various forms for centuries, and in the past 30 years there has been a dramatic increase in their popularity. AAIs are now part of the lives of persons of all ages in many types of institutions. Anecdotal evidence of the benefits of AAIs has led to widespread adoption, yet a solid research base for support remains lacking. The research question was: what are the lived experiences of AAI volunteer handlers? An interpretive phenomenological methodology was used for this qualitative study. Data were collected from 1-2 hour-long semi-structured interviews and one observational field visit. All interviews were conducted, transcribed, and coded for themes by the principal investigator. Participants must have been active St. John Ambulance Therapy Dog Program volunteers for at least one year. In total, 14 volunteer handlers, along with some of their dogs, were included. The St. John Ambulance is a not-for-profit organization that provides training and community services to Canadians. The Therapy Dog Program is 1 of its 4 nationally recognized core community service programs. The program incorporates dogs into the otherwise traditional therapeutic intervention of friendly visitation with clients. The absence of formal objectives and goals, and of a trained therapist, defines the program as an Animal-Assisted Activity (AAA), which is a type of AAI. Since the animals incorporated are dogs, the program is specifically a Canine-Assisted Activity (CAA), which is a type of Canine-Assisted Intervention (CAI). 
Six themes emerged from the analysis of the data: (a) a win-win-win situation for all parties involved – volunteer handlers, clients, and the dogs, (b) being on the other end of the leash: functions of the role of volunteer handler, (c) the importance of socialization: from spreading smiles to creating meaningful connections, (d) the role of the dog: initiating interaction and providing comfort, (e) an opportunity to feel good and destress, and (f) altruism versus personal rewards. Other insights were found regarding the program, clients, and staff. Possible implications from this research include increased organizational recruitment and retention of volunteer handlers and as well as increased support for CAAs and other CAIs that incorporate teams of volunteer handlers and their dogs. This support could, in turn, add overall support for the acceptance and broad implementation of AAIs as an alternative and or complementary non-pharmaceutical therapeutic intervention.

Keywords: animal-assisted activity, animal-assisted intervention, canine-assisted activity, canine-assisted intervention, perspective, qualitative, volunteer handler

Procedia PDF Downloads 107
2154 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° with respect to the substrate plate). These are mainly used for the dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are, therefore, indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the SoA, the support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs is LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches are derived. 
The investigations are carried out on the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 66
2153 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information, and there is a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information poses a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e. trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world and, at the same time, to produce a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that at the end of this study, a novel framework that can improve online health information seeking will have been developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT

Procedia PDF Downloads 241
2152 Layout Optimization of a Start-up COVID-19 Testing Kit Manufacturing Facility

Authors: Poojan Vora, Hardik Pancholi, Sanket Tajane, Harsh Shah, Elias Keedy

Abstract:

The global COVID-19 pandemic has affected industry drastically in many ways. Even though the vaccine is being distributed quickly and despite the decreasing number of positive cases, testing is projected to remain a key aspect of the ‘new normal’. Improving the existing plant layout and improving safety within the facility are of great importance in today’s industries because of the need to optimize productivity and reduce safety risks. In practice, it is essential for any manufacturing plant to reduce non-value-adding steps, such as the movement of materials, and to rearrange similar processes. In the current pandemic situation, optimized layouts will not only increase safety measures but also decrease the fixed cost per unit manufactured. In our case study, we carefully studied the existing layout and the manufacturing steps of a new Texas start-up company that manufactures COVID-19 testing kits. The effects of production rate are incorporated with the computerized relative allocation of facilities technique (CRAFT) algorithm to improve the plant layout and estimate the optimization parameters. Our work reduces the company’s material handling time and increases its daily production. Real data from the company are used in the case study to highlight the importance of colleges in fostering small business needs and improving the collaboration between college researchers and industry by using existing models to advance best practices.
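CRAFT-style heuristics improve a layout by evaluating pairwise exchanges of departments and keeping the best cost-reducing swap. A minimal sketch of that exchange loop follows, with an assumed flow matrix and location distance matrix rather than the company's data:

```python
import numpy as np
from itertools import combinations

def craft_pairwise_exchange(flow, dist):
    """CRAFT-style improvement: starting from the identity assignment
    of departments to locations, repeatedly apply the best pairwise
    swap until no swap lowers the material-handling cost
    sum_ij flow[i, j] * dist[loc[i], loc[j]]."""
    n = flow.shape[0]
    loc = np.arange(n)                       # loc[d] = location of dept d

    def cost(assign):
        return sum(flow[i, j] * dist[assign[i], assign[j]]
                   for i in range(n) for j in range(n))

    while True:
        best_cost, best_swap = cost(loc), None
        for a, b in combinations(range(n), 2):
            loc[a], loc[b] = loc[b], loc[a]  # try the swap
            c = cost(loc)
            if c < best_cost:
                best_cost, best_swap = c, (a, b)
            loc[a], loc[b] = loc[b], loc[a]  # undo
        if best_swap is None:
            return loc, cost(loc)
        a, b = best_swap
        loc[a], loc[b] = loc[b], loc[a]      # keep the best swap

# Three departments on a line; departments 0 and 2 exchange heavy flow.
flow = np.array([[0, 0, 10], [0, 0, 1], [10, 1, 0]])
dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
layout, total_cost = craft_pairwise_exchange(flow, dist)
```

The heuristic moves the heavily interacting departments next to each other, which is the mechanism by which CRAFT reduces material handling time in a real facility.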

Keywords: computerized relative allocation of facilities technique, facilities planning, optimization, start-up business

Procedia PDF Downloads 115
2151 Lung Tissue Damage under Diesel Exhaust Exposure: Modification of Proteins, Cells and Functions in Just 14 Days

Authors: Ieva Bruzauskaite, Jovile Raudoniute, Karina Poliakovaite, Danguole Zabulyte, Daiva Bironaite, Ruta Aldonyte

Abstract:

Introduction: Air pollution is a growing global problem which has been shown to be responsible for various adverse health outcomes. Immunotoxicity, such as dysregulated inflammation, has been proposed as one of the main mechanisms in air pollution-associated diseases. Chronic obstructive pulmonary disease (COPD) is among the major causes of morbidity and mortality worldwide and is characterized by persistent airflow limitation caused by small airways disease (obstructive bronchiolitis) and irreversible parenchymal destruction (emphysema). Exact pathways explaining the air pollution-induced and -mediated disease states are still not clear. However, modern societies understand the dangers of polluted air, seek to mitigate such effects, and are in need of reliable biomarkers of air pollution. We hypothesise that post-translational modifications of structural proteins, e.g. citrullination, might be good candidate biomarkers. Thus, we designed this study, in which mice were exposed to diesel exhaust and the resulting protein modifications and inflammation in lungs and other tissues were assessed. Materials and Methods: To assess the effects of diesel exhaust, an in vivo study was designed. Mice (n=10) were subjected to an everyday 2-hour exposure to diesel exhaust for 14 days. Control mice were treated the same way but without diesel exhaust. The effects within the lung and other tissues were assessed by immunohistochemistry of formalin-fixed and paraffin-embedded tissues. Levels of inflammation- and citrullination-related markers were investigated, and the extent of parenchymal damage was measured. Results: The in vivo study corroborates our own in vitro data and reveals a diesel exhaust-initiated inflammatory shift and modulation of the levels of lung peptidyl arginine deiminase 4 (PAD4), a citrullination-associated enzyme. In addition, high levels of citrulline were observed in exposed lung tissue sections, co-localising with increased parenchymal destruction. 
Conclusions: Subacute exposure to diesel exhaust renders mice lungs inflammatory and modifies certain structural proteins. Such structural changes may pave a pathway to loss or gain of function of the affected molecules and also propagate autoimmune processes within the lung and systemically.

Keywords: air pollution, citrullination, in vivo, lungs

Procedia PDF Downloads 117
2150 Inversion of the Spectral Analysis of Surface Waves Dispersion Curves through the Particle Swarm Optimization Algorithm

Authors: A. Cerrato Casado, C. Guigou, P. Jean

Abstract:

In this investigation, the particle swarm optimization (PSO) algorithm is used to perform the inversion of the dispersion curves in the spectral analysis of surface waves (SASW) method. This inverse problem usually presents complicated solution spaces with many local minima that make convergence to the correct solution difficult. PSO is a metaheuristic method that was originally designed to simulate social behavior but has demonstrated powerful capabilities for solving inverse problems with complex solution spaces and a high number of variables. The dispersion curve of the synthetic soils is constructed by the vertical flexibility coefficient method, which is especially convenient for soils whose stiffness does not increase gradually with depth. The reason is that these types of soil profiles are not normally dispersive, since the dominant mode of Rayleigh waves is usually not coincident with the fundamental mode. Multiple synthetic soil profiles have been tested to show the characteristics of the convergence process and assess the accuracy of the final soil profile. In addition, the inversion procedure is applied to multiple real soils, and the final profile is compared with the available information. The combination of the vertical flexibility coefficient method to obtain the dispersion curve and the PSO algorithm to carry out the inversion proves to be a robust procedure that is able to provide good solutions for complex soil profiles even with scarce prior information.
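The PSO update that drives such an inversion can be sketched in a few lines. The objective below is a toy two-well misfit standing in for a dispersion-curve misfit, and the swarm parameters are common textbook choices, not the values used in this investigation:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()           # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy two-well misfit with minima at (±1, 0), standing in for a
# dispersion-curve misfit over layer parameters.
misfit = lambda p: (p[0] ** 2 - 1.0) ** 2 + p[1] ** 2
best, best_f = pso(misfit, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

Because the swarm shares its global best, particles escape the local basins that would trap a gradient-based inversion of a multimodal misfit.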

Keywords: dispersion, inverse problem, particle swarm optimization, SASW, soil profile

Procedia PDF Downloads 155
2149 Challenges of Management of Acute Pancreatitis in Low Resource Setting

Authors: Md. Shakhawat Hossain, Jimma Hossain, Md. Naushad Ali

Abstract:

Acute pancreatitis is a dangerous medical emergency in the practice of gastroenterology. Management of acute pancreatitis needs a multidisciplinary approach, with support from the emergency department to the ICU. So, there is a chance of mismanagement at every step, especially in low resource settings. Other factors, such as the patient’s financial condition, education, social customs, transport facilities, and the referral system from the periphery, may also challenge the current guidelines for management. The present study is intended to determine the clinico-pathological profile, severity assessment, and challenges of management of acute pancreatitis in a government tertiary care hospital, to image the real scenario of management in a low resource place. A total of 100 patients with acute pancreatitis were studied in this prospective study, held in the Department of Gastroenterology, Rangpur Medical College Hospital, Bangladesh, from July 2017 to July 2018. Regarding severity, 85% of the patients had mild, 13% moderately severe, and 2% severe acute pancreatitis according to the revised Atlanta criteria. The most common etiologies of acute pancreatitis in our study were gallstones (15%) and biliary sludge (15%), whereas 54% were idiopathic. The most common challenges we faced were delay in hospital admission (59%) and delay in hospital diagnosis (20%). Others were non-adherence by patients’ parties, lack of investigation facilities, and physicians’ poor knowledge of current guidelines. We were able to give early aggressive fluid therapy to only 18% of patients as per the current guideline. Conclusion: Management of acute pancreatitis as per guideline is challenging when the optimum facility is lacking. So, modified guidelines for the assessment and management of acute pancreatitis should be prepared for low resource settings.

Keywords: acute pancreatitis, challenges of management, severity, prognosis

Procedia PDF Downloads 100
2148 Consolidation Behavior of Lebanese Soil and Its Correlation with the Soil Parameters

Authors: Robert G. Nini

Abstract:

Soil consolidation is one of the biggest problems facing engineers. The consolidation process plays an important role in settlement analysis for embankments and footings resting on clayey soils. The settlement amount is related to the compression and swelling indexes of the soil. Because the predominant upper soil layer in Lebanon consists mainly of clay, this layer is a real challenge for structural and highway engineering. To determine the effect of load and drainage on the engineering consolidation characteristics of Lebanese soil, a full experimental and synthesis study was conducted on soil samples collected from many locations. This study consists of two parts. During the first, experimental part, the Proctor test and the consolidation test were performed on the collected soil samples, followed by the soil identification tests: hydrometer, specific gravity, and Atterberg limits. The consolidation test, which is the main test in this research, is done by loading the soil for several days and then applying an unloading cycle; it takes two weeks to complete a typical consolidation test. For these reasons, during the second part of our research, based on the analysis of the experimental results, correlations were sought between the main consolidation parameters, the compression and swelling indexes, and other soil parameters that are easy to calculate. The results show that the compression and swelling indexes of Lebanese clays may be roughly estimated using a model involving one or two variables in the form of the natural void ratio and the Atterberg limits. These correlations are of increasing importance for site engineers, and the proposed model also seems applicable to a wide range of clays worldwide.
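The practical use of the compression index appears in the classical one-dimensional settlement formula, which the following sketch evaluates together with a Skempton-type correlation from the liquid limit; the layer thickness, void ratio, and stresses are illustrative numbers, not the study's data or its fitted model:

```python
import math

def primary_settlement(cc, e0, h, sigma0, delta_sigma):
    """One-dimensional primary consolidation settlement of a normally
    consolidated clay layer:
        s = Cc * H / (1 + e0) * log10((sigma0 + delta) / sigma0)."""
    return cc * h / (1.0 + e0) * math.log10((sigma0 + delta_sigma) / sigma0)

def cc_from_liquid_limit(ll):
    """Skempton-type empirical estimate of the compression index
    from the liquid limit (remoulded clays): Cc = 0.009 * (LL - 10)."""
    return 0.009 * (ll - 10.0)

# 4 m clay layer, e0 = 1.1, Cc = 0.35, vertical stress doubling from 80 kPa
s = primary_settlement(cc=0.35, e0=1.1, h=4.0, sigma0=80.0, delta_sigma=80.0)
```

A correlation of the kind the paper proposes would replace the fixed Cc above with an estimate from the natural void ratio and Atterberg limits, sparing the two-week consolidation test for routine jobs.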

Keywords: Atterberg limits, clay, compression and swelling indexes, settlement, soil consolidation

Procedia PDF Downloads 105
2147 Evaluation and Preservation of Post-War Concrete Architecture: The Case of Lithuania

Authors: Aušra Černauskienė

Abstract:

The heritage of modern architecture is closely related to the materiality and technology used to implement the buildings. Concrete is one of the most ubiquitous post-war building materials, with enormous aesthetic and structural potential that architects used creatively both for everyday buildings and for the exceptional architectural objects that have survived. Concrete's material, structural, and architectural development over the post-war years produced a remarkably rich and diverse typology of buildings, for whose implementation unique handicraft skills and industrialized novelties were used. Nonetheless, in the opinion of the public, concrete architecture is often treated as ugly and obsolete, and in Lithuania it also carries negative associations with the scarcity of the Soviet era. Moreover, aesthetic non-appreciation is not the only challenge that concrete architecture faces. It also no longer meets contemporary requirements: the buildings are of a poor energy class, have little potential for transformation, and have an obsolete surrounding environment. Thus, as young heritage, concrete architecture is not yet sufficiently appreciated by society and heritage specialists, since too little time has passed to rethink what it means from a historical perspective. However, concrete architecture, though considered ambiguous, has its own character and specificity, which need to be carefully studied in terms of cultural heritage to avoid the risk of poor renovation or even demolition, which has increased in recent decades in Lithuania. For example, several valuable pieces of post-war concrete architecture, such as the Banga restaurant and the Summer Stage in Palanga, were demolished without an understanding of their cultural value. Many unique concrete structures and raw concrete surfaces were painted or plastered, with little attention paid to the appearance of the authentic material. 
Furthermore, this raises a discussion on how to preserve buildings of different typologies: for example, public buildings that are innovative in their aesthetic and spatial solutions, and mass housing areas built using precast concrete panels. It is evident that the most traditional preservation strategy, conservation, is not the only option for preserving post-war concrete architecture, and more options should be considered. The first step in choosing the right strategy in each case is an appropriate assessment of cultural significance. For this reason, an evaluation matrix for post-war concrete architecture is proposed. In one direction, an analysis of different typological groups of buildings is suggested, with the designation of ownership rights; in the other direction, an analysis of traditional value aspects, such as the aesthetic and technological, and of aspects relevant to modern architecture, such as social, economic, and sustainability factors. By examining these parameters together, three relevant scenarios for preserving post-war concrete architecture were distinguished: conservation, renovation, and reuse; they are illustrated using examples of concrete architecture in Lithuania.

Keywords: modern heritage, value aspects, typology, conservation, upgrade, reuse

Procedia PDF Downloads 105
2146 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object’s memberships over all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has therefore been introduced into the fuzzy c-means technique; it adds information that helps solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy.
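The effect of an entropy regularizer on the membership update can be sketched as follows: adding an entropy term to the within-cluster dissimilarity yields the closed-form membership u_ij ∝ exp(-d_ij²/λ). This standard maximum-entropy variant is an illustration of the mechanism, not the paper's exact relative-entropy model:

```python
import numpy as np

def entropy_fcm(X, n_clusters, lam=1.0, n_iter=100, seed=0):
    """Fuzzy clustering with an entropy regularizer: minimizing
    sum_ij u_ij * d_ij^2 + lam * sum_ij u_ij * log(u_ij) under
    sum_j u_ij = 1 gives the closed form u_ij ∝ exp(-d_ij^2 / lam)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        U = np.exp(-d2 / lam)                  # entropy-regularized update
        U /= U.sum(axis=1, keepdims=True)      # memberships sum to one
        centers = (U.T @ X) / U.sum(axis=0)[:, None]
    return centers, U

# Two well-separated blobs of three points each.
X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [10.0, 10.0], [10.5, 10.0], [10.0, 10.5]])
centers, U = entropy_fcm(X, n_clusters=2)
```

The regularization weight λ plays the role of the fuzzifier: large λ spreads memberships toward uniform, while small λ approaches hard k-means assignments.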

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 240
2145 Inertial Spreading of Drop on Porous Surfaces

Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi

Abstract:

The microgravity on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion. These small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the International Space Station (ISS), astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for the high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings long enough to capture the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced consisting of nine kinds of porous capillary plates made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed the advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet’s contact patch with the plate and the height of the droplet over time. 
These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres; the two sets of results were similar. The data obtained from the ISS can serve as a benchmark for further numerical simulations in the field.
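The viscous-resisted capillary draw described above is classically captured, after the inertial phase, by the Lucas-Washburn relation. As a minimal illustration (not the authors' model, and with illustrative parameter values that are not from the study), the imbibed length in a single cylindrical capillary can be sketched as:

```python
import math

def washburn_depth(t, r, gamma, theta_deg, mu):
    """Lucas-Washburn imbibed length L(t) = sqrt(r*gamma*cos(theta)*t / (2*mu))
    for a single cylindrical capillary of radius r. Valid only in the viscous
    regime; the inertial and contact-pinning phases studied on the ISS are
    precisely what this simple law does not capture."""
    return math.sqrt(r * gamma * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))

# Illustrative values: water (gamma = 0.072 N/m, mu = 1e-3 Pa*s) entering a
# hypothetical 50-micron-radius pore with a 30-degree contact angle.
L = washburn_depth(t=0.1, r=50e-6, gamma=0.072, theta_deg=30, mu=1.0e-3)
```

The square-root-in-time growth is the reason viscous resistance eventually dominates: doubling the imbibed length quadruples the time required.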

Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium

Procedia PDF Downloads 107
2144 Association between G2677T/A MDR1 Polymorphism with the Clinical Response to Disease Modifying Anti-Rheumatic Drugs in Rheumatoid Arthritis

Authors: Alan Ruiz-Padilla, Brando Villalobos-Villalobos, Yeniley Ruiz-Noa, Claudia Mendoza-Macías, Claudia Palafox-Sánchez, Miguel Marín-Rosales, Álvaro Cruz, Rubén Rangel-Salazar

Abstract:

Introduction: In patients with rheumatoid arthritis (RA), resistance or poor response to disease-modifying antirheumatic drugs (DMARDs) may reflect an increase in P-glycoprotein (P-gp). The expression of P-gp may be important in mediating the efflux of DMARDs from the cell. In addition, P-gp is involved in the transport of the cytokines IL-1, IL-2, and IL-4 from activated normal lymphocytes to the surrounding extracellular matrix, thus influencing the activity of RA. The involvement of P-gp in the transmembrane transport of cytokines can therefore serve as a modulator of the efficacy of DMARDs. It has been shown that the number of lymphocytes with P-gp activity is increased in patients with RA; therefore, P-gp expression could be related to the activity of RA and could be a predictor of poor response to therapy. Objective: To evaluate, in RA patients, whether the G2677T/A MDR1 polymorphism is associated with differences in the rate of therapeutic response to disease-modifying antirheumatic agents. Material and Methods: A prospective cohort study was conducted. Fifty-seven patients with RA were included. They had active disease according to DAS-28 (score >3.2). We excluded patients receiving biological agents. All patients were followed for 6 months in order to identify the rate of therapeutic response according to the American College of Rheumatology (ACR) criteria. At baseline, peripheral blood samples were taken in order to identify the G2677T/A MDR1 polymorphism using allele-specific PCR. The fragment was identified by electrophoresis in polyacrylamide gels stained with ethidium bromide. For statistical analysis, the genotypic and allelic frequencies of the MDR1 gene polymorphism were determined for responders and non-responders. Chi-square tests, as well as relative risks with 95% confidence intervals (95% CI), were computed to identify differences in the probability of achieving a therapeutic response.
Results: RA patients had a mean age of 47.33 ± 12.52 years, 87.7% were women, and the mean DAS-28 score was 6.45 ± 1.12. At 6 months, the rate of therapeutic response was 68.7%. The observed genotype frequencies were: G/G 40%, T/T 32%, A/A 19%, G/T 7%, and A/A 2%. Patients with the G allele achieved, at 6 months of treatment, a higher rate of therapeutic response assessed by ACR20 than patients with other alleles (p=0.039). Conclusions: Patients with the G allele of the G2677T/A MDR1 polymorphism had a higher rate of therapeutic response at 6 months with DMARDs. These preliminary data support the need for a deeper evaluation of this and other genotypes as factors that may influence the therapeutic response in RA.
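The relative-risk-with-95%-CI analysis mentioned in the Methods can be sketched for a 2x2 responder table using the standard Katz log-scale interval. The counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's data:

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk and 95% CI for a 2x2 table.
    Exposed group: a responders, b non-responders.
    Unexposed group: c responders, d non-responders.
    CI uses the Katz method: exp(ln(RR) +/- z * SE(ln RR))."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts: G-allele carriers (25 responders / 8 non-responders)
# vs. other genotypes (12 responders / 12 non-responders).
rr, ci = relative_risk(25, 8, 12, 12)
```

An RR whose confidence interval excludes 1.0 would indicate a statistically distinguishable difference in response rates between the two allele groups.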

Keywords: pharmacogenetics, MDR1, P-glycoprotein, therapeutic response, rheumatoid arthritis

Procedia PDF Downloads 179
2143 Competing Risks Modeling Using within Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, a homogeneity measure appropriate for event history data with multiple risks is needed. We consider the use of Deviance and modified Cox-Snell residuals as measures of impurity in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which the homogeneity measure was based on the Martingale residual. A data-structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The results for univariate competing risks revealed that using Deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance criterion. Bone marrow transplant data and a double-blinded randomized clinical trial, conducted in order to compare two treatments for patients with prostate cancer, were used to demonstrate the efficiency of our proposed method vis-à-vis the existing ones. Results from empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (Deviance=16.6498) performs better than both the Martingale residual (Deviance=160.3592) and the Deviance residual (Deviance=556.8822) for both the event of interest and the competing risks. Additionally, the results from the prostate cancer data also reveal the superiority of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE=0.01783563) outperformed both the Martingale residual (MSE=0.1853148) and the Deviance residual (MSE=0.8043366). Moreover, these results validate those obtained from the Monte-Carlo studies.
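The core mechanism — treating a per-subject residual as the response and splitting so as to maximize within-node homogeneity — can be sketched with a simple CART-style split search. This is a toy illustration under the assumption of a single covariate and a sum-of-squares impurity, not the authors' implementation:

```python
def best_split(x, res):
    """Pick the threshold on covariate x that minimizes the total within-node
    sum of squared deviations of the residuals (CART-style impurity).
    res: per-subject residuals, e.g. deviance or modified Cox-Snell residuals
    computed from a fitted survival/competing-risks model."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    pairs = sorted(zip(x, res))
    best_impurity, best_thr = float("inf"), None
    # Candidate thresholds: each observed covariate value except the smallest.
    for i in range(1, len(pairs)):
        thr = pairs[i][0]
        left = [r for v, r in pairs if v < thr]
        right = [r for v, r in pairs if v >= thr]
        impurity = sse(left) + sse(right)
        if impurity < best_impurity:
            best_impurity, best_thr = impurity, thr
    return best_thr

# Toy data: residuals shift sharply once x reaches 8, so the split lands there.
split = best_split([1, 2, 3, 8, 9, 10], [0.1, 0.2, 0.1, 1.0, 1.1, 0.9])
```

Swapping the residual definition (Martingale, Deviance, or modified Cox-Snell) changes only the `res` vector fed into the split search, which is what makes the comparison in the abstract possible within one tree-growing framework.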

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 240
2142 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality

Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas

Abstract:

Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring within a small volume, is considered one of the most important biological endpoints of ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often hampered by technical deficiencies that limit its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays 0.3-1 keV/μm, α-particles 116 keV/μm, and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in different human normal (FEP18-11-T1) and cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is applied to two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas.
These results reveal a disparity between theory and experiment and underline the need for more precise and efficient experimental and simulation approaches.
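For a damage endpoint, RBE is conventionally the ratio of the reference-radiation dose to the test-radiation dose producing the same effect; under the assumption of linear dose-response (yield Y = alpha * D), this reduces to a ratio of induction coefficients. A minimal sketch with hypothetical coefficients (not values from the study):

```python
def rbe_from_linear_yields(alpha_test, alpha_ref):
    """For linear dose-responses Y_test = alpha_test * D and
    Y_ref = alpha_ref * D, equal yields occur when
    D_ref / D_test = alpha_test / alpha_ref, which is the RBE."""
    return alpha_test / alpha_ref

# Hypothetical cluster-induction coefficients (lesions per Gy):
# a high-LET test beam inducing 12.0 vs. a reference photon beam inducing 8.0.
rbe = rbe_from_linear_yields(alpha_test=12.0, alpha_ref=8.0)
```

With curved (linear-quadratic) dose-responses the RBE becomes dose-dependent, which is one reason clinical cases such as the proton pencil beam above require explicit simulation rather than a single fixed ratio.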

Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy

Procedia PDF Downloads 284