Search results for: moment magnitude
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1622


1382 Direct-Displacement Based Design for Buildings with Non-Linear Viscous Dampers

Authors: Kelly F. Delgado-De Agrela, Sonia E. Ruiz, Marco A. Santos-Santiago

Abstract:

An approach is proposed for the design of regular buildings equipped with non-linear viscous dissipating devices. The approach is based on a direct-displacement seismic design method which satisfies seismic performance objectives. The global system is formed by regular structural moment frames capable of supporting gravity and lateral loads with elastic response behavior, plus a set of non-linear viscous dissipating devices which reduce the structural seismic response. The dampers are characterized by two design parameters: (1) a positive real exponent α which represents the non-linearity of the damper, and (2) the damping coefficient C of the device, whose constitutive force-velocity law is given by F = Cvᵅ, where v is the velocity between the ends of the damper. The procedure is carried out using a substitute structure. Two limit states are verified: serviceability and near collapse. The reduction of the spectral ordinates by the additional damping assumed in the design process and introduced to the structure by the non-linear viscous dampers is performed according to a damping reduction factor. For the design of the non-linear damper system, the real velocity is considered instead of the pseudo-velocity. The proposed design methodology is applied to an 8-story steel moment frame building equipped with non-linear viscous dampers, located in the intermediate soil zone of Mexico City, with a dominant period Tₛ = 1 s. In order to validate the approach, nonlinear static analyses and nonlinear time history analyses are performed.
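
As a hedged illustration of the constitutive law above, the damper force F = Cvᵅ can be evaluated directly; the parameter values used below are illustrative, not taken from the paper.

```python
import math

def damper_force(v, C, alpha):
    # Constitutive law F = C * |v|**alpha * sign(v); alpha = 1 recovers a
    # linear viscous damper, while alpha < 1 gives the typical non-linear
    # device whose force flattens out at high velocities.
    return math.copysign(C * abs(v) ** alpha, v)
```

With C = 100 and α = 0.5, for instance, halving the velocity from 1.0 to 0.5 reduces the force by only about 29%, which is the force-limiting behavior that makes non-linear dampers attractive.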

Keywords: based design, direct-displacement based design, non-linear viscous dampers, performance design

Procedia PDF Downloads 170
1381 Analysis and Quantification of Historical Drought for Basin Wide Drought Preparedness

Authors: Joo-Heon Lee, Ho-Won Jang, Hyung-Won Cho, Tae-Woong Kim

Abstract:

Drought is a recurrent climatic feature that occurs in virtually every climatic zone around the world. Korea experiences drought at the regional scale almost every year, mainly during the winter and spring seasons, and extremely severe droughts at the national scale occur at a frequency of six to seven years. Various drought indices have been developed as tools to quantitatively monitor different types of drought and are widely used in drought analysis. Since drought is closely related to the climatological and topographic characteristics of the affected areas, basins where droughts occur frequently need separate drought preparedness and contingency plans. In this study, historical droughts in the five major river basins of Korea were analyzed with statistical methods so that drought characteristics could be investigated quantitatively. A further aim was to provide information from which differentiated, basin-level drought preparedness plans can be established. Conventional methods quantify drought by applying various drought indices; however, the evaluation results for the same drought event differ according to the analysis technique, and in particular according to how the severity or duration of the drought is viewed in the evaluation process. Therefore, a drought history was drawn for the five most severely affected major river basins of Korea by investigating a drought magnitude that simultaneously considers severity, duration, and the damaged areas, applying drought run theory to the SPI (Standardized Precipitation Index), which efficiently quantifies meteorological drought. Further, a quantitative analysis of the historical extreme droughts from various viewpoints, such as average severity, duration, and drought magnitude, was attempted.
At the same time, the historical drought events were analyzed quantitatively by estimating return periods from SDF (severity-duration-frequency) curves derived for the five major river basins through a parametric regional drought frequency analysis. The results showed that the extremely severe drought years were 1962, 1988, 1994, and 2014 in the Han river basin; 1982 and 1988 in the Nakdong river basin; 1994 in the Geum river basin; 1988 and 1994 in the Youngsan river basin; and 1988, 1994, 1995, and 2000 in the Seomjin river basin. At the national level, the extremely severe drought years in the Korean Peninsula were 1988 and 1994, and the most damaging droughts were in 1981-1982 and 1994-1995, each lasting longer than two years. The return period of the most severe drought in each river basin turned out to be on the order of 50-100 years.
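
The run-theory extraction of drought events from an SPI series can be sketched as follows. The threshold and the convention that magnitude equals severity divided by duration are common choices, not necessarily the exact definitions used in the study; the sample data are illustrative.

```python
def drought_runs(spi, threshold=-1.0):
    # Run theory on a monthly SPI series: a drought event is a run of
    # consecutive months with SPI below the threshold. For each event,
    # duration = run length, severity = accumulated deficit below the
    # threshold, and magnitude = severity / duration (one common convention).
    events, run = [], []

    def close(run):
        severity = sum(threshold - v for v in run)
        return {"duration": len(run),
                "severity": severity,
                "magnitude": severity / len(run)}

    for x in spi:
        if x < threshold:
            run.append(x)
        elif run:
            events.append(close(run))
            run = []
    if run:                      # series ends inside a drought
        events.append(close(run))
    return events
```

Feeding such per-event durations and severities into a parametric frequency analysis is what yields the SDF curves described above.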

Keywords: drought magnitude, regional frequency analysis, SPI, SDF (severity-duration-frequency) curve

Procedia PDF Downloads 375
1380 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses earthquake catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. Integrating faults into PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults into PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low-strain-rate regions where such data are scarce, this is especially challenging. Including faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may also occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that calculates the earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
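
The slip-rate-to-activity-rate conversion mentioned above can be sketched with one widely used moment-budget argument (not the specific SHERIFS implementation): the annual seismic moment released by the fault is balanced against the moment of the ruptures it is assumed to produce. All numerical values below are illustrative.

```python
def moment_rate_budget(mu, area_m2, slip_rate_m_per_yr):
    # Annual seismic moment budget of the fault:
    # Mdot0 = mu * A * s  (shear modulus x fault area x slip rate)
    return mu * area_m2 * slip_rate_m_per_yr

def moment_from_mw(mw):
    # Hanks-Kanamori relation: M0 [N*m] = 10 ** (1.5 * Mw + 9.1)
    return 10 ** (1.5 * mw + 9.1)

def characteristic_rate(mu, area_m2, slip_rate_m_per_yr, mw):
    # Annual rate of magnitude-mw ruptures if the entire slip-rate
    # budget is spent on earthquakes of that single magnitude.
    return moment_rate_budget(mu, area_m2, slip_rate_m_per_yr) / moment_from_mw(mw)
```

For a 20 km x 15 km fault slipping at 1 mm/yr with mu = 30 GPa, spending the whole budget on Mw 6.5 events gives a rate of roughly 1.3e-3 per year, i.e. a recurrence time of several hundred years; tools such as SHERIFS instead distribute this budget over many single-fault and fault-to-fault ruptures.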

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 29
1379 Failure of Cable Reel Flat Spring of Crane: Beyond Fatigue Life Use

Authors: Urbi Pal, Piyas Palit, Jitendra Mathur, Abhay Chaturvedi, Sandip Bhattacharya

Abstract:

The cable reel drum (CRD) of a hot rolled slab lifting crane failed due to failure of the cable reel flat spring inside the CRD cassette. The CRD is used for the movement of the tong cable. Stereoscopic observation revealed beach marks, and scanning electron microscopy showed striations, confirming a fatigue mode of failure. Chemical analysis showed a C-Mn steel, whereas the composition should have been a spring steel (Cr-Mo-V) as per IS 3431:1982. To find the reason for the fatigue failure, the theoretical fatigue life of the flat spiral spring was calculated. The calculation of the number of fatigue cycles included the bending moment, the maximum stress on the spring, the ultimate tensile strength (UTS), and the alternating stress. The bending moment determination took into account parameters such as Young's modulus, width, thickness, outer diameter, arbor diameter, pay-out length, and angular deflection in rotations. With all the required data, the calculated fatigue life turned out to be 10,000 cycles, but the spring had served 15,000 cycles, which clearly indicated use beyond the fatigue life. Different UTS values were plotted against the number of fatigue cycles, showing that an increase in UTS of 40% increases the fatigue life by 50%. Herein lies the significance of a higher UTS, and a higher UTS depends on a modified chemistry with a properly tempered martensite microstructure. This kind of failure can easily be avoided by changing the crane spring maintenance schedule from 2 years to 1.5 years, considering 600 cycles per month. The plant has changed the replacement schedule of the cable reel spring and procured a new flat reel spring made of 50CrV2 steel.
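
The stress and life estimates described above can be sketched as follows. The section formula sigma = 6M / (b t^2) is the standard result for a rectangular strip in bending; the Basquin-type life relation and all constants below are illustrative assumptions, not the paper's values.

```python
def flat_spring_bending_stress(moment, width, thickness):
    # Maximum bending stress in a flat (rectangular) spring section:
    # sigma = 6 * M / (b * t**2), with SI units throughout.
    return 6.0 * moment / (width * thickness ** 2)

def basquin_cycles(stress_amplitude, sigma_f, b):
    # Basquin high-cycle fatigue relation, sigma_a = sigma_f * (2N)**b,
    # inverted for the number of cycles N. sigma_f (fatigue strength
    # coefficient) and b (fatigue strength exponent, negative) are
    # material constants; the values used in the test are hypothetical.
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)
```

A relation of this form also reproduces the qualitative finding above: raising the fatigue strength coefficient (which scales with UTS) at a fixed stress amplitude increases the predicted life disproportionately.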

Keywords: cable reel spring, fatigue life, stress, spring steel

Procedia PDF Downloads 125
1378 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge

Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti

Abstract:

Structural systems are often subjected to degradation processes due to different kinds of phenomena, such as unexpected loadings, ageing of the materials, and fatigue cycles. This is especially true for bridges, for which safety evaluation is crucial for the purpose of planning maintenance. This paper discusses the experimental evaluation of the stiffness reduction from frequency changes due to a uniform damage scenario. For this purpose, a 1:4 scaled bridge was built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab linked to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies were identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. After that, a set of loading cycles was applied to the bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies of the bridge were identified before and after the occurrence of the damage corresponding to each load step. The behavior of breathing cracks and their effect on the natural frequencies were taken into account in the analytical calculations. By using an exponential function derived from a large number of experimental tests in the literature, it was possible to predict the stiffness reduction from the frequency variation measurements. During the load test, the crack opening and the midspan vertical displacement were also monitored.
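
The frequency-stiffness link exploited above follows from f being proportional to sqrt(k/m) for a structure of unchanged mass; a minimal sketch (the empirical exponential correction for breathing cracks used in the paper is not reproduced here):

```python
def stiffness_ratio(f_damaged, f_undamaged):
    # With the mass unchanged, a natural frequency scales as sqrt(k/m),
    # so the residual stiffness fraction is the squared frequency ratio:
    # k_damaged / k_undamaged = (f_damaged / f_undamaged) ** 2
    return (f_damaged / f_undamaged) ** 2
```

For example, a 10% drop in a measured natural frequency already implies roughly a 19% loss of stiffness, which is why small frequency shifts are a sensitive damage indicator.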

Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis

Procedia PDF Downloads 159
1377 Revitalization of the Chinese Residential at Lasem, Indonesia

Authors: Nurtati Soewarno, Dian Duhita

Abstract:

The civilizations of the past are recognized by the objects they left behind, such as monuments, buildings, or even entire towns. These relics were well designed and made of good-quality materials, so they have persisted over a long period of time. Today, such relics are cultural heritage that must be preserved and whose authenticity must be maintained. Indonesia is a country of various tribes with many cultural heritages; one of them is the city of Lasem. Lasem lies in the northern part of Central Java and has served as a busy harbor city and trading center since the Majapahit kingdom era (13th century). Lasem is one of the settlements of Chinese immigrants in Java, as seen in the dominance of Chinese architectural building styles. The residences were built from the 15th century onwards, and the buildings have courtyards, which distinguishes them from other Chinese buildings in other parts of Java. The city lost ground when trade declined during the Japanese colonial era, a decline that continued after Indonesian independence. Many Chinese residents left Lasem and abandoned their buildings, leaving them empty and unmaintained. This paper presents the results of observations of the Chinese-style buildings in Lasem that still survive today. Using a typo-morphological method, the case studies were chosen based on the type of transformation. The transformations that are occurring are in line with the adaptive reuse concept, as an effort to revitalize the buildings. With this concept, it is expected that the buildings can be given new functions and the glory of historical Lasem can be experienced again. Intervention from the local government in the form of regulations is expected, in the hope that new building functions will not ruin the cultural heritage but instead enhance it.

Keywords: adaptive re-use, brown field area, building transformation, Lasem city

Procedia PDF Downloads 343
1376 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach for gaining insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomic samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast short-read aligners (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle: each read is assigned to the taxon covering the most significantly hit taxa. This approach helps balance sensitivity against running time. The program was tested on both experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones and in some cases even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only at low abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species of the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
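
The LCA assignment step can be sketched as follows, assuming a simple child-to-parent taxonomy map (the taxon names and map structure are hypothetical, not the pipeline's actual data model):

```python
def lowest_common_ancestor(parent, taxa):
    # parent: dict mapping each taxon to its parent (the root maps to itself).
    # taxa: the taxa significantly hit by one read. The read is assigned to
    # the deepest node present in every hit taxon's lineage.
    def lineage(t):
        path = [t]
        while parent[t] != t:
            t = parent[t]
            path.append(t)
        return path

    paths = [lineage(t) for t in taxa]
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    # paths[0] runs leaf-to-root, so the first common node is the deepest one
    for node in paths[0]:
        if node in common:
            return node
```

A read hitting only one species keeps its species-level label, while a read hitting several congeneric species is conservatively pushed up to the genus, which is the sensitivity/specificity trade-off the abstract alludes to.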

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 108
1375 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real time line, so they form a binary signal representing a complete mechanical description of physical reality. The times of these exchanges represent the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real time line in one direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a noninvertible scaling matrix, which change the direction and magnitude of the exchange-event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real time line; these information transformations will be derived as elements of a group that can be associated with their corresponding actual transformations.
The actual and information model of the universe will be derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins its expansion in spacetime. This assumption makes superfluous the existence of Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events of the universe. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these we can predict its past and future events approximately, with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 142
1374 Suitability of Direct Strength Method-Based Approach for Web Crippling Strength of Flange Fastened Cold-Formed Steel Channel Beams Subjected to Interior Two-Flange Loading: A Comprehensive Investigation

Authors: Hari Krishnan K. P., Anil Kumar M. V.

Abstract:

The Direct Strength Method (DSM) is used for computing the design strength of members whose behavior is governed by some form of buckling. DSM-based semiempirical equations have been successfully used for cold-formed steel (CFS) members subjected to compression, bending, and shear. The DSM equations for the strength of a CFS member are based on parameters accounting for strength [yield load (Py), yield moment (My), and shear yield load (Vy) for compression, bending, and shear, respectively] and stability [buckling load (Pcr), buckling moment (Mcr), and shear buckling load (Vcr) for compression, bending, and shear, respectively]. The buckling of columns and beams may be governed by local, distortional, or global buckling modes and their interaction. Recently, DSM-based methods have also been extended to the web crippling strength of CFS beams. Numerous DSM-based expressions have been reported in the literature as functions of the loading case, cross-section shape, and boundary conditions. Unlike for members subjected to axial load, bending, or shear, no unified expression for the design web crippling strength irrespective of the loading case, cross-section shape, and end boundary conditions is available yet. This study, based on nonlinear finite element analysis results, shows that the slenderness of the web, which may be represented either by the web height-to-thickness ratio (h/t) or by Pcr, has a negligible contribution to the web crippling strength. Hence, the results in this paper question the suitability of a DSM-based approach for the web crippling strength of CFS beams.
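
For context, the strength-stability interplay that DSM encodes can be sketched with the AISI-type local-buckling column equation, where slenderness is λ = sqrt(Py/Pcr). This is a sketch of the established compression equation (taking the global strength equal to the yield load, i.e. a fully braced member), not the web-crippling proposals questioned in the paper.

```python
def dsm_local_strength(Py, Pcr):
    # AISI-type DSM local-buckling strength for a fully braced column
    # (nominal global strength taken as Py). Slenderness lam = sqrt(Py/Pcr);
    # stocky sections (lam <= 0.776) reach the squash load, slender ones
    # follow the post-buckling strength curve.
    lam = (Py / Pcr) ** 0.5
    if lam <= 0.776:
        return Py
    r = (Pcr / Py) ** 0.4
    return (1.0 - 0.15 * r) * r * Py
```

The paper's finding that web crippling strength is insensitive to Pcr is notable precisely because, in this template, Pcr drives the entire slender-range strength.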

Keywords: cold-formed steel, beams, DSM-based procedure, interior two flanged loading, web crippling

Procedia PDF Downloads 60
1373 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual-reality-based learning and teaching, including professional skill-building training and acquisition practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. The integration of kinaesthetic vibrotactile haptic perception, along with an actuated step-magnitude contact profiloscopy, into augmented-reality-based learning platforms and professional training can be implemented using carefully calculated and well-coordinated image telemetry, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field-calibration algorithm must be designed for this purpose. Most importantly, in order to actually realise, and interact with, 3D holographic models displayed on a remote screen using remote laser image telemetry and control, all spatio-physical parameters (cardinal alignment, gyroscopic compensation, surface profile, and thermal composition) must be implemented using zero-order, type 1 actuators or transducers, because they provide zero hysteresis, zero backlash, and low dead time, as well as linear, absolutely controllable, intrinsically observable, and smooth performance with the least error compensation, while ensuring the best possible ergonomic comfort for the users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 126
1372 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders

Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai

Abstract:

Due to the numerous advantages of steel corrugated web girders, their field of application is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly greater than that of conventional I-girders with flat webs; thus, the amount of structural steel can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for determining the lateral torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors carried out a thorough investigation of the LTB resistance of corrugated web girders. Finite element (FE) simulations were performed to develop new design formulas for the LTB resistance of trapezoidally corrugated web girders. The FE model was developed using geometrically and materially nonlinear analysis with equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections cover the initial geometric imperfections and the residual stresses coming from rolling, welding, and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary imperfection magnitudes, considering only the first eigenmode shapes. With the help of the validated FE model, an extended parametric study was carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated with the FE model, and the critical moments from the FE calculations were compared to previously proposed analytical calculations. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals were developed for determining the LTB resistance of trapezoidally corrugated web girders through a modification factor applied to the design method for conventional flat web girders.
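
As a baseline for the flat-web design method that the proposed modification factor acts on, the classical elastic critical moment of a doubly symmetric, simply supported I-girder under uniform moment can be computed as below; the section properties in the test are illustrative, and the corrugation effect itself is not modeled here.

```python
import math

def mcr_simply_supported(E, G, Iz, It, Iw, L):
    # Classical elastic LTB critical moment (uniform moment, fork supports):
    # Mcr = (pi^2 E Iz / L^2) * sqrt(Iw/Iz + L^2 G It / (pi^2 E Iz))
    # E, G: elastic and shear moduli; Iz: minor-axis second moment of area;
    # It: torsion constant; Iw: warping constant; L: span. SI units.
    k = math.pi ** 2 * E * Iz / L ** 2
    return k * math.sqrt(Iw / Iz + G * It * L ** 2 / (math.pi ** 2 * E * Iz))
```

Corrugating the web mainly raises the effective torsion and warping contributions, which is why the FE-computed critical moments in the study exceed the flat-web values that this formula returns.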

Keywords: corrugated web, lateral torsional buckling, critical moment, FE modeling

Procedia PDF Downloads 264
1371 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study

Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic

Abstract:

Background: The atypical parkinsonian syndromes (APS) comprise a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and a considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: (a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and (b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with an initial diagnosis of 'Parkinson's disease', 'parkinsonism', 'atypical parkinsonism', or 'parkinsonism plus' made within the first 8 months from the appearance of the first symptom(s). The patients were afterwards followed regularly at 4-6 month intervals, and diagnoses were eventually established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and for 36 patients with probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. The significance of changes in the quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon signed-rank test, and the magnitude of any difference was calculated as an effect size (ES).
Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES), accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and for 14% of the variance in the Mental Health Composite Score (p<0.01). The changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up. The analysis of the magnitude of changes in HRQoL during the one-year follow-up showed a sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for Physical Health, Vitality, Role Emotional, and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
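
The effect-size quantification can be sketched as follows. The abstract does not state which ES convention was used, so this shows one common choice for paired (baseline vs follow-up) data, Cohen's d for paired samples, with illustrative scores.

```python
import statistics

def paired_effect_size(baseline, follow_up):
    # Cohen's d for paired samples: mean of the per-patient differences
    # divided by their (sample) standard deviation. By the usual rule of
    # thumb, 0.50-0.79 is interpreted as a medium effect.
    diffs = [b - f for b, f in zip(baseline, follow_up)]
    return statistics.mean(diffs) / statistics.stdev(diffs)
```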

Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS

Procedia PDF Downloads 280
1370 A New Criterion for Removal of Fouling Deposit

Authors: D. Bäcker, H. Chaves

Abstract:

The key to improving surface cleaning of fouling is an understanding of the mechanism by which the deposit separates from the surface. The authors give basic principles for characterizing the separation process and introduce a corresponding criterion. The developed criterion is a measure of the moment of separation of the deposit from the surface. For this purpose, a new measurement technique is described.

Keywords: cleaning, fouling, separation, criterion

Procedia PDF Downloads 430
1369 Use of Smartwatches for the Emotional Self-Regulation of Individuals with Autism Spectrum Disorder (ASD)

Authors: Juan C. Torrado, Javier Gomez, Guadalupe Montero, German Montoro, M. Dolores Villalba

Abstract:

One of the most challenging aspects of the executive dysfunction of people with Autism Spectrum Disorder is behavior control. This is related to a deficit in their ability to regulate, recognize, and manage their own emotions. Some researchers have developed applications for tablets and smartphones to practice strategies of relaxation and emotion recognition. However, these cannot be applied at the very moment of a temper outburst, anger episode, or anxiety, since they require carrying the device, starting the application, and being helped by caretakers. Also, some of these systems were developed either for obsolete technologies (old versions of tablet devices, PDAs, outdated smartphone operating systems) or for specific devices (self-developed or proprietary ones) that differentiate the users from the rest of the individuals in their context. For this project we selected smartwatches. Focusing on emerging technologies ensures a long lifespan for the developed products, because they are intended to be available at the very moment the technology becomes popular, not later. We also focused our research on commercial smartwatches, since this easily avoids differentiation and thus lowers the users' abandonment rate. We have developed a smartwatch system, along with a smartphone authoring tool, to display self-regulation strategies. These micro-prompting strategies are composed of pictograms, animations, and timers, and they are designed by means of the authoring tool: when the two devices synchronize their data, the smartwatch stores the self-regulation strategies, which are triggered when the smartwatch sensors detect a remarkable rise in heart rate and movement. The system is currently being tested in an educational center for people with ASD in Madrid, Spain.
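
The sensor-driven triggering described above might be sketched as follows; the thresholds, window handling, and function name are hypothetical assumptions for illustration, not the authors' implementation.

```python
def should_trigger(hr_window, accel_window, hr_rise=20.0, accel_rms=1.5):
    # Hypothetical trigger: fire the self-regulation prompt when the heart
    # rate has risen by more than `hr_rise` bpm within the recent window AND
    # movement (RMS acceleration, in arbitrary units) exceeds `accel_rms`.
    rise = hr_window[-1] - min(hr_window)
    rms = (sum(a * a for a in accel_window) / len(accel_window)) ** 0.5
    return rise > hr_rise and rms > accel_rms
```

Requiring both signals at once is one plausible way to avoid prompting on exercise-like movement alone or on heart-rate noise alone.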

Keywords: assistive technologies, emotion regulation, human-computer interaction, smartwatches

Procedia PDF Downloads 269
1368 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycle at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes every 33 milliseconds dump their sensory data into the thalamus every day. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). 
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all simply occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. As Einstein said, there is no universal time.
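The abstract's claim that only unpredictable motion needs to travel through the shared state can be illustrated with a small delta-transmission sketch. Everything below (the prediction rule, threshold, and data) is an illustrative assumption, not the author's implementation:

```python
import random

def transmit_deltas(frames, threshold=1.0):
    """Send the first frame in full, then for each later frame send only
    the samples that deviate from the prediction (here simply the
    previous frame) by more than `threshold`.
    Returns (samples_sent, fraction_saved)."""
    sent = len(frames[0])                 # full reference frame
    total = sum(len(f) for f in frames)
    prev = frames[0]
    for frame in frames[1:]:
        unpredictable = [i for i, (a, b) in enumerate(zip(prev, frame))
                         if abs(a - b) > threshold]
        sent += len(unpredictable)        # only the residuals travel
        prev = frame
    return sent, 1.0 - sent / total

# A mostly static "sensory" stream with one moving sample per frame:
random.seed(0)
frames = [[10.0] * 100 for _ in range(30)]
for f in frames[1:]:
    f[random.randrange(100)] += 5.0
sent, savings = transmit_deltas(frames)   # savings well above 90%
```

When most of the signal is predictable, almost nothing is transmitted, which is the intuition behind the large transmission savings claimed in the keywords.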

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 102
1367 Quantification of Factors Contributing to Wave-In-Deck on Fixed Jacket Platforms

Authors: C. Y. Ng, A. M. Johan, A. E. Kajuputra

Abstract:

The wave-in-deck phenomenon on fixed jacket platforms in shallow-water conditions has been reported as a notable risk to the workability and reliability of the platform. Reduction in reservoir pressure, due to the extraction of hydrocarbons over an extended period of time, has caused seabed subsidence. A platform experiencing subsidence suffers a reduction of its air gap, which eventually allows waves to attack the bottom deck. The wave-in-deck impact generates additional loads on the structure and therefore increases the moment arms. Higher moment arms trigger overturning instability and eventually decrease the reserve strength ratio (RSR) of the structure. The mechanics of wave-in-deck loading, however, are still not well understood and have not been fully incorporated into design codes and standards. Hence, it is necessary to revisit the current design codes and standards for platform design optimization. The aim of this study is to evaluate the effects of wave-in-deck loading on the RSR of four-legged jacket platforms in Malaysia. Base shear values with regard to calibration and modification of the wave characteristics were obtained using SESAM GeniE. Correspondingly, pushover analysis was conducted using USFOS to retrieve the RSR. The effects of the contributing factors, i.e. wave height, wave period and water depth, on the RSR and base shear values were analyzed and discussed. This research is important for optimizing the design life of existing and aging offshore structures. The outcomes of this research are expected to provide a proper evaluation of wave-in-deck mechanics and in return contribute to the current mitigation strategies for managing the issue.
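As a rough illustration of how wave-in-deck loading erodes the RSR, a minimal sketch follows. The base-shear values are hypothetical; the study's actual analyses use SESAM GeniE and USFOS:

```python
def reserve_strength_ratio(collapse_base_shear, design_base_shear):
    """RSR: ultimate base-shear capacity from pushover analysis divided
    by the design environmental base shear."""
    return collapse_base_shear / design_base_shear

# Hypothetical base shears in MN. Wave-in-deck adds load to the design
# case, so the RSR drops even though the platform capacity is unchanged.
capacity = 52.0
design_without_deck_load = 20.0
design_with_deck_load = 26.0   # jacket wave load + wave-in-deck load
rsr_before = reserve_strength_ratio(capacity, design_without_deck_load)
rsr_after = reserve_strength_ratio(capacity, design_with_deck_load)
```

The same capacity divided by a larger design load gives a smaller reserve, which is why subsidence-driven wave-in-deck loading threatens otherwise adequate platforms.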

Keywords: wave-in-deck loads, wave effects, water depth, fixed jacket platforms

Procedia PDF Downloads 405
1366 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC’s (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: (1) stochastic event sets that represent the statistics of past events, (2) hazard attenuation functions that model the local intensity, (3) vulnerability functions that address the repair need of local buildings exposed to the hazard, and (4) a financial module addressing policy conditions that estimates the losses incurred as a result of an event. The events module is comprised of events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need, expressed as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc.
The intensities corresponding to these probabilities (called Characteristic Events, CEs) are selected through a stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.
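The four-segment modular structure described above can be sketched as a toy pipeline. Every function, coefficient, and event below is an illustrative assumption, not KCC's actual model:

```python
# Toy catastrophe-model pipeline in the four-segment style the abstract
# describes: events -> hazard -> vulnerability -> financial.

def hazard(event, site_distance_km):
    """Hazard module: toy attenuation of event intensity with distance."""
    return event["intensity"] / (1.0 + 0.05 * site_distance_km)

def vulnerability(local_intensity):
    """Vulnerability module: damage ratio (repair cost / replacement
    value) as a saturating function of local intensity."""
    return min(1.0, 0.02 * local_intensity ** 2)

def financial(ground_up_loss, deductible, limit):
    """Financial module: apply a simple deductible and limit."""
    return max(0.0, min(ground_up_loss - deductible, limit))

def expected_loss(events, site_distance_km, replacement_value,
                  deductible, limit):
    """Events module drives the pipeline: probability-weighted loss."""
    total = 0.0
    for ev in events:
        inten = hazard(ev, site_distance_km)
        loss = vulnerability(inten) * replacement_value
        total += ev["annual_prob"] * financial(loss, deductible, limit)
    return total

# Stochastic event set: intensities at annual occurrence probabilities
# of interest (e.g. 0.01, 0.004), as in the abstract.
events = [{"intensity": 6.0, "annual_prob": 0.01},
          {"intensity": 7.0, "annual_prob": 0.004}]
aal = expected_loss(events, site_distance_km=10.0,
                    replacement_value=1_000_000.0,
                    deductible=50_000.0, limit=800_000.0)
```

The point of the modular design is that each segment can be recalibrated (e.g. a new attenuation function) without touching the others.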

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 238
1365 Non-Linear Velocity Fields in Turbulent Wave Boundary Layer

Authors: Shamsul Chowdhury

Abstract:

The objective of this paper is to present a detailed analysis of the turbulent wave boundary layer produced by progressive finite-amplitude wave theory. Most previous work on mass transport in the turbulent boundary layer has assumed that the eddy viscosity is not time-varying and that the sediment movement is induced by the mean velocity. Near the ocean bottom, the waves produce a thin turbulent boundary layer, where the flow is highly rotational and the shear stress associated with the fluid motion cannot be neglected. The magnitude and predominant direction of the sediment transport near the bottom are known to be closely related to the flow in the wave-induced boundary layer. The magnitude of the water-particle velocity at the crest phase differs from that at the trough phase due to the non-linearity of the waves, which plays an important role in determining the sediment movement. The non-linearity of the waves becomes predominant in the surf zone, where the sediment movement occurs vigorously. Therefore, in order to describe the flow near the bottom and the relationship between the flow and the movement of the sediment, the analysis was carried out using the non-linear boundary-layer equation, and finite-amplitude wave theory was applied to represent the velocity fields in the turbulent wave boundary layer. First, the calculation was done for the turbulent wave boundary layer using a two-dimensional model in which the calculation is non-linear throughout, while a Stokes second-order wave profile is adopted at the upper boundary. The calculated profile was compared with experimental data. Finally, the calculation was done for various modes of the velocity and turbulent energy. The mean velocity is found to vary with the relative depth and the roughness. It is also found that, due to non-linearity, the absolute values of the velocity and turbulent energy, as well as the Reynolds stress, are asymmetric.
The mean velocity of the laminar boundary layer is always positive, but in the turbulent boundary layer it behaves in a much more complicated manner.
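The crest/trough velocity asymmetry attributed to the Stokes second-order profile can be shown with a one-line sketch. The amplitudes below are illustrative, not values from the paper:

```python
import math

def stokes2_velocity(theta, u1, u2):
    """Free-stream horizontal velocity just outside the wave boundary
    layer for a Stokes second-order wave: a first harmonic plus a
    smaller second harmonic that makes crest and trough asymmetric."""
    return u1 * math.cos(theta) + u2 * math.cos(2 * theta)

u1, u2 = 1.0, 0.2          # illustrative amplitudes, u2 << u1
crest = stokes2_velocity(0.0, u1, u2)        # both harmonics add
trough = stokes2_velocity(math.pi, u1, u2)   # second harmonic opposes
```

Because the second harmonic reinforces the first at the crest and opposes it at the trough, the onshore (crest) velocity magnitude exceeds the offshore (trough) one, which is the asymmetry driving net sediment transport.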

Keywords: wave boundary, mass transport, mean velocity, shear stress

Procedia PDF Downloads 237
1364 Shear Strength of Reinforced Web Openings in Steel Beams

Authors: K. S. Sivakumaran, Bo Chen

Abstract:

The floor beams of steel buildings, cold-formed steel floor joists in particular, often require large web openings, which may affect their shear capacities. A cost-effective way to mitigate the detrimental effects of such openings is to weld/fasten reinforcements. A difficulty associated with an experimental investigation to establish suitable reinforcement schemes for openings in the shear zone is that moment always coexists with shear, and thus it is impossible to create a pure shear state in experiments, resulting in moment-influenced results. However, finite element analysis can be conveniently used to investigate the pure shear behaviour of webs, including webs with reinforced openings. This paper presents the details associated with the finite element analysis of thick/thin plates (representing the web of a hot-rolled steel beam and the web of a cold-formed steel member) having large reinforced openings. The study considered thin simply supported rectangular plates subjected to in-plane shear loading until failure (including post-buckling behaviour). The plate was modelled using geometrically non-linear quadrilateral shell elements and a non-linear stress-strain relationship based on experiments. A Total Lagrangian (TL) large-displacement/small-strain formulation was used for the analysis. The model also considered initial geometric imperfections. This study considered three reinforcement schemes, namely flat, lip, and angle reinforcements. This paper discusses the modelling considerations and presents the results associated with the various reinforcement schemes under consideration. The paper briefly compares the analysis results with the experimental results.

Keywords: cold-formed steel, finite element analysis, opening, reinforcement, shear resistance

Procedia PDF Downloads 256
1363 Effect of Two Types of Shoe Insole on the Dynamics of Lower Extremities Joints in Individuals with Leg Length Discrepancy during Stance Phase of Walking

Authors: Mansour Eslami, Fereshte Habibi

Abstract:

Limb length discrepancy (LLD), or anisomelia, is defined as a condition in which paired limbs are noticeably unequal. Individuals with LLD use compensatory mechanisms during walking to dynamically lengthen the short limb and shorten the long limb, to minimize the displacement of the body's center of mass and consequently reduce the body's energy expenditure. Due to the compensatory movements created, an LLD greater than 1 cm increases the odds of lumbar problems and hip and knee osteoarthritis. Insoles are non-surgical therapies that are recommended to improve the walking pattern, reduce pain, and create greater symmetry between the two lower limbs. However, it is not yet clear what effect insoles have on injury-related variables during walking. The aim of the present study was to evaluate the effect of internal and external heel lift insoles on pelvic kinematics in the sagittal and frontal planes and on lower extremity joint moments in individuals with mild leg length discrepancy during the stance phase of walking. Biomechanical data of twenty-eight men with a structural leg length discrepancy of 10-25 mm were collected while they walked under three conditions: shoes without insoles (SH), with internal heel lift insoles (IHLI) in the shoes, and with external heel lift insoles (EHLI). The tests were performed for both the short and the long leg. Pelvic kinematics and joint moments were measured with a motion capture system and a force plate. Five walking trials were performed for each condition, and the average value of five successful trials was used for further statistical analysis. Repeated measures ANCOVA with Bonferroni post hoc tests were used for between-group comparisons (p ≤ 0.05). In both the internal and external heel lift insole (IHLI, EHLI) conditions, there was a significant decrease in the peak values of the lateral and anterior pelvic tilt of the long leg, the hip and knee moments of the long leg, and the ankle moment of the short leg (p ≤ 0.05).
Furthermore, significant increases in the peak values of the lateral and anterior pelvic tilt of the short leg in the IHLI and EHLI conditions were observed compared to the shoe-only (SH) condition (p ≤ 0.01). In addition, a significant difference was observed between the IHLI and EHLI conditions in the peak anterior pelvic tilt of the long leg and the plantar flexor moment of the short leg (p = 0.04 and p = 0.04, respectively). Our findings indicate that both the IHLI and EHLI can play an important role in controlling excessive pelvic movements in the sagittal and frontal planes in individuals with mild LLD during walking. Furthermore, the EHLI may be more effective than the IHLI in preventing musculoskeletal injuries.

Keywords: kinematic, leg length discrepancy, shoe insole, walking

Procedia PDF Downloads 93
1362 Development of Electronic Waste Management Framework at College of Engineering, Design, Art and Technology

Authors: Wafula Simon Peter, Kimuli Nabayego Ibtihal, Nabaggala Kimuli Nashua

Abstract:

The worldwide use of information and communications technology (ICT) equipment and other electronic equipment is growing, and consequently there is a growing amount of equipment that becomes waste after its time in use. This growth is expected to accelerate, since equipment lifetimes decrease over time while consumption grows. As a result, e-waste is one of the fastest-growing waste streams globally. The United Nations University (UNU) calculates in its second Global E-waste Monitor that 44.7 million metric tonnes (Mt) of e-waste were generated globally in 2016. This research was carried out to investigate the problem of e-waste and come up with a framework to improve e-waste management. The objective of the study was to develop a framework for improving e-waste management at the College of Engineering, Design, Art and Technology (CEDAT). This was broken down into specific objectives, which included establishing the policy and other regulatory frameworks used in e-waste management at CEDAT, determining the effectiveness of the e-waste management practices at CEDAT, establishing the critical challenges constraining e-waste management at the College, and developing a framework for e-waste management. The study population was 80 respondents, from which a sample of 69 respondents was selected using simple and purposive sampling techniques. The study reviewed the e-waste regulatory framework used at the college and then collected data, which was used to come up with a framework. The study also established that a weak policy and regulatory framework, lack of proper infrastructure, improper disposal of e-waste, and a general lack of awareness of e-waste and the magnitude of the problem are the critical challenges of e-waste management. In conclusion, the policy and regulatory framework should be revised, localized and strengthened to contextually address the problem.
Awareness campaigns, the development of proper infrastructure, and extensive research to establish the volumes and magnitude of the problem will come in handy. The study recommends a framework for the improvement of e-waste management.

Keywords: e-waste, treatment, disposal, computers, model, management policy and guidelines

Procedia PDF Downloads 52
1361 The Impact of Electronic Marketing on the Quality Banking Services

Authors: Ahmed Ghalem

Abstract:

This research gathers information about several public and private economic institutions. The information highlights the large and useful role of adopting electronic marketing, which is widespread and easy to use among community members at the local and international levels, which generates large sums of money with little effort and little time, and which also satisfies customers. Despite these advantages, do such practices run the risk of losing large amounts of money in a moment or a short time?

Keywords: economic, finance, bank, development, marketing

Procedia PDF Downloads 57
1360 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of greenhouse gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions in an intra-national framework, with the objective of distributional equity, to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some incipient considerations about which of Brazil’s micro-regions should reduce emissions, when the reductions should be initiated, and what their magnitude should be. We started from the methodological assumption that human development and GHG emissions will evolve in the future as their behavior was observed in the past. Furthermore, we assume that once a micro-region has become developed, it is able to maintain gains in human development without the need to keep growing its GHG emission rates. The human development index and the carbon dioxide equivalent (CO2e) emissions were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce emissions. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010.
On the other hand, less developed micro-regions will be responsible for less impactful reductions; e.g., Vale do Ipanema will emit in 2050 only 10% less than the value observed in 2010. This methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that the cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity when determining individual mitigation targets and also ratified the theoretical and methodological feasibility of allocating a larger share of the contribution to those who historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
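The extrapolation logic described above can be sketched roughly as follows. The HDI threshold, growth rate, and emission figures are illustrative assumptions, not the study's data:

```python
import math

def year_developed(hdi0, annual_gain, threshold=0.8, start=2010):
    """Linear extrapolation of the HDI; returns the first year the
    threshold is reached (a small epsilon guards float round-off)."""
    if hdi0 >= threshold:
        return start
    years = math.ceil((threshold - hdi0) / annual_gain - 1e-9)
    return start + years

def emissions_split(annual_co2e, dev_year, start=2011, end=2050):
    """Split cumulative emissions into pre- and post-development parts,
    assuming a constant annual emission rate (Gt CO2e/year)."""
    before = annual_co2e * max(0, min(dev_year, end) - start + 1)
    total = annual_co2e * (end - start + 1)
    return before, total - before

# A hypothetical micro-region: HDI 0.70 in 2010, gaining 0.005/year,
# emitting 2 Gt CO2e/year.
dev = year_developed(0.70, 0.005)           # development year
before, after = emissions_split(2.0, dev)   # Gt CO2e before/after
```

This mirrors the study's accounting: cumulative emissions are attributed to the pre- or post-development period, and only the post-development share would face reduction targets.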

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 294
1359 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. An earthquake is a natural hazard of random nature that can cause significant financial damage and casualties. This is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment by probabilistic and deterministic methods has been carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a range of 200 km around the north of Tehran (X = 35.74° and Y = 51.37° in the LAT-LONG coordinate system) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at a scale of 1:250,000 are used. In this study, we used Kijko-Sellevoll's method (1992) to estimate the seismicity parameters. The maximum likelihood estimation of earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files is extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that, with a specified probability, may occur during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies and proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated.
The horizontal peak ground accelerations on the seismic bedrock for the 50-, 475-, 950- and 2475-year return periods are 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground accelerations for the same return periods are 0.08g, 0.21g, 0.27g and 0.36g.
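A minimal sketch of the probabilistic side of such an assessment, using the standard Gutenberg-Richter recurrence law with illustrative a and b values (the study itself estimates these parameters with the Kijko-Sellevoll maximum likelihood method):

```python
import math

def annual_rate(m, a=4.0, b=1.0):
    """Gutenberg-Richter recurrence: mean annual number of earthquakes
    with magnitude >= m, from log10 N = a - b*m. a and b are
    illustrative, not the study's estimates."""
    return 10.0 ** (a - b * m)

def exceedance_prob(m, years, a=4.0, b=1.0):
    """Poisson probability of at least one M >= m event in `years`."""
    lam = annual_rate(m, a, b)
    return 1.0 - math.exp(-lam * years)

rate_m6 = annual_rate(6.0)       # events per year with M >= 6
p50 = exceedance_prob(6.0, 50)   # chance of one in a 50-year life
```

Return periods such as 475 years correspond to a chosen exceedance probability over the design life (about 10% in 50 years), which is how the PGA values quoted above are indexed.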

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 44
1358 Software Engineering Revolution Driven by Complexity Science

Authors: Jay Xiong, Li Lin

Abstract:

This paper introduces a new software engineering paradigm based on complexity science, called NSE (Nonlinear Software Engineering paradigm). The purpose of establishing NSE is to help software development organizations double their productivity, halve their costs, and simultaneously increase the quality of their products by several orders of magnitude. NSE complies with the essential principles of complexity science and brings revolutionary changes to almost all aspects of software engineering. NSE has been fully implemented with its support platform, Panorama++.

Keywords: complexity science, software development, software engineering, software maintenance

Procedia PDF Downloads 240
1357 Finite Element-Based Stability Analysis of Roadside Settlements Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8Mw 2015 Barpak, Gorkha, Nepal Earthquake

Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe

Abstract:

This research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the 7.8Mw 2015 Barpak, Gorkha, Nepal earthquake. It includes three major villages of Gorkha, i.e., Barpak, Laprak and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicentral distances of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were, respectively, 7 and 11 kilometers (south-east) from Barpak village, nearer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock was not in Barpak village, as reported until now, but somewhere near Yamagaun village. The chaos experienced during the earthquake in Yamagaun was much greater than in Barpak. In this context, we have carried out a detailed study to investigate the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks and toe failures are the most severe. In this regard, the stability issues of the existing settlements and the proposed road alignment on the Yamagaun village slope are addressed, which is surrounded by many newly activated landslides. Given the importance of this issue, a field survey was carried out to understand the behavior of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope at Profiles 2-2, 3-3 and 4-4 is not safe enough for infrastructure development even under normal soil slope conditions for material models 2, 3 and 4; however, the slope seems quite safe at Profile 1-1 for all 4 material models. The results also indicate that the first three profiles are marginally safe for material models 2, 3 and 4, respectively. Profile 4-4 is not safe enough for any of the 4 material models.
Thus, Profile 4-4 needs special care to make the slope stable.
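The abstract relies on finite element analyses; as a much simpler back-of-the-envelope check of slope safety, the classical factor of safety for a dry infinite slope can be computed as follows (all soil parameters are illustrative, not values from the study):

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety of a dry infinite soil slope:
    FoS = (c + gamma*z*cos^2(beta)*tan(phi)) /
          (gamma*z*sin(beta)*cos(beta))
    c: cohesion (kPa), phi: friction angle (deg),
    gamma: unit weight (kN/m^3), z: slip-surface depth (m),
    beta: slope angle (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fos = infinite_slope_fos(c=10.0, phi_deg=30.0, gamma=18.0, z=3.0,
                         beta_deg=35.0)
```

A factor of safety near 1 marks a marginally safe slope, which corresponds qualitatively to the "marginally safe" profiles reported above.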

Keywords: earthquake, finite element method, landslide, stability

Procedia PDF Downloads 318
1356 Non Performing Asset Variations across Indian Commercial Banks: Some Findings

Authors: Sanskriti Singh, Ankit Tomar

Abstract:

Banks are instruments of growth for a country. Banks mobilize the savings of the public in the form of deposits and channel them as advances for the various activities required for the development of society at large. An advance that remains unpaid for a certain period is called a Non-Performing Asset (NPA) of the bank. The study makes an attempt to bring out the magnitude of NPAs and their impact on profits and advances. An attempt is also made to bring out the challenges NPAs pose to banks, along with suggestions for overcoming and managing NPAs effectively.

Keywords: India, NPAs, private banks, public banks

Procedia PDF Downloads 258
1355 A Reusable Foundation Solution for Onshore Windmills

Authors: Wael Mohamed, Per-Erik Austrell, Ola Dahlblom

Abstract:

Wind farm repowering is a significant topic nowadays. Wind farm repowering means the complete dismantling of the existing turbine, tower and foundation at an existing site and replacing these units with taller and larger units. Modern wind turbines are designed to last approximately 20-25 years. However, a very long design life of 100 years or more can be expected for high-quality concrete foundations. There are therefore significant economic and environmental benefits in replacing an out-of-date wind turbine with a new turbine of better power generation capacity and reusing the foundation. The big difference in lifetimes shows the potential for a new foundation solution that allows wind farms to be updated with taller and larger units in order to increase energy production. This also means a significant change in the design loads on the foundations; therefore, the new foundation solution should be able to handle the additional overturning loads. A raft surrounded by an active stabilisation system is proposed in this study. The active stabilisation system is a novel concept that uses a movable load to stabilise against the overturning moment. It consists of a water tank divided into eight compartments and uses the water as a movable load by pumping it into two compartments to stabilise against the overturning moment. The position of the water depends on the wind direction, and a water-movement system based on a number of electric motors and pipes with electric valves is used. One of the advantages of this active foundation solution is that some cost-efficient adjustments could make the foundation able to support larger and taller units. After the end of the first turbine's lifetime, an option is presented here to reuse the foundation for taller and larger units. This option uses an extra water volume to fill four compartments instead of two.
This extra water volume increases the stabilising moment by 41% compared to using water in two compartments. The geotechnical performance of the new foundation solution is investigated using two existing weak soil profiles in Egypt and Sweden. A comparative study of the new solution and a piled raft with long friction piles is performed using finite element simulations. The results show that using a raft surrounded by an active stabilisation system decreases the tilting compared to a piled raft with friction piles. Moreover, it is found that using a raft surrounded by an active stabilisation system decreases the foundation costs compared to a piled raft with friction piles. In terms of environmental impact, it is found that the new foundation has a beneficial effect on CO2 emissions: it saves roughly 296.1 to 518.21 tonnes of CO2 from the manufacture of concrete if the new foundation solution is used for another turbine lifetime.
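The 41% gain from filling four compartments instead of two is consistent with a simple lever-arm calculation, assuming eight equal compartments arranged in a ring with the wind axis bisecting the filled group (the geometry and masses below are illustrative, not the study's values):

```python
import math

def stabilising_moment(n_filled, mass_per_comp, radius, g=9.81):
    """Stabilising moment about the tower base from water-filled
    compartments of an 8-compartment ring tank. With the wind axis
    bisecting the filled group, compartment centroids sit in pairs at
    +/-22.5, +/-67.5, ... degrees from the axis."""
    pair_angles_deg = [22.5, 67.5, 112.5, 157.5]
    moment = 0.0
    for k in range(n_filled // 2):
        arm = radius * math.cos(math.radians(pair_angles_deg[k]))
        moment += 2 * mass_per_comp * g * arm   # one pair per angle
    return moment

m2 = stabilising_moment(2, mass_per_comp=50e3, radius=8.0)
m4 = stabilising_moment(4, mass_per_comp=50e3, radius=8.0)
increase = m4 / m2 - 1.0   # ~0.41, consistent with the 41% in the text
```

Under this geometry the ratio m4/m2 equals (cos 22.5° + cos 67.5°)/cos 22.5° = √2, i.e. a 41% increase, matching the figure quoted in the abstract.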

Keywords: active stabilisation system, CO2 emissions, FE analysis, reusable, weak soils

Procedia PDF Downloads 192
1354 Investigation of the Corroded Steel Beam

Authors: Hesamaddin Khoshnoodi, Ahmad Rahbar Ranji

Abstract:

Corrosion in steel structures is one of the most important issues to be considered in design and construction. Corrosion reduces the cross-section and load capacity of an element and leads to costly damage to structures. In this paper, corrosion has been modeled under moment-induced stresses, and the steel beam has been modeled using the ABAQUS advanced finite element software. The conclusions of this study demonstrate that the displacement of the analyzed composite steel girder bridge may increase.

Keywords: Abaqus, corrosion, deformation, steel beam

Procedia PDF Downloads 321
1353 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience

Authors: Al-Amin, Huanjun Jiang, Anayat Ali

Abstract:

Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure/collapse during seismic events, producing a large amount of debris that obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses finite element (FEM) simulation to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment-frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, primarily designed for gravity loads and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until the total collapse of the tested building in order to evaluate the failure modes under various seismic events. Four types of collapse direction were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more predominant than skewed collapse. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking-table test. The presented results can be utilized to simulate the interdependency between the debris generated from the collapse of seismic-prone buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.

Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network

Procedia PDF Downloads 58