Search results for: stochastic geometry
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1539


1149 Effect of Class V Cavity Configuration and Loading Situation on the Stress Concentration

Authors: Jia-Yu Wu, Chih-Han Chang, Shu-Fen Chuang, Rong-Yang Lai

Abstract:

Objective: This study examined the stress distribution of a tooth with different class V restorations under different loading situations and geometries by 3D finite element (FE) analysis. Methods: A series of FE models of mandibular premolars containing class V cavities was constructed using micro-CT. The class V cavities were assigned as combinations of cavity depth x occlusal-gingival height: 1x2, 1x4, 2x2, and 2x4 mm. Three alveolar bone loss conditions were examined: 0, 1, and 2 mm. A 200 N force was exerted on the buccal cusp tip in various directions (vertical, V; obliquely angled at 30°, O; oblique and parallel to the individual occlusal cavity wall, P). A 3D FE analysis was performed, and the von Mises stress was used to summarize the data of stress distribution and maximum stress. Results: The maximal stress did not vary with alveolar bone height. For each geometry, the maximal stress was found at the bilateral corners of the cavity. The peak stress of the restorations was significantly higher under load P compared to loads V and O, while the latter two were similar. The 2x2 mm cavity exhibited significantly increased (2.88-fold) stress under load P compared to load V, followed by 1x2 mm (2.11-fold), 2x4 mm (1.98-fold) and 1x4 mm (1.1-fold). Conclusion: Load direction has the greatest impact on the stress results, while the effect of alveolar bone loss is minor. A load direction parallel to the cavity wall may enhance the stress concentration, especially in deep and narrow class V cavities.

Keywords: class V restoration, finite element analysis, loading situation, stress

Procedia PDF Downloads 227
1148 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking are well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement in 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, the receiver operating characteristic (ROC) curves were drawn, and they showed robustness from this aspect as well. 3D watermarking is still a young field, but a promising one.
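
As a rough illustration of the norm-variance embedding idea described above, the following Python sketch displaces vertices radially so that the spread of the vertex norms inside each bin encodes one watermark bit. The equal-population binning and the embedding strength alpha are assumptions made here for illustration; the paper's optimized, distortion-minimizing procedure is not reproduced.

```python
import numpy as np

def embed_watermark(vertices, bits, alpha=0.05):
    """Illustrative norm-variance embedding: one watermark bit per bin of vertex norms.

    vertices : (N, 3) array of mesh vertex coordinates
    bits     : sequence of 0/1 watermark bits (one bin per bit)
    alpha    : embedding strength (fractional change of the spread inside a bin)
    """
    bits = list(bits)
    center = vertices.mean(axis=0)                      # object center
    radial = vertices - center
    norms = np.maximum(np.linalg.norm(radial, axis=1), 1e-12)  # guard zero norms

    # Equally populated bins over the sorted norms (one bin per watermark bit).
    bins = np.array_split(np.argsort(norms), len(bits))

    new_norms = norms.copy()
    for bit, idx in zip(bits, bins):
        mu = norms[idx].mean()
        scale = 1.0 + alpha if bit else 1.0 - alpha     # bit 1: widen spread, bit 0: shrink it
        new_norms[idx] = mu + (norms[idx] - mu) * scale

    # Move every vertex radially so that it attains its modified norm.
    return center + radial / norms[:, None] * new_norms[:, None]
```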

Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing

Procedia PDF Downloads 143
1147 Synthesis, Characterization, Theoretical Crystal Structures and Antitubercular Activity Study of (E)-N'-(2,4-Dihydroxybenzylidene) Nicotinohydrazide and Some of Its Metal Complexes

Authors: Ogunniran Kehinde Olurotimi, Adekoya Joseph, Ehi-Eromosele Cyril, Mehdi Shihab, Mesubi Adediran, Tadigoppula Narender

Abstract:

Nicotinic acid hydrazide and 2,4-dihydroxybenzaldehyde were condensed at 20°C to form an acylhydrazone (H3L) with an ONO coordination pattern. The structure of the acylhydrazone was elucidated by using a CHN analyzer, ESI mass spectrometry, IR, 1H NMR, 13C NMR and 2D NMR techniques such as COSY and HSQC. Thereafter, five novel metal complexes [Mn(II), Fe(II), Pt(II), Zn(II) and Pd(II)] of the hydrazone ligand were synthesized, and their structural characterization was achieved by several physicochemical methods, namely elemental analysis, electronic spectra, infrared, EPR, molar conductivity and powder X-ray diffraction studies. Structural geometries of some of the compounds were supported by using the HyperChem-8 program for molecular mechanics and semi-empirical calculations. The stability energy (E) and electron potentials (eV) of the frontier molecules were calculated by using the PM3 method. An octahedral geometry was suggested for both the Pd(II) and Zn(II) complexes, while both the Mn(II) and Fe(II) complexes conformed to a tetrahedral pyramidal geometry; the Pt(II) complex, however, agreed with a tetrahedral geometry. The in vitro antitubercular activity of the ligand and the metal complexes was evaluated against Mycobacterium tuberculosis H37Rv by using the micro-dilution method. The results obtained revealed that (PtL1) (MIC = 0.56 µg/mL), (ZnL1) (MIC = 0.61 µg/mL), (MnL1) (MIC = 0.71 µg/mL) and (FeL1) (MIC = 0.82 µg/mL) exhibited significant activity when compared with first-line drugs such as isoniazid (INH) (MIC = 0.9 µg/mL). H3L1 exhibited lower antitubercular activity, with an MIC value of 1.02 µg/mL. However, the metal complexes displayed higher cytotoxicity but were not significantly different (P < 0.05) from the isoniazid drug.

Keywords: hydrazones, electron spin resonance, thermogravimetric, powder X-ray diffraction, antitubercular agents

Procedia PDF Downloads 243
1146 Isolated Iterating Fractal Independently Corresponds with Light and Foundational Quantum Problems

Authors: Blair D. Macdonald

Abstract:

Nearly one hundred years after its origin, foundational quantum mechanics remains one of the greatest unexplained mysteries for physicists today. Within this time, chaos theory and its geometry, the fractal, have developed. In this paper, the propagation behaviour under iteration of a simple fractal, the Koch snowflake, is described and analysed. From an arbitrary observation point within the fractal set, the fractal propagates forward by oscillation (the focus of this study) and retrospectively behind by exponential growth from a point beginning. It propagates a potentially infinite exponential oscillating sinusoidal wave of discrete triangle bits sharing many characteristics of light and quantum entities. The model's wave speed is potentially constant, offering insights into the perception and direction of time, where, to an observer travelling at the frontier of propagation, time may slow to a stop. In isolation, the fractal is a superposition of component bits where position and scale present a problem of location. In reality, this problem is experienced within fractal landscapes or fields where 'position' is only 'known' by the addition of information or markers. The quantum 'measurement problem', 'uncertainty principle', 'entanglement', and the classical-quantum interface are addressed; these are problems of scale invariance associated with isolated fractality. Dual forward and retrospective perspectives of the fractal model offer the opportunity for unification between quantum mechanics and cosmological mathematics, observations, and conjectures. Quantum and cosmological problems may be different aspects of the one fractal geometry.
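
To make the exponential growth described above concrete, here is a minimal Python sketch of the per-iteration bookkeeping for the Koch snowflake boundary (the segment count quadruples and the segment length shrinks by a third at every step); it only tabulates the growth, not the oscillating-wave interpretation developed in the paper.

```python
# Per-iteration growth of the Koch snowflake boundary (unit-side initiator).
def koch_growth(iterations):
    segments, length = 3, 1.0          # initial triangle: 3 sides of length 1
    for n in range(iterations + 1):
        yield n, segments, segments * length
        segments *= 4                  # each segment spawns 4 smaller ones
        length /= 3                    # each new segment is 1/3 as long

for n, segs, perimeter in koch_growth(6):
    print(f"iteration {n}: {segs} segments, perimeter = {perimeter:.3f}")
```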

Keywords: measurement problem, observer, entanglement, unification

Procedia PDF Downloads 69
1145 Consideration of Uncertainty in Engineering

Authors: A. Mohammadi, M. Moghimi, S. Mohammadi

Abstract:

Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques that take uncertainty into account should be used to control and minimize the risk associated with design and operation. In order to consider uncertainty in an engineering problem, the optimization problem should be solved for a suitable range of each uncertain input variable instead of just one estimated point. With a deterministic optimization formulation, a large computational burden is required to consider every possible and probable combination of uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.
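
As a minimal sketch of one of the approaches listed in the keywords (Monte Carlo simulation), the following Python snippet propagates assumed input distributions through a placeholder performance model and reports the spread of the response; the model and the distributions are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(x1, x2):
    # Hypothetical engineering response; a real study would use its own model.
    return 3.0 * x1 - 0.5 * x2 ** 2

# Uncertain inputs described by probability distributions instead of point values.
x1 = rng.normal(loc=10.0, scale=0.8, size=100_000)
x2 = rng.uniform(low=1.5, high=2.5, size=100_000)

y = performance(x1, x2)
print(f"mean = {y.mean():.2f}, std = {y.std():.2f}")
print(f"5th-95th percentile band = ({np.percentile(y, 5):.2f}, {np.percentile(y, 95):.2f})")
```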

Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method

Procedia PDF Downloads 388
1144 Determination of Aquifer Geometry Using Geophysical Methods: A Case Study from Sidi Bouzid Basin, Central Tunisia

Authors: Dhekra Khazri, Hakim Gabtni

Abstract:

Because of the overexploitation of the Sidi Bouzid water table, this study aims at integrating geophysical methods to determine the aquifers' geometry and to assess their geological situation and geophysical characteristics. In highly tectonic zones controlled by Atlassic structural features with major NE-SW directions (central Tunisia), however, the Bouguer gravimetric responses of some areas can be so dominated by the regional structural trend that they remain unidentified or are defectively interpreted, as in the case of the Sidi Bouzid basin. This issue required the elaboration of a residual gravity anomaly isolating the Sidi Bouzid basin gravity response, ranging between -8 and -14 mGal and crucial for characterizing its aquifer geometry. Several gravity techniques helped construct the Sidi Bouzid basin's residual gravity anomaly, such as upward continuation, compared with polynomial regression trends, and power spectrum analysis, which detected deep basement sources (3 km), intermediate sources (2 km) and shallow sources (1 km). A 3D Euler deconvolution was also performed, detecting the deepest accidents, trending NE-SW, N-S and E-W with depth values reaching 5500 m, and delineating the main outcropping structures of the study area. Further gravity treatments highlighted the subsurface geometry and structural features of the Sidi Bouzid basin through the horizontal and vertical gradients, as well as filters based on them, such as the tilt angle and the source edge detector, which locate rooted edges or peaks from potential field data; these detected a new E-W lineament compartmentalizing the Sidi Bouzid gutter into two unequal residual-anomaly and subsiding domains. This subsurface morphology is also detected by the 2D seismic reflection sections used, which define the Sidi Bouzid basin as a deep gutter within a tectonic set of negative flower structures and collapsed and tilted blocks. Furthermore, these structural features were confirmed by a forward gravity modeling process over several modeled residual gravity profiles crossing the main area. The Sidi Bouzid basin (central Tunisia) is also of great interest because of the unknown total thickness and undefined substratum of its siliciclastic Tertiary package, and the unbounded structural subsurface features and deep accidents of its aquifers. The combination of geological, hydrogeological and geophysical methods is therefore essential. The integration of geophysical methods based on a gravity survey supported the available seismic data through forward gravity modeling, enhanced the definition of the lateral and vertical extent of the basin's complex sedimentary fill via 3D gravity models, improved depth estimation through a depth-to-basement modeling approach, and provided 3D isochronous seismic mapping of the basin's Tertiary complex, refining its geostructural schema. A subsurface basin geomorphology map, based on the matching between the basin's residual gravity map and the calculated theoretical signature map, was also displayed over the modeled residual gravity profiles. A complete multidisciplinary geophysical study of the Sidi Bouzid basin aquifers can be accomplished via an aeromagnetic survey and 4D microgravity reservoir monitoring, offering temporal tracking of the target aquifer's subsurface fluid dynamics and enhancing and rationalizing future groundwater exploitation in this arid area of central Tunisia.

Keywords: aquifer geometry, geophysics, 3D gravity modeling, improved depths, source edge detector

Procedia PDF Downloads 262
1143 Simulation of a Three-Link, Six-Muscle Musculoskeletal Arm Activated by Hill Muscle Model

Authors: Nafiseh Ebrahimi, Amir Jafari

Abstract:

The study of humanoid characters is of great interest to researchers in the fields of robotics and biomechanics. One might want to know the forces and torques required to move a limb from an initial position to a desired destination position. Inverse dynamics is a helpful method to compute the forces and torques for an articulated body limb; it enables us to know the joint torques required to rotate a link between two positions. Our goal in this study was to control a human-like articulated manipulator for a specific path-tracking task. For this purpose, the human arm was modeled as a three-link planar manipulator activated by the Hill muscle model. Applying a proportional controller, the values of the forces and torques applied to the joints were calculated by inverse dynamics, and then the joint and muscle force trajectories were computed and presented. More precisely, the kinematics of the muscle-joint space was first formulated, defining the relationship between the muscle lengths and the geometry of the links and joints. Second, the kinematics of the links was introduced to calculate the position of the end-effector in terms of the geometry. We then considered the modeling of the Hill muscle dynamics and, after calculating the joint torques, applied them to the dynamics of the three-link manipulator obtained from the inverse dynamics in order to calculate the joint states and to find and control the location of the manipulator's end-effector. The results show that the human arm model was successfully controlled to follow the designated elliptical path precisely.
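
As a hedged sketch of the Hill-type muscle formulation mentioned above, the snippet below combines a generic active force-length curve, a Hill force-velocity relation, and a passive elastic term; the functional shapes and every parameter value (F_max, optimal length, maximum shortening velocity) are illustrative assumptions, not the authors' identified arm parameters.

```python
import numpy as np

def hill_muscle_force(a, lm, vm, f_max=1000.0, l_opt=0.08, v_max=0.8):
    """Generic Hill-type muscle force sketch (illustrative parameters only).

    a      : activation in [0, 1]
    lm     : current fibre length [m]
    vm     : shortening velocity [m/s] (positive = shortening)
    f_max  : maximum isometric force [N]        (assumed value)
    l_opt  : optimal fibre length [m]           (assumed value)
    v_max  : maximum shortening velocity [m/s]  (assumed value)
    """
    l_norm = lm / l_opt
    v_norm = vm / v_max

    # Active force-length: Gaussian centred on the optimal length.
    f_l = np.exp(-((l_norm - 1.0) / 0.45) ** 2)
    # Force-velocity: classic Hill hyperbola for shortening, clipped to a plausible range.
    f_v = np.clip((1.0 - v_norm) / (1.0 + 4.0 * v_norm), 0.0, 1.8)
    # Passive elastic element that engages beyond the optimal length.
    f_p = np.maximum(0.0, 0.02 * (np.exp(6.0 * (l_norm - 1.0)) - 1.0))

    return f_max * (a * f_l * f_v + f_p)

print(hill_muscle_force(a=0.6, lm=0.085, vm=0.1))   # force in N for one assumed state
```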

Keywords: arm manipulator, Hill muscle model, six-muscle model, three-link model

Procedia PDF Downloads 117
1142 Cell-Cell Interactions in Diseased Conditions Revealed by Three Dimensional and Intravital Two Photon Microscope: From Visualization to Quantification

Authors: Satoshi Nishimura

Abstract:

Although much information has been garnered from the genomes of humans and mice, it remains difficult to extend that information to explain physiological and pathological phenomena. This is because the processes underlying life are by nature stochastic and fluctuate with time. Thus, we developed a novel "in vivo molecular imaging" method based on single- and two-photon microscopy. We visualized and analyzed many life phenomena, including common adult diseases. We integrated the knowledge obtained and established new models that will serve as the basis for new minimally invasive therapeutic approaches.

Keywords: two photon microscope, intravital visualization, thrombus, artery

Procedia PDF Downloads 352
1141 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled by LES and the deforming vessel. The information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, and the boundary condition of the current time step depended on the previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study. The axial velocity at the normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.
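
For reference, the eddy viscosity at the heart of a Smagorinsky-type subgrid model is nu_t = (C_s * Delta)^2 * |S|, with |S| built from the resolved strain-rate tensor. The sketch below evaluates the static form for a single cell with made-up velocity gradients; the paper uses the dynamic variant, in which the coefficient is computed locally rather than fixed.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Static Smagorinsky eddy viscosity for one cell (illustrative values).

    grad_u : 3x3 array of resolved velocity gradients dU_i/dx_j [1/s]
    delta  : filter width, e.g. cube root of the cell volume [m]
    c_s    : Smagorinsky constant (fixed here; a dynamic model computes it on the fly)
    """
    s_ij = 0.5 * (grad_u + grad_u.T)              # resolved strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s_ij * s_ij))    # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * s_mag

grad_u = np.array([[100.0,  20.0,  0.0],
                   [ 10.0, -80.0,  5.0],
                   [  0.0,   5.0, -20.0]])        # made-up gradients, 1/s
print(smagorinsky_nu_t(grad_u, delta=2.0e-4))     # eddy viscosity in m^2/s
```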

Keywords: large eddy simulation, fluid-structure interaction, constricted artery, computational fluid dynamics

Procedia PDF Downloads 272
1140 Stem Cell Fate Decision Depending on TiO2 Nanotubular Geometry

Authors: Jung Park, Anca Mazare, Klaus Von Der Mark, Patrik Schmuki

Abstract:

In the clinical application of TiO2 implants for tooth and hip replacement, the migration, adhesion and differentiation of neighboring mesenchymal stem cells onto implant surfaces are critical steps for successful bone regeneration. Over the past decade, increasing attention has been paid to nanoscale electrochemical surface modifications of the TiO2 layer for improving bone-TiO2 surface integration. We generated, on titanium surfaces, self-assembled layers of vertically oriented TiO2 nanotubes with defined diameters between 15 and 100 nm, and here we show that mesenchymal stem cells finely sense the TiO2 nanotubular geometry and quickly decide their cell fate, either differentiating into osteoblasts or undergoing programmed cell death (apoptosis) on the TiO2 nanotube layers. These cell fate decisions critically depend on nanotube size differences (15-100 nm in diameter), sensed through integrin clustering. We further demonstrate that nanoscale topography sensing is feasible not only in mesenchymal stem cells but rather seems to be a generalized nanoscale microenvironment-cell interaction mechanism in several cell types composing the bone tissue network, including osteoblasts, osteoclasts, endothelial cells and hematopoietic stem cells. Additionally, we discuss the synergistic effect of simultaneous stimulation by nanotube-bound growth factors and nanoscale topographic cues on enhanced bone regeneration.

Keywords: TiO2 nanotube, stem cell fate decision, nano-scale microenvironment, bone regeneration

Procedia PDF Downloads 413
1139 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of the imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper provides a solution to sustainable sanitation with the development of an innovative toilet system, called Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. The particular technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) has been conducted in SimaPro software employing the Ecoinvent v3.3 database. The particular study has determined the most contributory factors to the environmental footprint of the NMT system. However, as sensitivity analysis has identified certain critical operating parameters for the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, have been conducted for the input parameters of raw material, produced electricity, NOX emissions, amount of ash and transportation of fertilizer. The given analysis has provided the distribution and the confidence intervals of the selected impact categories and, in turn, more credible conclusions are drawn on the respective LCIA (Life Cycle Impact Assessment) profile of NMT system. Last but not least, the specific study will also yield essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.

Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 206
1130 Influence of Specimen Geometry (10*10*40), (12*12*60) and (5*20*120) on the Determination of Concrete Toughness (Measurement of the Critical Stress Intensity Factor): A Comparative Study

Authors: M. Benzerara, B. Redjel, B. Kebaili

Abstract:

The cracking of concrete is an increasingly crucial problem with the development of the complex structures brought by technological progress. Advances in the knowledge of the fracture process today make better prevention of fracture risk possible. The resistance to brittle fracture of a quasi-brittle material like concrete, called toughness, is measured by the critical value of the stress intensity factor K1C at which the crack propagates; it is an intrinsic property of the material. Many studies of concrete reported in the literature were carried out on specimens that are in fact inadequate with respect to the intrinsic characteristic to be identified. Starting from this observation, a programme was set up to compare the evolution of the toughness parameter K1C measured on ordinary concrete specimens of three different prismatic geometries, (10*10*40) cm³, (12*12*60) cm³ and (5*20*120) cm³, containing side notches of various depths simulating cracks. The notches were produced using triangular pyramidal plates manufactured from coated sheet metal, placed at the center of the specimens at the time of casting and then withdrawn to leave the trace of a crack. The tests were carried out in three-point bending in fracture mode I, using fracture mechanics techniques. The toughness parameter K1C measured with the three specimen geometries gives almost the same results. They are acceptable and fall within the range of results determined by various researchers (the toughness of ordinary concrete is around 1 MPa·√m). These results also point to an economy offered by the (5*20*120) cm³ geometry, and therefore to the use of plate specimens in future work if one wants to master the toughness of this complex, surprising but always essential material that is concrete.
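
For orientation, a generic three-point-bend estimate of the critical stress intensity factor has the form K_IC = Y * sigma_nom * sqrt(pi * a), with sigma_nom the nominal bending stress at the notch plane. The sketch below evaluates it with made-up specimen numbers and a placeholder geometry factor Y; a real evaluation would use the calibrated geometry function for the chosen specimen.

```python
import math

def k1c_three_point_bend(P, S, B, W, a, Y):
    """Hedged estimate of K_IC for a side-notched beam in three-point bending.

    P : failure load [N]        S : support span [m]
    B : specimen thickness [m]  W : specimen depth [m]
    a : notch depth [m]         Y : dimensionless geometry factor (depends on a/W)
    """
    sigma_nom = 3.0 * P * S / (2.0 * B * W ** 2)   # nominal bending stress [Pa]
    return Y * sigma_nom * math.sqrt(math.pi * a)  # [Pa * sqrt(m)]

# Illustrative numbers only (not taken from the paper):
k = k1c_three_point_bend(P=4000, S=0.36, B=0.10, W=0.10, a=0.03, Y=1.12)
print(f"K_IC = {k / 1e6:.2f} MPa*sqrt(m)")
```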

Keywords: concrete, fissure, specimen, toughness

Procedia PDF Downloads 282
1137 Global Stability of Nonlinear Itô Equations and N. V. Azbelev's W-Method

Authors: Arcady Ponosov., Ramazan Kadiev

Abstract:

This work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (W-method) for the analysis of various types of stability of such systems, based on the choice of auxiliary equations and applications of the theory of positive invertible matrices, is proposed and justified. The development of this method for deterministic functional differential equations is due to N. V. Azbelev and his students. Sufficient conditions for the moment stability of solutions, in terms of the coefficients, are given for sufficiently general as well as specific classes of Itô equations.

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 196
1136 The Reason Why Al-Kashi’s Understanding of Islamic Arches Was Wrong

Authors: Amin Moradi, Maryam Moeini

Abstract:

It is a widely held view that Ghiyath al-Din Jamshid-e-Kashani, also known as al-Kashi (1380-1429 CE), was the first to play a significant role in the interaction between mathematicians and architects by introducing theoretical knowledge into Islamic architecture. In academic discourse, the geometric rules extracted from his splendid volume titled Key of Arithmetic have been uncritically believed by historians of architecture to describe the whole process of arch design throughout Islamic buildings. His theories tried to solve the fundamental problem of structural design and to understand what makes an Islamic structure safe or unsafe. As a result, al-Kashi arrived at the conclusion that a safe state of equilibrium is achieved through a specific geometry as a rule. This paper reassesses the stability of al-Kashi's systematized principal forms to evaluate the logic of his hypothesis, with a special focus on large spans. Besides the empirical experience of the author in masonry construction, a finite element approach was adopted, considering current standards, in order to get a better understanding of the validity of the geometric rules proposed by al-Kashi for the equilibrium conditions of Islamic masonry arches and vaults. The state of damage of his reference arches under loading conditions confirms beyond any doubt that the geometrical configurations derived from his treatises present some serious operational limits and do not go further than individualized mathematical hypotheses. Therefore, the nature of his mathematical studies regarding Islamic arches is in complete contradiction with the practical knowledge of construction methodology.

Keywords: Jamshid al-Kashani, Islamic architecture, Islamic geometry, construction equilibrium, collapse mechanism

Procedia PDF Downloads 109
1135 Determination of ILSS of Composite Materials Using Micromechanical FEA Analysis

Authors: K. Rana, H.A.Saeed, S. Zahir

Abstract:

Interlaminar shear stress (ILSS) is a key parameter quantifying the properties of composite materials. These properties determine the suitability of a material for a specific purpose, such as aerospace or automotive applications. A modelling approach for the determination of ILSS is presented in this paper. Geometric modelling of the composite material is performed in the TexGen software, where the reinforcement, the cured matrix and their interfaces are modelled separately according to the actual geometry. The mechanical properties of the matrix and the reinforcement are modelled separately, which incorporates the anisotropy of the real-world composite material. The ASTM D2344 test is modelled in ANSYS for ILSS. In a macroscopic analysis, the model approximates the anisotropy of the material and uses orthotropic properties obtained by applying homogenization techniques; the shear stress analysis in that case does not capture the actual real-world scenario but rather approximates it. In this paper, the actual geometry and properties of the reinforcement and matrix are modelled to capture the actual stress state during the testing of samples as per the ASTM standard. Testing of samples is also performed in order to validate the results. The fibre volume fraction of the yarn is determined by image analysis of the manufactured samples. The fibre volume fraction data are incorporated into the numerical model to correct the transversely isotropic properties of the yarn. A comparison between experimental and simulated results is presented.
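
For context, the short-beam test of ASTM D2344 reduces to a simple closed-form strength estimate, F_sbs = 0.75 * P_max / (b * h); the sketch below evaluates it for an illustrative specimen (the load and cross-section values are placeholders, not the paper's measurements).

```python
def short_beam_strength(p_max, width, thickness):
    """Short-beam (interlaminar shear) strength per the ASTM D2344 relation
    F_sbs = 0.75 * P_max / (b * h); with N and mm the result is in MPa."""
    return 0.75 * p_max / (width * thickness)

# Illustrative specimen: 1.8 kN failure load, 6.4 mm x 3.2 mm cross-section.
print(f"ILSS = {short_beam_strength(1800.0, 6.4, 3.2):.1f} MPa")
```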

Keywords: ILSS, FEA, micromechanical, fibre volume fraction, image analysis

Procedia PDF Downloads 347
1134 Handshake Algorithm for Minimum Spanning Tree Construction

Authors: Nassiri Khalid, El Hibaoui Abdelaaziz, Hajar Moha

Abstract:

In this paper, we introduce and analyse a probabilistic distributed algorithm for the construction of a minimum spanning tree on a network. This algorithm is based on the handshake concept. Initially, each network node is considered a sub-spanning tree, and at each round of the execution of our algorithm, sub-spanning trees are merged. The execution continues until all sub-spanning trees are merged into one. We analyse this algorithm by means of a stochastic process.
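
The following Python sketch illustrates the round-by-round merging idea in a sequential, simplified form: each component proposes its cheapest outgoing edge, and two components that propose each other "shake hands" and merge. It is only an illustration under the assumption of distinct edge weights (which guarantees progress); the paper's algorithm is distributed and analysed probabilistically, which this sketch does not reproduce.

```python
def handshake_mst(n, edges):                      # edges: list of (weight, u, v)
    parent = list(range(n))

    def find(x):                                  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    while len(mst) < n - 1:
        best = {}                                 # component root -> (w, u, v, other root)
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if ru not in best or w < best[ru][0]:
                best[ru] = (w, u, v, rv)
            if rv not in best or w < best[rv][0]:
                best[rv] = (w, u, v, ru)
        for r, (w, u, v, other) in list(best.items()):
            # Handshake: the two components chose each other and are still separate.
            if other in best and best[other][3] == r and find(u) != find(v):
                mst.append((u, v, w))
                parent[find(u)] = find(v)
    return mst

print(handshake_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
```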

Keywords: spanning tree, distributed algorithm, handshake algorithm, matching, probabilistic analysis

Procedia PDF Downloads 638
1133 Experimental Characterization of the AA7075 Aluminum Alloy Using Hot Shear Tensile Test

Authors: Trunal Bhujangrao, Catherine Froustey, Fernando Veiga, Philippe Darnis, Franck Girot Mata

Abstract:

Understanding material behavior under shear loading is of great importance for researchers in manufacturing processes such as cutting, machining, milling, turning and friction stir welding, where the material experiences large deformation at high temperature. For such material behavior analysis, hot shear tests provide a useful means to investigate the evolution of the microstructure over a wide range of temperatures and to improve the material behavior model. Shear tests can be performed by direct shear loading (e.g. torsion of thin-walled tubular samples) or by an appropriate specimen design that converts a tensile or compressive load into shear (e.g. simple shear tests). Simple shear tests are straightforward and designed to obtain very large deformations. However, many of these shear tests are concerned only with the elastic response of the material, and it is becoming increasingly important to capture the plastic response. Plastic deformation is significantly more complex and is known to depend more heavily on the strain rate, temperature, deformation, etc. Besides, not enough work has been done on high-temperature shear loading because of the geometrical instability that occurs during plastic deformation. The aim of this study is to design a new shear tensile specimen geometry that converts the tensile load into dominant shear loading under plastic deformation. The design of the specimen geometry is based on FEM. The material used in this paper is the AA7075 alloy, tested quasi-statically at elevated temperature. Finally, the microstructural changes taking place during deformation are examined.

Keywords: AA7075 alloy, dynamic recrystallization, edge effect, large strain, shear tensile test

Procedia PDF Downloads 126
1132 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with a certain probability p or imperfectly repaired with probability q. The system is analyzed using Kolmogorov's forward equations method; the analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. In this paper, an example is provided for demonstration.
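
As a minimal illustration of the Kolmogorov forward-equation route to steady-state availability (for a plain two-state unit, not the paper's replacement/imperfect-repair model), the sketch below solves pi * Q = 0 with the normalization constraint and checks it against the closed form mu/(lambda + mu); all rates are made up.

```python
import numpy as np

def steady_state(Q):
    """Steady-state probabilities of a CTMC from its generator matrix Q
    (rows sum to zero), solving pi @ Q = 0 together with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])        # append the normalisation equation
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state unit (up/down) with failure rate lam and repair rate mu (illustrative rates).
lam, mu = 0.01, 0.5
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])
pi = steady_state(Q)
print(f"availability = {pi[0]:.4f}  (analytical mu/(lam+mu) = {mu / (lam + mu):.4f})")
```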

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 265
1131 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector

Authors: Mayesha Tahsin, A.S. Mollah

Abstract:

Recently, a 1.5x1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample being measured. Natural background gamma radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container such as a Petri dish or a Marinelli beaker geometry. When a laboratory installs a new detector and/or a new shield, it "must" first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I₀e^(-µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
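
As a quick worked example of the attenuation equation above, the sketch below computes the transmitted fraction through 22 mm of lead, taking the quoted 22 mm as the shield wall thickness and assuming a linear attenuation coefficient of roughly 1.2 cm⁻¹, an approximate literature value for lead at the 662 keV Cs-137 line rather than a figure from this work.

```python
import math

def transmitted_intensity(i0, mu, x):
    """Exponential attenuation I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu * x)

# Assumed values: 22 mm lead wall, mu ~ 1.2 1/cm for Pb at 662 keV (approximate).
i0, mu, x = 1000.0, 1.2, 2.2          # counts/s, 1/cm, cm
i = transmitted_intensity(i0, mu, x)
print(f"transmitted fraction = {i / i0:.3f}  ({100 * (1 - i / i0):.1f}% attenuated)")
```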

Keywords: shield, NaI(Tl) detector, gamma radiation, intensity, linear attenuation coefficient

Procedia PDF Downloads 134
1130 Magneto-Hydrodynamic Mixed Convection of Water-Al2O3 Nanofluid in a Wavy Lid-Driven Cavity

Authors: Farshid Fathinia

Abstract:

This paper examines numerically the laminar, steady magneto-hydrodynamic mixed convection flow and heat transfer in a wavy lid-driven cavity filled with a water-Al2O3 nanofluid using the finite difference method (FDM). The left and right sidewalls of the cavity have a wavy geometry and are maintained at a cold and a hot temperature, respectively. The top and bottom walls are considered flat and insulated, while the bottom wall moves from left to right with a uniform lid-driven velocity. A magnetic field is applied vertically downward on the bottom wall of the cavity. Based on the numerical results, the effects of the dominant parameters, such as the Rayleigh number, Hartmann number, solid volume fraction, and wavy wall geometry parameters, are examined. The numerical results are obtained for Hartmann numbers of 0 ≤ Ha ≤ 0.6, Rayleigh numbers of 10³ ≤ Ra ≤ 10⁵, and solid volume fractions of 0 ≤ φ ≤ 0.0003. Comparisons with previously published numerical works on mixed convection in a nanofluid-filled cavity are performed, and good agreement between the results is observed. It is found that the flow circulation and the mean Nusselt number decrease as the solid volume fraction and the Hartmann number increase. Moreover, the convection is enhanced when the amplitude ratio of the wavy surface increases. The results also show that both the flow and thermal fields are significantly affected by the amplitude ratio (i.e., wave form) of the wavy wall.

Keywords: nanofluid, mixed convection, magnetic field, wavy cavity, lid-driven, SPH method

Procedia PDF Downloads 291
1129 Computer Aided Shoulder Prosthesis Design and Manufacturing

Authors: Didem Venus Yildiz, Murat Hocaoglu, Murat Dursun, Taner Akkan

Abstract:

The shoulder joint is a more complex structure than the hip or knee joints. In addition to the overall complexity of the shoulder joint, two different factors contribute to the insufficient outcome of shoulder replacement: the shoulder prosthesis design is far from fully developed, and it is difficult to place these shoulder prostheses due to the shoulder anatomy. The glenohumeral joint is the most complex joint of the human shoulder. There are various treatments for shoulder failure, such as total shoulder arthroplasty and reverse total shoulder arthroplasty. Because its design is reversed with respect to the normal shoulder anatomy, reverse total shoulder arthroplasty has different physiological and biomechanical properties. The post-operative success of this arthroplasty depends on an improved design of the reverse total shoulder prosthesis, and design success can be increased by biomechanical and computational analyses. In this study, data for both shoulders of a human subject with a right-side fracture were collected by a 3D computed tomography (CT) machine in DICOM format. These data were transferred to 3D medical image processing software (Mimics, Materialise, Leuven, Belgium) to reconstruct the bone geometry of the patient's left and right shoulders. The resulting 3D geometry model of the fractured shoulder was used to construct a reverse total shoulder prosthesis with the 3-matic software. Finite element (FE) analysis was conducted to compare the intact shoulder and the prosthetic shoulder in terms of stress distribution and displacements. A physiological reaction force of 800 N, corresponding to body weight, was applied. The resultant values of the FE analysis were compared for both shoulders. The analysis of the performance of the reverse shoulder prosthesis could enhance the knowledge of prosthetic design.

Keywords: reverse shoulder prosthesis, biomechanics, finite element analysis, 3D printing

Procedia PDF Downloads 137
1128 Solutions to Probabilistic Constrained Optimal Control Problems Using Concentration Inequalities

Authors: Tomoaki Hashimoto

Abstract:

Recently, optimal control problems subject to probabilistic constraints have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in optimization problems, several methods have been proposed to deal with them. In most methods, probabilistic constraints are transformed into deterministic constraints that are tractable in optimization problems. This paper examines a method for transforming probabilistic constraints into deterministic constraints for a class of probabilistic constrained optimal control problems.
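
One standard concentration-inequality transformation of this kind (not necessarily the exact one examined in the paper) replaces a chance constraint P(g ≤ 0) ≥ 1 - ε by the deterministic tightening mu_g + sqrt((1-ε)/ε)·sigma_g ≤ 0, which follows from the one-sided Chebyshev (Cantelli) inequality and uses only the mean and standard deviation of g. A minimal sketch:

```python
import math

def cantelli_surrogate(mu_g, sigma_g, eps):
    """Deterministic surrogate of the chance constraint P(g <= 0) >= 1 - eps
    via the one-sided Chebyshev (Cantelli) inequality, valid for any
    distribution with the given mean and standard deviation:
        mu_g + sqrt((1 - eps) / eps) * sigma_g <= 0.
    Returns the left-hand side; the constraint is satisfied when it is <= 0."""
    kappa = math.sqrt((1.0 - eps) / eps)
    return mu_g + kappa * sigma_g

# Example with made-up moments: mean -3.0, std 0.5, 5% allowed violation level.
print(cantelli_surrogate(mu_g=-3.0, sigma_g=0.5, eps=0.05) <= 0)   # True
```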

Keywords: optimal control, stochastic systems, discrete-time systems, probabilistic constraints

Procedia PDF Downloads 255
1127 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

The complex oblique shock phenomenon can be simply approximated as a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, and most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant area-mixing (CAM) ejector and a constant pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1D model to calculate the lengths and the diameters of the ejectors. Afterwards, the design output geometry calculated by the 1D model is compared directly with the corresponding experimental geometry. It was found that there is good agreement between the ejector dimensions obtained by the 1D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and the inlet normal shock assumption is proven to result in a more accurate length. Taking into account previous 1D models, the results suggest assuming the normal shock location at the inlet of the constant-area duct when designing supersonic ejectors.

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions

Procedia PDF Downloads 174
1126 Numerical Analysis for Soil Compaction and Plastic Points Extension in Pile Drivability

Authors: Omid Tavasoli, Mahmoud Ghazavi

Abstract:

A numerical analysis of the drivability of piles of different geometries is presented. In this paper, a three-dimensional finite difference analysis of plastic point extension and soil compaction under the effect of pile driving is carried out. Four pile configurations are investigated: a cylindrical pile; a fully tapered pile; a T-C pile consisting of a top tapered segment and a lower cylindrical segment; and a C-T pile with a top cylindrical part followed by a tapered part. All piles, which are driven to a total penetration depth of 16 m, have the same length, equivalent surface areas and approximately identical material volumes. An idealization of the pile-soil system in pile driving is considered for this approach. A linear elastic material is assumed to model the vertical pile behavior, while the soil obeys an elasto-plastic constitutive law whose failure is controlled by the Mohr-Coulomb failure criterion. The slip that occurs at the pile-soil contact surfaces along the shaft and at the toe during pile driving is simulated with interface elements. All initial and boundary conditions are the same in all analyses. Quiet boundaries are used to prevent wave reflection in the lateral and vertical directions of the soil. The results obtained from the numerical analyses were compared with other available numerical data and laboratory tests, indicating a satisfactory agreement. It is shown that with an increasing angle of taper, the permanent pile toe settlement increases and, therefore, the extension of plastic points increases. These are interesting phenomena in pile driving and are on the safe side for driven piles.

Keywords: pile driving, finite difference method, non-uniform piles, pile geometry, pile set, plastic points, soil compaction

Procedia PDF Downloads 464
1125 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g. animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
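
A hedged scikit-learn sketch of the kind of pipeline described above follows: missing covariates are filled with a random-forest-based iterative imputer (a stand-in for the multiple-imputation schemes named in the abstract), and a gradient boosting classifier is then scored by AUC. The synthetic features and all parameter choices are placeholders, not the ranger patrol data or the paper's tuned models.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))                     # placeholder covariates (density, slope, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan              # knock out ~20% of the observations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Random-forest-based iterative imputation of the missing covariates.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=5, random_state=0)
X_tr_imp = imputer.fit_transform(X_tr)
X_te_imp = imputer.transform(X_te)

# Gradient boosting classifier scored by area under the ROC curve.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr_imp, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te_imp)[:, 1]), 3))
```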

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 266
1124 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the north and northwest, respectively, while good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
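
For reference, the three statistics named above are commonly defined as FB = (mean(Co) - mean(Cp)) / (0.5·(mean(Co) + mean(Cp))), MG = exp(mean(ln Co) - mean(ln Cp)) and NMSE = mean((Co - Cp)²) / (mean(Co)·mean(Cp)); the snippet below evaluates these standard forms on made-up observation/prediction pairs, not the MUST data.

```python
import numpy as np

def dispersion_metrics(c_obs, c_pred):
    """Common dispersion-model evaluation statistics: fractional bias (FB),
    geometric mean bias (MG) and normalized mean square error (NMSE).
    MG requires strictly positive concentrations."""
    c_obs, c_pred = np.asarray(c_obs, float), np.asarray(c_pred, float)
    fb = (c_obs.mean() - c_pred.mean()) / (0.5 * (c_obs.mean() + c_pred.mean()))
    mg = np.exp(np.log(c_obs).mean() - np.log(c_pred).mean())
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    return fb, mg, nmse

# Illustrative concentration pairs only:
obs = [1.2, 0.8, 2.5, 0.4, 1.9]
pred = [1.0, 0.9, 2.1, 0.6, 2.3]
print(dispersion_metrics(obs, pred))
```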

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 114
1123 Fabrication of Uniform Nanofibers Using Gas Dynamic Virtual Nozzle Based Microfluidic Liquid Jet System

Authors: R. Vasireddi, J. Kruse, M. Vakili, M. Trebbin

Abstract:

Here we present a gas dynamic virtual nozzle (GDVN) based microfluidic jetting device for the spinning of nano/microfibers. The device is fabricated by soft lithography techniques and is based on the principle of a GDVN for precise three-dimensional gas focusing of the spinning solution. The nozzle device is used to produce micro/nanofibers of a perfluorinated terpolymer (THV), which were collected on an aluminum substrate for scanning electron microscopy (SEM) analysis. The influences of air pressure, polymer concentration, flow rate and nozzle geometry on the fiber properties were investigated. It was revealed that the surface properties are controlled by the air pressure and polymer concentration, while the diameter and shape of the fibers are influenced mostly by the concentration of the polymer solution and the pressure. Alterations of the nozzle geometry had a negligible effect on the fiber properties; however, the jetting stability was affected. Round and flat fibers with differing surface properties, from craters and grooves to smooth surfaces, could be fabricated by controlling the above-mentioned parameters. Furthermore, the formation of surface roughness was attributed to the fast evaporation rate and the velocity (mis)match between the polymer solution jet and the surrounding air stream. The diameter of the fibers could be tuned from ~250 nm to ~15 µm. Because of the simplicity of the setup, the precise control of the fiber properties, the access to biocompatible nanofiber fabrication and the easy scale-up of parallel channels for high throughput, this method offers significant benefits compared to existing solution-based fiber production methods.

Keywords: gas dynamic virtual nozzle (GDVN) principle, microfluidic device, spinning, uniform nanofibers

Procedia PDF Downloads 133
1122 Fatigue Behavior of Friction Stir Welded EN AW 5754 Aluminum Alloy Using Load Increase Procedure

Authors: A. B. Chehreh, M. Grätzel, M. Klein, J. P. Bergmann, F. Walther

Abstract:

Friction stir welding (FSW) is an advantageous method among thermal joining processes, featuring the welding of various dissimilar and similar material combinations and joining temperatures below the melting point, which prevents irregularities such as pores and hot cracks and yields mechanical joints with strengths close to those of the base material. The FSW tool consists of a shoulder and a probe; the rotating tool plunges into the workpiece under axial pressure, and the material is plasticized by frictional heat, which leads to a decrease in the flow stress. During the welding procedure, the material is continuously displaced by the tool, creating a firmly bonded weld seam behind it. However, the mechanical properties of the weld seam are affected by the design and geometry of the tool. These include, in particular, microstructural and surface properties which can favor crack initiation. The following investigation compares the dynamic properties of FSW weld seams produced with conventional and stationary shoulder geometries on the basis of the load increase test (LIT). Compared to classical Woehler tests, this makes it possible to determine the fatigue strength of the specimens within a short time. The investigations were carried out on a robotized welding setup on 2 mm thick EN AW 5754 aluminum alloy sheets. It was shown that increased tensile and fatigue strength can be achieved by using the stationary shoulder concept. Furthermore, it could be demonstrated that the LIT is a valid method to describe the fatigue behavior of FSW weld seams.

Keywords: aluminum alloy, fatigue performance, fracture, friction stir welding

Procedia PDF Downloads 137
1121 Effects of Watershed Erosion on Stream Channel Formation

Authors: Tiao Chang, Ivan Caballero, Hong Zhou

Abstract:

Streams carry water and sediment naturally by maintaining channel dimensions, pattern, and profile over time. Watershed erosion, as a natural process, has contributed sediment to streams over time. The formation of channel dimensions is complex. This study relates quantifiable and consistent channel dimensions at the bankfull stage to the corresponding watershed erosion estimated by the Revised Universal Soil Loss Equation (RUSLE). Twelve sites, whose drainage areas range from 7 to 100 square miles in the Hocking River Basin of Ohio, were selected for the bankfull geometry determinations, including width, depth, cross-sectional area, bed slope, and drainage area. The twelve sub-watersheds were chosen to obtain a good overall representation of the Hocking River Basin. It is of interest to determine how these bankfull channel dimensions are related to the soil erosion of the corresponding sub-watersheds. Soil erosion is a natural process that has occurred in each watershed over time. The RUSLE was applied to estimate the erosion of the twelve selected sub-watersheds where the bankfull geometry measurements were conducted. These quantified sub-watershed erosion estimates are used to investigate correlations with the bankfull channel dimensions, including discharge, channel width, channel depth, cross-sectional area, and pebble distribution. It is found that drainage area, bankfull discharge and cross-sectional area correlate strongly with watershed erosion. Furthermore, bankfull width and depth are moderately correlated with watershed erosion, while the particle size, D50, of the channel bed sediment is not well correlated with watershed erosion.
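
For reference, the RUSLE estimate used above is the product A = R · K · LS · C · P; the sketch below wraps that product in a small helper with placeholder factor values (not the Hocking River sub-watershed inputs).

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A = R * K * LS * C * P (RUSLE).
    R  : rainfall-runoff erosivity factor
    K  : soil erodibility factor
    LS : slope length and steepness factor
    C  : cover-management factor
    P  : support practice factor
    Units follow the factor system used (e.g. tons/acre/year in US customary units)."""
    return R * K * LS * C * P

# Illustrative factor values only:
print(f"A = {rusle_soil_loss(R=125, K=0.30, LS=1.2, C=0.10, P=1.0):.2f} tons/acre/year")
```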

Keywords: watershed, stream, sediment, channel

Procedia PDF Downloads 266
1120 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf area index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, there is a scale effect in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid region of Inner Mongolia, China, is taken as the study area. Based on the combination of the PROSPECT and SAIL models, a multi-dimensional look-up table (LUT) is generated for the LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model considering both inter-class and intra-class differences is constructed for the scale effect analysis of LAI inversion over an inhomogeneous surface. The results indicate that (1) the LUT method based on classification and parameter sensitivity analysis is useful for the LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m²; (2) the scale effect of LAI becomes more obvious as the image resolution decreases, with a maximum scale bias of more than 45%; (3) the inter-class scale effect is higher than the intra-class one, and it can be corrected efficiently by the scale transfer model established on the basis of Taylor expansion and computational geometry. After correction, the maximum scale bias is reduced to 1.2%.
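
As a minimal sketch of the LUT retrieval step described above, the snippet below picks the LAI whose simulated spectrum is closest in RMSE to a measured reflectance vector; the two-band toy table stands in for a real PROSPECT+SAIL simulation and the numbers are purely illustrative.

```python
import numpy as np

def lut_invert_lai(measured_refl, lut_refl, lut_lai):
    """Return the LAI whose simulated spectrum best matches the measured reflectance.

    measured_refl : (n_bands,) measured canopy reflectance
    lut_refl      : (n_entries, n_bands) simulated reflectances (e.g. PROSPECT+SAIL)
    lut_lai       : (n_entries,) LAI values used to generate the table
    """
    rmse = np.sqrt(np.mean((lut_refl - measured_refl) ** 2, axis=1))
    return lut_lai[np.argmin(rmse)]

# Toy two-band table: red reflectance falls and NIR rises with LAI (made-up curves).
lut_lai = np.linspace(0.5, 6.0, 200)
lut_refl = np.column_stack([0.08 - 0.008 * lut_lai,
                            0.25 + 0.05 * lut_lai])
measured = np.array([0.05, 0.45])
print(f"retrieved LAI = {lut_invert_lai(measured, lut_refl, lut_lai):.2f}")
```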

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 423