Search results for: lattice discrete element method
18970 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the literature on percussive drilling, the existence of an optimum drilling state (Sweet Spot) is reported in some laboratory and field experimental studies. An optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit) beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the occurrence of the Sweet Spot, but a universal explanation or consensus does not exist yet. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high sampling rate (kHz). Data were collected while two boreholes were being excavated; an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire and drilling occurs without percussion, with the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with custom-built equipment dubbed Woody. Woody allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the Sweet Spot is not caused by intrinsic properties of the bit-rock interface.
Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
Procedia PDF Downloads 89
18969 A Generalization of the Secret Sharing Scheme Codes Over Certain Ring
Authors: Ibrahim Özbek, Erdoğan Mehmet Özkan
Abstract:
In this study, we generalize the (k,n) threshold secret sharing scheme of Ozbek and Siap to codes over the ring Fq + αFq. In this way, we show that the method obtained in that article can also be used on codes over rings, and that new advantages can be obtained. The method of securely sharing a key in cryptography, which Shamir first systematized and Massey carried over to codes, thereby becomes usable for all error-correcting codes. The security of this scheme is based on the hardness of the syndrome decoding problem. An open study area is also left for those working on other rings and code classes. All error-correcting codes fall within the working area of this method.
Keywords: secret sharing scheme, linear codes, algebra, finite rings
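As an illustration of the (k, n) threshold idea that Shamir systematized, a minimal Python sketch over a prime field is given below. The prime, threshold and share count are arbitrary illustrative choices, and the construction follows Shamir's polynomial scheme rather than the ring Fq + αFq or the syndrome-based coding formulation of the paper.

```python
# Minimal Shamir (k, n) threshold sketch over a prime field (illustrative parameters).
import random

P = 2**127 - 1          # a Mersenne prime, chosen only for illustration
K, N = 3, 5             # any K of the N shares reconstruct the secret

def make_shares(secret, k=K, n=N, p=P):
    # random polynomial of degree k-1 with constant term = secret
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        shares.append((x, y))
    return shares

def reconstruct(shares, p=P):
    # Lagrange interpolation evaluated at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(123456789)
assert reconstruct(shares[:K]) == 123456789   # any K shares suffice
```

Any K of the N shares recover the secret by interpolation at zero; fewer than K shares reveal nothing about it.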
Procedia PDF Downloads 75
18968 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models
Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi
Abstract:
Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and the long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow-magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user-defined cardiac time steps post-hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) 3-Element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms which are representative of the 4D Flow-MRI-derived in-vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis on the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states. Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models with geometries which were successfully validated against their CT-derived counterparts. 0D-1D coupling efficiently captured branch flow and pressure waveforms, while 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. Results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel
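To make the 0D boundary condition concrete, the following is a minimal sketch of a three-element Windkessel (3EWM) coupled to a prescribed inflow waveform; the parameter values and the half-sine flow pulse are illustrative placeholders rather than the optimized patient-specific values of the study.

```python
# Minimal 3-Element Windkessel (3EWM) sketch with forward-Euler time stepping.
# Parameter values and the inflow waveform are illustrative, not patient-specific.
import math

Zc = 0.05   # characteristic impedance  [mmHg.s/mL]
Rd = 1.0    # distal resistance         [mmHg.s/mL]
C  = 1.2    # compliance                [mL/mmHg]

def inflow(t, period=0.8):
    # crude half-sine systolic pulse as a stand-in for the 4D Flow-MRI waveform
    phase = t % period
    return 400.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

dt, T = 1e-4, 4.0
Pd = 80.0                                 # distal (capacitor) pressure, initial guess
for n in range(int(T / dt)):
    t = n * dt
    Q = inflow(t)
    Pd += dt * (Q - Pd / Rd) / C          # C dPd/dt = Q - Pd/Rd
    P = Zc * Q + Pd                       # terminal pressure seen by the 1D/3D domain
print(f"pressure at t = {T:.1f} s: {P:.1f} mmHg")
```

In the coupled 0D-1D and 0D-3D models, such an update would run at each terminal branch, with Zc, Rd and C iterated until the branch flow waveform matches the 4D Flow-MRI target.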
Procedia PDF Downloads 181
18967 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analyses in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology to obtain the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows the Nonlinear Frequency Response Functions (NLFRFs) to be obtained through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
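As a schematic of the 'updating' idea, the sketch below first computes the linear receptance of a two-DOF spring-mass-damper system and then iterates a first-order equivalent-linearization correction for a cubic spring attached to one DOF; the numerical values and the describing-function form of the update are illustrative assumptions, not the authors' NL SDMM formulation.

```python
# Sketch: linear FRF of a 2-DOF system, then an equivalent-linearization update
# for a cubic spring at DOF 2 (illustrative values, not the authors' NL SDMM).
import numpy as np

m1 = m2 = 1.0
k1 = k2 = 1.0e4
c1 = c2 = 5.0
k3 = 1.0e9                    # cubic stiffness coefficient at DOF 2 (assumed)
F = np.array([1.0, 0.0])      # unit force applied at DOF 1

M = np.diag([m1, m2])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])

freqs = np.linspace(1.0, 40.0, 400)            # Hz
H_lin, H_nl = [], []
for f in freqs:
    w = 2 * np.pi * f
    Z = -w**2 * M + 1j * w * C + K
    X = np.linalg.solve(Z, F)                  # linear response
    H_lin.append(abs(X[1]))
    # fixed-point update: k_eq = 3/4 * k3 * |X2|^2 (first-harmonic balance)
    for _ in range(50):
        dK = np.array([[0.0, 0.0], [0.0, 0.75 * k3 * abs(X[1])**2]])
        X_new = np.linalg.solve(Z + dK, F)
        if abs(abs(X_new[1]) - abs(X[1])) < 1e-12:
            X = X_new
            break
        X = X_new
    H_nl.append(abs(X[1]))

print("peak linear FRF:", max(H_lin), "  peak updated FRF:", max(H_nl))
```

The linear model is assembled only once; only the local modification term dK changes during the iteration, which is the computational advantage the method exploits.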
Procedia PDF Downloads 266
18966 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots
Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra
Abstract:
Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, a greater number of air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and has proposed to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. The setting up of ambient air quality monitoring stations and their operation and maintenance are highly expensive. Therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization by the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) is also explored in the present work for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations, with their corresponding land use patterns, are ranked and identified from the 1 km x 1 km grids.
Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation
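A minimal sketch of the Inverse Distance Weighted (IDW) interpolation used in the network design is given below; the station coordinates, concentrations and the power parameter p = 2 are illustrative placeholders rather than values from the Chennai study.

```python
# Minimal Inverse Distance Weighted (IDW) interpolation sketch.
# Station coordinates/concentrations are illustrative placeholders.
import numpy as np

stations = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])  # km
pm10     = np.array([85.0, 60.0, 110.0, 72.0])                         # ug/m3

def idw(point, xy, values, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy - point, axis=1)
    if d.min() < eps:                       # query point coincides with a station
        return float(values[d.argmin()])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# interpolate over a 1 km x 1 km grid, as in the network-design step
grid = [(x, y) for x in range(0, 7) for y in range(0, 7)]
field = {g: idw(np.array(g, dtype=float), stations, pm10) for g in grid}
print(field[(3, 3)])
```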
Procedia PDF Downloads 189
18965 Breaking Stress Criterion that Changes Everything We Know About Materials Failure
Authors: Ali Nour El Hajj
Abstract:
Background: The perennial deficiencies of the failure models in the materials field have profoundly and significantly impacted all associated technical fields that depend on accurate failure predictions. Many preeminent and well-known scientists from an earlier era of groundbreaking discoveries attempted to solve the issue of material failure. However, a thorough understanding of material failure has been frustratingly elusive. Objective: The heart of this study is the presentation of a methodology that identifies a newly derived one-parameter criterion as the only general failure theory for noncompressible, homogeneous, and isotropic materials subjected to multiaxial states of stress and various boundary conditions, providing the solution to this longstanding problem. This theory is the counterpart and companion piece to the theory of elasticity and is in a formalism that is suitable for broad application. Methods: Utilizing advanced finite-element analysis, the maximum internal breaking stress corresponding to the maximum applied external force is identified as a unified and universal material failure criterion for determining the structural capacity of any system, regardless of its geometry or architecture. Results: A comparison of the proposed criterion and methodology against design codes reveals that current provisions may underestimate the structural capacity by 2.17 times or overestimate the capacity by 2.096 times. It also shows that existing standards may underestimate the structural capacity by 1.4 times or overestimate the capacity by 2.49 times. Conclusion: The proposed failure criterion and methodology will pave the way for a new era in designing unconventional structural systems composed of unconventional materials.
Keywords: failure criteria, strength theory, failure mechanics, materials mechanics, rock mechanics, concrete strength, finite-element analysis, mechanical engineering, aeronautical engineering, civil engineering
Procedia PDF Downloads 78
18964 An Approach of Node Model TCnNet: Trellis Coded Nanonetworks on Graphene Composite Substrate
Authors: Diogo Ferreira Lima Filho, José Roberto Amazonas
Abstract:
Nanotechnology opens the door to new paradigms that introduce a variety of novel tools enabling a plethora of potential applications in the biomedical, industrial, environmental, and military fields. This work proposes an integrated node model by applying the same concepts of TCNet to networks of nanodevices, where the nodes are cooperatively interconnected with a low-complexity Mealy Machine (MM) topology, integrating in the same electronic system the modules necessary for independent operation in wireless sensor networks (WSNs): rectennas (RF-to-DC power converters), code generators based on a Finite State Machine (FSM) and trellis decoder, and on-chip transmit/receive, with autonomy in terms of energy sources by applying the energy harvesting technique. This approach considers the use of a Graphene Composite Substrate (GCS) for the integrated electronic circuits, which meets the following characteristics: mechanical flexibility, miniaturization, and optical transparency, besides being ecological. In addition, graphene consists of a layer of carbon atoms in the configuration of a honeycomb crystal lattice, which has attracted the attention of the scientific community due to its unique electrical characteristics.
Keywords: composite substrate, energy harvesting, finite state machine, graphene, nanotechnology, rectennas, wireless sensor networks
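To illustrate what a low-complexity Mealy machine code generator can look like, the sketch below implements a textbook rate-1/2 convolutional encoder as a four-state FSM; the generator polynomials (7, 5 in octal) are a standard classroom choice and are not taken from the TCnNet design.

```python
# Rate-1/2 convolutional encoder (generators 7, 5 octal) written as a Mealy FSM.
# This is a textbook example, not the TCnNet code generator itself.

def encode(bits):
    s1, s2 = 0, 0                 # two memory elements -> 4 FSM states
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)   # g1 = 1 + D + D^2  (octal 7)
        out.append(u ^ s2)        # g2 = 1 + D^2      (octal 5)
        s1, s2 = u, s1            # Mealy state transition
    return out

print(encode([1, 0, 1, 1]))       # -> [1, 1, 1, 0, 0, 0, 0, 1]
```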
Procedia PDF Downloads 105
18963 A Teaching Method for Improving Sentence Fluency in Writing
Authors: Manssour Habbash, Srinivasa Rao Idapalapati
Abstract:
Although writing is a multifaceted task, teaching writing is a demanding task basically for two reasons: grammar and syntax. This article provides a method of teaching writing that was found to be effective in improving students’ academic writing composition skills. The article explains the concepts of ‘guided-discovery’ and ‘guided-construction’ upon which a method of teaching writing is grounded and developed. Providing a brief commentary on what the core could primarily mean, the article presents an exposition of understanding and identifying the core and building upon the core, demonstrating the way a teacher can make use of the concepts in teaching to improve the writing skills of their students. The method is an adaptation of the grammar translation method that has been improvised to suit a student-centered classroom environment. An intervention of teaching writing through this method was tried out with positive outcomes in a formal classroom research setup. In view of the content’s quality, which relates more to classroom practices, and in consideration of its usefulness to practicing teachers, the process and the findings are presented in narrative form, along with the results in tabular form.
Keywords: core of a text, guided construction, guided discovery, theme of a text
Procedia PDF Downloads 381
18962 Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation
Authors: Abdolvahab Ehsani Rad, Mohd Shafry Mohd Rahim, Alireza Norouzi
Abstract:
Medical image analysis is one of the most important applications of computer image processing. There are several processes for analysing medical images, of which segmentation is one of the most challenging and important steps. In this paper, a segmentation method is proposed in order to segment dental radiograph images. A thresholding method has been applied to simplify the images, and the morphological binary image opening technique has been performed to eliminate unnecessary regions in the images. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. The segmentation process has been completed by applying the level set method to each extracted image. The experimental results, with 90% accuracy, demonstrate that the proposed method achieves high accuracy and promising results.
Keywords: integral projection, level set method, morphological operation, segmentation
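A minimal sketch of the thresholding and integral-projection steps is shown below using NumPy; the global threshold and the valley-detection rule are simplified assumptions, and the level set refinement applied to each extracted tooth is omitted.

```python
# Sketch: threshold a radiograph and locate tooth boundaries with integral projections.
# The threshold and valley rule are simplified assumptions; the level set step is omitted.
import numpy as np

def segment_columns(radiograph, threshold=128):
    binary = (radiograph > threshold).astype(np.uint8)     # simple global threshold
    v_proj = binary.sum(axis=0)                            # vertical integral projection
    # valleys (low column sums) approximate the gaps between adjacent teeth
    valleys = [x for x in range(1, len(v_proj) - 1)
               if v_proj[x] < 0.2 * v_proj.max()
               and v_proj[x] <= v_proj[x - 1] and v_proj[x] <= v_proj[x + 1]]
    return binary, v_proj, valleys

img = (np.random.rand(256, 512) * 255).astype(np.uint8)    # stand-in for a radiograph
binary, v_proj, cuts = segment_columns(img)
print("candidate inter-tooth cuts at columns:", cuts[:10])
```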
Procedia PDF Downloads 317
18961 A Reasoning Method of Cyber-Attack Attribution Based on Threat Intelligence
Authors: Li Qiang, Yang Ze-Ming, Liu Bao-Xu, Jiang Zheng-Wei
Abstract:
With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The difficult points of cyber-attack attribution are focused on the problems of handling huge volumes of data and of missing key data. In view of this situation, this paper presents a reasoning method of cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and a Bayesian network to build the attack chain and evidence chain of a cyber-attack on a threat intelligence platform through data calculation, analysis and reasoning. Then, we used a number of cyber-attack events which we have observed and analyzed to test the reasoning method and demo system; the test results indicate that the reasoning method can provide certain help in cyber-attack attribution.
Keywords: reasoning, Bayesian networks, cyber-attack attribution, Kill Chain, threat intelligence
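To make the reasoning step concrete, the following is a minimal naive-Bayes style sketch that scores candidate attacker groups against observed kill-chain evidence; the group names, evidence types and probabilities are entirely fictitious placeholders, not data from the threat intelligence platform or the full Bayesian-network structure of the paper.

```python
# Naive-Bayes style scoring of candidate attackers from kill-chain evidence.
# All names and probabilities are fictitious placeholders.

priors = {"GroupA": 0.4, "GroupB": 0.35, "GroupC": 0.25}

# P(evidence | attacker) for observed kill-chain indicators
likelihood = {
    "GroupA": {"spearphish_delivery": 0.7, "tool_X_exploit": 0.2, "c2_domain_style": 0.6},
    "GroupB": {"spearphish_delivery": 0.3, "tool_X_exploit": 0.8, "c2_domain_style": 0.4},
    "GroupC": {"spearphish_delivery": 0.5, "tool_X_exploit": 0.5, "c2_domain_style": 0.1},
}

observed = ["spearphish_delivery", "tool_X_exploit", "c2_domain_style"]

def posterior(priors, likelihood, observed):
    scores = {}
    for group, p in priors.items():
        for e in observed:
            p *= likelihood[group][e]       # multiply in each piece of evidence
        scores[group] = p
    z = sum(scores.values())                # normalize to posterior probabilities
    return {g: s / z for g, s in scores.items()}

print(posterior(priors, likelihood, observed))  # ranked attribution hypotheses
```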
Procedia PDF Downloads 450
18960 Design and Modelling of Ge/GaAs Hetero-structure Bipolar Transistor
Authors: Samson Mil'shtein, Dhawal N. Asthana
Abstract:
The presented heterostructure n-p-n bipolar transistor is composed of Ge/GaAs heterojunctions consisting of a 0.15 µm thick emitter and 0.65 µm thick collector junctions. The high diffusivity of carriers in the GaAs base was the major motivation for the current design. We avoided grading of the base, which is common in heterojunction bipolar transistors, in order to keep the electron diffusivity as high as possible. The electrons are injected into the 0.25 µm thick p-type GaAs base with not very high doping (10¹⁷ cm⁻³). The designed HBT enables a cut-off frequency on the order of 150 GHz. The Ge/GaAs heterojunctions presented in our paper have proved to work better than comparable HBTs having GaAs bases and emitter/collector junctions made, for example, of AlGaAs/GaAs or other III-V compound semiconductors. The difference in lattice constants between Ge and GaAs is less than 2%. Therefore, there is no need for transition layers between the Ge emitter and the GaAs base. The significant difference in the energy gap of these two materials presents new scope for improving the performance of the emitter. With the complete structure modelled and simulated using TCAD SILVACO, the collector/emitter offset voltage of the device has been limited to a reasonable value of 63 millivolts by dint of the low energy band gap associated with the Ge emitter. The efficiency of the emitter in our HBT is 86%. The use of germanium in the emitter and collector regions presents new opportunities for the integration of this vertical device structure onto a silicon substrate.
Keywords: germanium, gallium arsenide, heterojunction bipolar transistor, high cut-off frequency
Procedia PDF Downloads 420
18959 Physical and Mechanical Behavior of Compressed Earth Blocks Stabilized with Ca(OH)2 on Sub-Humid Warm Weather
Authors: D. Castillo T., Luis F. Jimenez
Abstract:
The compressed earth blocks (CEBs) constitute an alternative as a constructive element for building homes in regions with high levels of poverty and marginalization. Such is the case in Southeastern Mexico, where the population, predominantly indigenous, builds their houses with weak materials like wood and palm, which are vulnerable to the extreme weather in the area, because they do not have the financial resources to acquire concrete blocks. CEBs can provide several advantages compared to traditional vibro-compressed concrete blocks, such as the availability of materials, low manufacturing cost and reduced CO2 emissions to the atmosphere, since they are not subjected to a burning process. However, to improve their mechanical properties and resistance to the adverse humidity and temperature conditions of sub-humid climate zones, the use of a chemical stabilizer is required; in this case we chose Ca(OH)2. The Eades-Grim stabilization method was employed, according to ASTM C977-03. This method measures the optimum amount of lime required to stabilize the soil, increasing the pH to 12.4 or higher. The minimum amount of lime required in this experiment was 1% and the maximum was 10%. The material employed was an unconsolidated clay of low to medium plasticity (CL type according to the Unified Soil Classification System). Based on these results, the CEB manufacturing process was determined. The obtained blocks were 10x15x30 cm, made using a mixture of soil, water and lime in different proportions. Later these blocks were put to dry outdoors and subjected to several physical and mechanical tests, such as compressive strength, absorption and drying shrinkage. The results were compared with the limits established by the Mexican Standard NMX-C-404-ONNCCE-2005 for the construction of housing walls. In this manner an alternative and sustainable material was obtained for the construction of rural households in the region, with better security conditions, comfort and cost.
Keywords: calcium hydroxide, chemical stabilization, compressed earth blocks, sub-humid warm weather
Procedia PDF Downloads 401
18958 Relationship between Trauma and Acute Scrotum: Testis Torsion and Epididymal Appendix Torsion
Authors: Saimir Heta, Kastriot Haxhirexha, Virtut Velmishi, Nevila Alliu, Ilma Robo
Abstract:
Background: Testicular rotation can occur at any age. The possibility of saving the testicle depends on the fastest possible surgical intervention, which is indicated by the presence of acute pain even at rest. The time element is most important in establishing the diagnosis and proceeding further with surgical intervention. Testicular damage is a consequence that mainly depends on the moment of onset of symptoms; once the symptoms are diagnosed, the earliest action to be performed is surgical intervention. Sometimes medical tests are needed to confirm a diagnosis or to help identify another cause for the symptoms; for example, a urine test, which is used to check for infection, associated with a scrotal ultrasound test. Control of blood flow in the longitudinal supply vessels of the testicles is indicated. The sign that indicates testicular rotation is a reduction in blood flow. This is the element that is distinguished by ultrasound examination. Surgery may be needed to determine whether the patient’s symptoms are caused by the rotation of the testis or by another condition. Discussion: As an emergency surgical intervention, the outcome of testis torsion depends very much on the duration of the torsion, as the survival of the testicle depends on the fastest possible surgical intervention. From previous clinical experience, it is noted that in every case of a pediatric patient diagnosed with testicular rotation, there is always a link with the personal history, in which the patient refers to a previous episode of testicular trauma. The literature supports this fact very logically. Conclusions: Salvage without testicular atrophy depends closely on establishing the diagnosis of testicular rotation as soon as possible. Following the logic above, it can be said that the diagnosis of rotation should be made as soon as possible, to avoid consequences that will not be favorable for the patient.
Keywords: acute scrotum, testis torsion, newborns, clinical presentation
Procedia PDF Downloads 150
18957 Phenomenological Ductile Fracture Criteria Applied to the Cutting Process
Authors: František Šebek, Petr Kubík, Jindřich Petruška, Jiří Hůlka
Abstract:
The present study is aimed at the cutting process of circular cross-section rods, where fracture is used to separate one rod into two pieces. By incorporating a phenomenological ductile fracture model into the explicit formulation of the finite element method, the process can be analyzed without the necessity of carrying out too many real experiments, which could be expensive in the case of repetitive testing under different conditions. In the present paper, the steel AISI 1045 was examined, and tensile tests of smooth and notched cylindrical bars were conducted together with biaxial testing of notched tube specimens to calibrate the material constants of selected phenomenological ductile fracture models. These were implemented in Abaqus/Explicit through the user subroutine VUMAT and used for the cutting process simulation. As the calibration process is based on variables which cannot be obtained directly from experiments, numerical simulations of the fracture tests are an inevitable part of the calibration. Finally, experiments regarding the cutting process were carried out, and the predictive capability of the selected fracture models is discussed. Concluding remarks then summarize the experience gained with both the calibration and the application of particular ductile fracture criteria.
Keywords: ductile fracture, phenomenological criteria, cutting process, explicit formulation, AISI 1045 steel
Procedia PDF Downloads 458
18956 The Performance Evaluation of the Modular Design of Hybrid Wall with Surface Heating and Cooling System
Authors: Selcen Nur Erikci Çelik, Burcu İbaş Parlakyildiz, Gülay Zorer Gedik
Abstract:
Reducing the use of mechanical heating and cooling systems in buildings, which account for approximately 30-40% of total energy consumption in the world, has a major impact in terms of energy conservation. In the formation of buildings with sustainable and low energy utilization, structural elements and mechanical systems should be evaluated with a holistic approach. With regard to reducing the building energy consumption ratio, wall elements, which are vertical building elements occupying a broad area (m2), are proposed as an arrangement with a different system. In the study, the design of surface heating and cooling energy with a hybrid type of modular wall system and its integration with building elements will be evaluated. The design of the wall element will be formed through the identification of certain standards in terms of architectural design and size; elaboration according to the area where the wall elements are placed (interior walls, exterior walls); the solution of the joints; and obtaining a surface that is compatible with the building both conceptually and structurally, with emphasis on the later stages. The durability of the product against various forces, its stability and its resistance are substantial considerations used in the establishment of the ready-wall element section and the planning of the structural design. All created ready-wall alternatives will pay attention to certain parameters, such as adapting performance and cost to an optimum level and a size that can be easily processed and obtained. Restrictions imposed by building laws, such as the sizes in zoning regulations, building function, structural system and wheelbase, should be evaluated. The building is intended to function according to a certain standardization system, and the construction of wall elements will be used accordingly. Within the scope of the performance criteria determined for the wall elements, the utilization (operation, maintenance) and renovation phases and alternative material options will be evaluated, together with the interim materials located in the contents. The design, implementation and technical combination of modular wall elements in the use phase, and the installation details, together with the integration of energy saving, heat saving and beneficial effects on environmental aspects, will be discussed in detail. As a result, the ready-wall product with surface heating and cooling modules will be created, defined as a hybrid wall, and compared with the conventional system in terms of thermal comfort. After preliminary architectural evaluations, certain decisions for all architectural design processes (pre- and post-design), such as implementation and performance in use, maintenance and renewal, will be evaluated in the results.
Keywords: modular ready-wall element, hybrid, architectural design, thermal comfort, energy saving
Procedia PDF Downloads 254
18955 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar in detection, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target. The target position detected by each sonar is based on that sonar’s own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar’s data as the observation value, while the LS method performs least squares processing of each sonar’s data to obtain the observation value. For the underwater acoustic environment, a MATLAB simulation is carried out, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but an increase in random noise will slow down the convergence rate. The LS method is an improvement of the RTQC method, which is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
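As a simplified numeric sketch of offline bias estimation, the code below recovers a constant range and bearing bias for one sonar from repeated detections of targets whose fused reference positions are known; the geometry, bias values and noise levels are assumptions, and the two-dimensional stereo projection formulation of the paper is not reproduced.

```python
# Sketch: offline estimation of a sonar's constant range/bearing biases from
# repeated detections of targets with known reference positions.
# Geometry, bias values and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_bias_r, true_bias_b = 25.0, np.deg2rad(1.5)      # metres, radians

# reference target positions (from the fusion centre), in polar form
r_true = rng.uniform(500.0, 3000.0, size=200)
b_true = rng.uniform(-np.pi, np.pi, size=200)

# biased, noisy sonar measurements
r_meas = r_true + true_bias_r + rng.normal(0.0, 5.0, r_true.shape)
b_meas = b_true + true_bias_b + rng.normal(0.0, np.deg2rad(0.2), b_true.shape)

# RTQC-style estimate: average the residuals
bias_r_rtqc = np.mean(r_meas - r_true)
bias_b_rtqc = np.mean(b_meas - b_true)

# LS estimate: solve A x = residuals with A a column of ones
A = np.ones((r_true.size, 1))
bias_r_ls = np.linalg.lstsq(A, r_meas - r_true, rcond=None)[0][0]
bias_b_ls = np.linalg.lstsq(A, b_meas - b_true, rcond=None)[0][0]

print(f"range bias:   RTQC {bias_r_rtqc:.1f} m,  LS {bias_r_ls:.1f} m")
print(f"bearing bias: RTQC {np.rad2deg(bias_b_rtqc):.2f} deg,  LS {np.rad2deg(bias_b_ls):.2f} deg")
```

For this constant-bias model the RTQC-style average and the LS solution coincide; the differences reported above arise from how the observation values are formed from each sonar's data under the stereo projection.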
Procedia PDF Downloads 169
18954 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing
Authors: H. Samadiyeh, R. Khajavi
Abstract:
A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is dealt with in the modal space. The method employs the eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since the MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, while considerable simplicity in the parallel algorithm is also achieved. There is no requirement for any domain-decomposition procedure or inter-core data exchange. More importantly, it is possible to skip the processing of less-significant modes, which enables one to adjust the procedure to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated by some numerical examples.
Keywords: finite difference method, graphics processing unit (GPU), message passing interface (MPI), modal, wave propagation
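The core idea, decoupling the simulation into independent modal runs that need no domain decomposition or inter-core exchange, can be sketched for the 1D wave equation as follows; the grid, wave speed and the number of retained modes are illustrative choices, and the actual MFDM time-space stencils of the paper are not reproduced.

```python
# Sketch: modal time stepping of the 1D wave equation u_tt = c^2 u_xx.
# Each mode advances independently, so modes can be assigned to separate processes.
import numpy as np

N, c = 200, 1.0
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)

# second-difference operator with Dirichlet boundaries
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
lam, V = np.linalg.eigh(A)                  # A = V diag(lam) V^T, lam < 0

keep = np.argsort(np.abs(lam))[:100]        # skip the less-significant (high-frequency) modes
lam_k, V_k = lam[keep], V[:, keep]

u0 = np.exp(-200.0 * (x - 0.5)**2)          # initial displacement, zero initial velocity
q_prev = V_k.T @ u0                         # modal coordinates of the retained modes
q_curr = q_prev.copy()

dt, n_steps = 0.5 * dx / c, 400
for _ in range(n_steps):
    # central-difference update, mode by mode (embarrassingly parallel)
    q_next = 2.0 * q_curr - q_prev + dt**2 * c**2 * lam_k * q_curr
    q_prev, q_curr = q_curr, q_next

u = V_k @ q_curr                            # reconstruct the field from retained modes
print("max |u| after stepping:", float(np.abs(u).max()))
```

Each retained mode advances with its own scalar recurrence, so the loop body can be distributed across processes (e.g., with MPI) with no communication until the final reconstruction.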
Procedia PDF Downloads 296
18953 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early years of the last century, the yield-line method was proposed in an attempt to solve such problems. Problems of simple geometry could easily be solved by using traditional hand analyses, which include plasticity theories. Nowadays, advanced finite element (FE) analyses have mainly found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the application of an elastic or a plastic constitutive model would completely change the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited since they do not consider any aspect of the material behaviour beyond its yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses for modeling plastic behaviour, on the other hand, give very reliable results. Per contra, this type of analysis is computationally quite expensive, i.e. not well suited for solving daily engineering problems. In the past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper aims at proposing a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the rest of the non-yielded elements. The proposed technique obeys Nielsen’s yield criterion. The outcome of this analysis provides a simple, yet accurate, and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and ultimate load bearing capacity. The collapse triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact value of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
Procedia PDF Downloads 178
18952 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste
Authors: Timilehin Martins Oyinloye, Won Byong Yoon
Abstract:
Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is fast developing in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials becomes the foundation for extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometries. In many situations, the trial-and-error method is used to determine the optimum printing conditions, which results in time and resource wastage. In this report, three moisture levels of surimi paste were investigated to find an optimum 3DP material and printing conditions by probing its rheology, flow characteristics in the nozzle, and post-deposition process using a finite element method (FEM) model. Rheological tests revealed that surimi pastes with 82% moisture are suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using the additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top part of the model, which eventually resulted in the sample collapsing. As the nozzle diameter increased, the accuracy of the model improved until the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter was a key parameter affecting the geometric accuracy of 3DP of surimi paste.
Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste
Procedia PDF Downloads 68
18951 Radial Distortion Correction Based on the Concept of Verifying the Planarity of a Specimen
Authors: Shih-Heng Tung, Ming-Hsiang Shih, Wen-Pei Sung
Abstract:
Because of the rapid development of digital cameras and computers, the digital image correlation method has drawn a lot of attention recently and has been applied to a variety of fields. However, image distortion is inevitable when an image is captured through a lens. This image distortion problem can result in a non-negligible error when using the digital image correlation method. There are already many different ways to correct image distortion, and most of them require specific image patterns or precise control points. A new distortion correction method is proposed in this study. The proposed method is based on the fact that a flat surface should remain flat when it is measured using a three-dimensional (3D) digital image measurement technique. Lens distortion can be divided into radial distortion, decentering distortion and thin prism distortion. Because radial distortion has a more noticeable influence than the other types of distortion, this method deals only with radial distortion. The simplified 3D digital image measurement technique is adopted to measure the surface coordinates of a flat specimen. Then the gradient method is applied to find the best correction parameters. A few experiments are carried out in this study to verify the correctness of this method. The results show that this method can achieve good accuracy and is suitable for both large and small distortion conditions. The most important advantage is that it requires neither marks with a specific pattern nor precise control points.
Keywords: 3D DIC, radial distortion, distortion correction, planarity
Procedia PDF Downloads 551
18950 A Method for Multimedia User Interface Design for Mobile Learning
Authors: Shimaa Nagro, Russell Campion
Abstract:
Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information and then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers the specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated through three different case studies to prove that it is suitable for application to different areas/applications. These are: an application to teach major computer networking concepts; an application to deliver a history-based topic (after these case studies have been completed, the method will be revised to remove deficiencies and then used to develop a third case study); and an application to teach mathematical principles. At this point, the method will again be revised into its final format. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection and three case studies for validating the MDMLM method. The researcher has successfully produced the method at this point, and it is now under validation and testing procedures. From this point forward in the report, the researcher will refer to the method using the abbreviation MDMLM, which stands for Multimedia Design Mobile Learning Method.
Keywords: human-computer interaction, interface design, mobile learning, education
Procedia PDF Downloads 246
18949 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings
Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski
Abstract:
The methodology used to measure the reduction of transmitted impact sound by floor coverings situated on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room separated by a standard floor from a second measuring room is required. The need for a special laboratory results in the high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement with floor coverings. This method does not require standard rooms or a standard floor. This paper describes the measurement procedure of the proposed engineering method. Furthermore, verification tests were performed. Validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model and empirical measurements. The obtained results were related to the corresponding ones obtained from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.
Keywords: building acoustic, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound
Procedia PDF Downloads 324
18948 Using the Cluster Computing to Improve the Computational Speed of the Modular Exponentiation in RSA Cryptography System
Authors: Te-Jen Chang, Ping-Sheng Huang, Shan-Ten Cheng, Chih-Lin Lin, I-Hui Pan, Tsung- Hsien Lin
Abstract:
The RSA system is a great contribution to encryption and decryption. It is based on modular exponentiation. We call this system “calculation with large numbers”. Operating on such large numbers is a very heavy burden for the CPU. To increase the computational speed, in addition to improving the algorithms themselves, such as the binary method, the sliding window method, the addition chain method, and so on, a cluster computer can be used to advance the computational speed. The cluster system is composed of the computers in our laboratory on which MPICH2 is installed. The parallel procedures of the modular exponentiation can be processed by combining the sliding window method with the addition chain method. This significantly reduces the computation time of modular exponentiation for operands of more than 512 bits, and even more than 1024 bits.
Keywords: cluster system, modular exponentiation, sliding window, addition chain
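A minimal single-process sketch of sliding-window modular exponentiation is given below (window width 4 is an arbitrary illustrative choice); the addition-chain combination and the MPICH2-based distribution of the work across cluster nodes are not reproduced here.

```python
# Sliding-window modular exponentiation (window width 4 chosen for illustration).
def mod_exp_sliding_window(g, e, m, w=4):
    if e == 0:
        return 1 % m
    # precompute odd powers g^1, g^3, ..., g^(2^w - 1) mod m
    g2 = (g * g) % m
    odd_pows = {1: g % m}
    for k in range(3, 1 << w, 2):
        odd_pows[k] = (odd_pows[k - 2] * g2) % m

    bits = bin(e)[2:]
    result, i = 1, 0
    while i < len(bits):
        if bits[i] == '0':
            result = (result * result) % m          # single squaring
            i += 1
        else:
            j = min(i + w, len(bits))               # candidate window end (exclusive)
            while bits[j - 1] == '0':               # window must end in a 1 bit
                j -= 1
            for _ in range(j - i):
                result = (result * result) % m      # square once per window bit
            result = (result * odd_pows[int(bits[i:j], 2)]) % m
            i = j
    return result

assert mod_exp_sliding_window(7, 2**521 - 1, 10**9 + 7) == pow(7, 2**521 - 1, 10**9 + 7)
```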
Procedia PDF Downloads 522
18947 An Approach for Determining and Reducing Vehicle Turnaround Time for Outbound Logistics by Using Critical Path Method
Authors: Prajakta M. Wazat, D. N. Raut
Abstract:
The study considers a fast-moving consumer goods (FMCG) beverage company, wherein a portion of the supply chain that deals with outbound logistics is taken for improvement in order to reduce its logistics cost by using the critical path method (CPM). Logistics is a major portion of the supply chain for which customers are not willing to pay, as it adds cost to the product without adding value. In this study, it is necessary to ensure that products are delivered to clients at the right time while preserving high-quality standards from the beginning to the end of the supply chain. CPM is a logical sequencing method wherein the most efficient route is achieved by arranging the series of events. CPM enables critical factors to be identified in order to minimize delays and interruptions by providing a feasible solution.
Keywords: FMCG, supply chain, outbound logistics, vehicle turnaround time, critical path method, cost reduction
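A minimal sketch of the CPM forward and backward passes over a small activity network is given below; the activities and durations are invented placeholders and do not correspond to the company's outbound-logistics data.

```python
# Critical Path Method sketch: forward/backward pass over a small activity network.
# Activities and durations are invented placeholders.

durations = {"load": 2, "document": 1, "transport": 6, "unload": 1, "deliver": 2}
predecessors = {
    "load": [],
    "document": [],
    "transport": ["load", "document"],
    "unload": ["transport"],
    "deliver": ["unload"],
}

# forward pass: earliest start / earliest finish
es, ef = {}, {}
for a in durations:                              # dict order is already topological here
    es[a] = max((ef[p] for p in predecessors[a]), default=0)
    ef[a] = es[a] + durations[a]
project_end = max(ef.values())

# backward pass: latest finish / latest start
successors = {a: [b for b in durations if a in predecessors[b]] for a in durations}
lf, ls = {}, {}
for a in reversed(list(durations)):
    lf[a] = min((ls[s] for s in successors[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

critical = [a for a in durations if es[a] == ls[a]]   # zero-float activities
print("project duration:", project_end, "days; critical path:", critical)
```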
Procedia PDF Downloads 164
18946 Degree of Bending in Axially Loaded Tubular KT-Joints of Offshore Structures: Parametric Study and Formulation
Authors: Hamid Ahmadi, Shadi Asoodeh
Abstract:
The fatigue life of tubular joints commonly found in the offshore industry is not only dependent on the value of the hot-spot stress (HSS), but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The determination of DoB values in a tubular joint is essential for improving the accuracy of fatigue life estimation using the stress-life (S–N) method and particularly for predicting the fatigue crack growth based on the fracture mechanics (FM) approach. In the present paper, data extracted from finite element (FE) analyses of tubular KT-joints, verified against experimental data and parametric equations, were used to investigate the effects of geometrical parameters on the DoB values at the crown 0˚, saddle, and crown 180˚ positions along the weld toe of the central brace in tubular KT-joints subjected to axial loading. The parametric study was followed by a set of nonlinear regression analyses to derive DoB parametric formulas for the fatigue analysis of KT-joints under axial loads. The tubular KT-joint is a quite common joint type found in steel offshore structures. However, despite the crucial role of the DoB in evaluating the fatigue performance of tubular joints, this paper is the first attempt to study and formulate the DoB values in KT-joints.
Keywords: tubular KT-joint, fatigue, degree of bending (DoB), axial loading, parametric formula
Procedia PDF Downloads 361
18945 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists’ visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantages that readings are quantifiable and can be obtained more rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to defects in measurements made with these devices. Also, there may be inconsistencies between the results acquired by devices with different measurement principles. So, it is necessary to search for new methods for the dental shade matching process. A computer-aided system device, the digital camera, has developed rapidly up to the present day. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is a much cheaper process than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a new method was recommended to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in a concatenated manner. Recently used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended with color information by concatenating the color descriptor with the shape descriptor, the resulting descriptor will be effective in visual object recognition and classification tasks. Since the color descriptor is to be used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. So, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor named Color-SIFT will be used in this study. Finally, the image feature vectors obtained from the quantization algorithm are fed to classifiers such as K-Nearest Neighbor (KNN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method will be compared with those of other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
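As a minimal sketch of the final classification stage (feature vector in, shade label out), the code below trains a support vector machine on local color-histogram features with scikit-learn; the synthetic images, grid size and bin counts are placeholders, and the Color-SIFT descriptor and quantization step are not implemented here.

```python
# Sketch: local color-histogram features + SVM shade classifier (scikit-learn).
# Synthetic data, grid size and bin counts are placeholders; Color-SIFT is omitted.
import numpy as np
from sklearn.svm import SVC

def local_color_histograms(img, grid=4, bins=8):
    """Split an RGB image into grid x grid cells and concatenate per-channel histograms."""
    h, w, _ = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            for ch in range(3):
                hist, _ = np.histogram(cell[..., ch], bins=bins, range=(0, 256))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(1)
# fake "tooth images": two shade classes with slightly different mean color
X, y = [], []
for label, mean in [(0, 180), (1, 200)]:
    for _ in range(40):
        img = np.clip(rng.normal(mean, 20, size=(64, 64, 3)), 0, 255)
        X.append(local_color_histograms(img))
        y.append(label)

clf = SVC(kernel="rbf", C=10.0).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```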
Procedia PDF Downloads 444
18944 CFD Modeling of Boiling in a Microchannel Based On Phase-Field Method
Authors: Rahim Jafari, Tuba Okutucu-Özyurt
Abstract:
The hydrodynamics and heat transfer characteristics of a vaporized elongated bubble in a rectangular microchannel have been simulated based on the Cahn-Hilliard phase-field method. In the simulations, the initially nucleated bubble starts growing as it comes into contact with superheated water. The growing shape of the bubble is compared with the available experimental data in the literature.
Keywords: microchannel, boiling, Cahn-Hilliard method, simulation
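To make the phase-field ingredient concrete, a one-dimensional Cahn-Hilliard relaxation sketch with explicit time stepping and periodic boundaries is given below; the mobility, interface parameter and grid are generic illustrative values, and the energy equation and phase-change source terms of the boiling problem are not included.

```python
# 1D Cahn-Hilliard relaxation sketch (explicit Euler, periodic boundaries).
# Generic parameters; the boiling physics of the study is not included.
import numpy as np

N, dx, dt = 128, 1.0, 0.05
M, kappa = 1.0, 1.0
steps = 2000

rng = np.random.default_rng(0)
c = 0.0 + 0.1 * rng.standard_normal(N)        # small perturbation around c = 0

def laplacian(f):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

for _ in range(steps):
    mu = c**3 - c - kappa * laplacian(c)      # chemical potential
    c += dt * M * laplacian(mu)               # dc/dt = M * lap(mu)

print("phase separation range:", float(c.min()), "to", float(c.max()))
```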
Procedia PDF Downloads 424
18943 Assessing Significance of Correlation with Binomial Distribution
Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar
Abstract:
Present-day high-throughput genomic technologies, NGS/microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering and pattern identification. However, the presence of outliers and the violation of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between the genes and lead to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes as uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality of the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we would not claim the superiority of the method over other existing correlation methods, but our method could be another way of calculating correlation in addition to the existing methods. The method uses the binomial distribution, which has not been used for this purpose before, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which are noisy and pierced by outliers, to see if our method can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that the method could also be generalized to measure the association of more than two variables, which has proven difficult with the existing methods.
Keywords: binomial distribution, correlation, microarray, outliers, transcriptome
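A minimal sketch of the proposed assessment is given below: two genes are dichotomized (here at the median, an illustrative assumption), the number of samples with matching outcomes Ns is compared with the expected number Es under independence, and a two-sided binomial p-value is computed from an exact tail sum.

```python
# Sketch: binomial test of association between two genes' dichotomized profiles.
# Dichotomization at the median is an illustrative assumption.
import numpy as np
from math import comb

def binomial_two_sided_p(k, n, p):
    """Exact two-sided p-value: sum of all outcome probabilities <= P(X = k)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

def association(gene_a, gene_b):
    a = (gene_a > np.median(gene_a)).astype(int)      # Bernoulli outcomes
    b = (gene_b > np.median(gene_b)).astype(int)
    n = len(a)
    ns = int(np.sum(a == b))                          # samples with the same outcome
    p_match = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    es = n * p_match                                  # expected matches if uncorrelated
    return ns, es, binomial_two_sided_p(ns, n, p_match)

rng = np.random.default_rng(2)
g1 = rng.normal(size=60)
g2 = 0.8 * g1 + 0.6 * rng.normal(size=60)             # correlated partner gene
print(association(g1, g2))                            # (Ns, Es, p-value)
```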
Procedia PDF Downloads 415
18942 Study of the Electromagnetic Resonances of a Cavity with an Aperture Using Numerical Method and Equivalent Circuit Method
Authors: Ming-Chu Yin, Ping-An Du
Abstract:
The shielding ability of a shielding cavity is affected greatly by its resonances, which include the resonance modes and frequencies. In this paper, the equivalent circuit method and the transmission line matrix (TLM) numerical method are used to analyze the effect of aperture-cavity coupling on the electromagnetic resonances of a cavity with an aperture. Both theoretical and numerical results show that the resonance modes of a shielding cavity with an aperture can be considered as the combination of the inherent resonance modes of the cavity and the aperture, with the resonance frequencies shifted, and that the reason for this shift is aperture-cavity coupling. Because the aperture sizes are important parameters for aperture-cavity coupling, the variation rules of the electromagnetic resonances of a shielding cavity with its aperture sizes are given, which will be useful for the design of shielding cavities.
Keywords: aperture-cavity coupling, equivalent circuit method, resonances, shielding equipment
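For reference, the inherent resonance frequencies of an empty rectangular cavity, before the aperture-induced shift discussed above, follow the closed-form expression f = (c/2)·sqrt((m/a)² + (n/b)² + (p/d)²); a short sketch with assumed cavity dimensions is given below.

```python
# Inherent resonance frequencies of an empty rectangular cavity (no aperture shift).
# Cavity dimensions are illustrative assumptions.
from math import sqrt
from itertools import product

c0 = 299792458.0                 # speed of light, m/s
a, b, d = 0.30, 0.20, 0.15       # cavity dimensions in metres (assumed)

modes = []
for m, n, p in product(range(0, 4), repeat=3):
    if (m, n, p).count(0) >= 2:  # at least two indices must be nonzero
        continue
    f = 0.5 * c0 * sqrt((m / a)**2 + (n / b)**2 + (p / d)**2)
    modes.append((f / 1e9, (m, n, p)))

for f_ghz, mnp in sorted(modes)[:5]:
    print(f"mode {mnp}: {f_ghz:.3f} GHz")
```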
Procedia PDF Downloads 444
18941 Stability of Solutions of Semidiscrete Stochastic Systems
Authors: Ramazan Kadiev, Arkadi Ponossov
Abstract:
Semidiscrete systems contain both continuous and discrete components. This means that the dynamics are mostly continuous, but at certain instants they are exposed to abrupt influences. Such systems naturally appear in applications, for example, in biological and ecological models as well as in control theory. Therefore, the study of semidiscrete systems has recently attracted the attention of many specialists. Stochastic effects are an important part of any realistic approach to modeling. For example, stochasticity arises in population dynamics, demographic and ecological models due to changes in time of factors external to the system that affect the survival of the population. In control theory, random coefficients can simulate inaccuracies in measurements. It will be shown in the presentation how to incorporate such effects into semidiscrete systems. Stability analysis is an essential part of modeling real-world problems. In the presentation, it will be explained how sufficient conditions for the moment stability of solutions, in terms of the coefficients, for linear semidiscrete stochastic equations can be derived using a non-Lyapunov technique.
Keywords: abrupt changes, exponential stability, regularization, stochastic noises
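A simple numerical illustration of a semidiscrete stochastic system is sketched below: a scalar linear Itô equation is simulated with the Euler-Maruyama scheme, multiplicative jumps are applied at fixed discrete instants, and the second moment is estimated over many paths; the coefficients, jump size and jump times are illustrative assumptions rather than the stability conditions derived in the presentation.

```python
# Euler-Maruyama simulation of a scalar linear SDE with impulses at integer times,
# and a Monte Carlo estimate of the second moment. Coefficients are illustrative.
import numpy as np

a, sigma, b = -1.0, 0.5, 1.2      # drift, noise intensity, jump factor
dt, T, n_paths = 1e-3, 5.0, 20000
n_steps = int(T / dt)
jump_every = int(1.0 / dt)        # impulse applied at t = 1, 2, 3, ...

rng = np.random.default_rng(3)
x = np.ones(n_paths)
for k in range(1, n_steps + 1):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + a * x * dt + sigma * x * dW       # Euler-Maruyama step
    if k % jump_every == 0:
        x = b * x                             # abrupt (discrete) influence

second_moment = float(np.mean(x**2))
print(f"E[x(T)^2] ~ {second_moment:.4f}")     # decays toward 0 => mean-square stable here
```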
Procedia PDF Downloads 187