Search results for: Normal form theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3938

3878 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

In 1-D thermodynamic models, the complex oblique shock phenomenon is commonly simplified as a normal shock at the constant-area section to represent a sharp pressure increase and velocity decrease. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, yet most researchers adopt an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two ejector experimental test benches, a constant area-mixing (CAM) ejector and a constant pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies of the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for the two ejector types. These reference data are then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. Good agreement was found between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and that assuming the shock at the inlet of the constant-area duct yields the more accurate length. Taking previous 1-D models into account, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions.
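
As background for the 1-D modelling discussed above, the sketch below evaluates the ideal-gas normal shock relations that such models typically apply at the assumed shock location; it assumes a calorically perfect gas with ratio of specific heats gamma (the real working fluids R245fa and R141b would need real-gas properties), and the function name is illustrative, not taken from the paper.

```python
import math

def normal_shock(M1: float, gamma: float = 1.4):
    """Ideal-gas normal shock relations for upstream Mach number M1 > 1.

    Returns the downstream Mach number, the static pressure ratio p2/p1
    and the density ratio rho2/rho1 (the sharp pressure rise and velocity
    drop that a 1-D ejector model places at the assumed shock location).
    """
    if M1 <= 1.0:
        raise ValueError("A normal shock requires supersonic upstream flow (M1 > 1).")
    M2 = math.sqrt((1.0 + 0.5 * (gamma - 1.0) * M1**2)
                   / (gamma * M1**2 - 0.5 * (gamma - 1.0)))
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    return M2, p_ratio, rho_ratio

# Example: shock assumed at the inlet of the constant-area duct with M1 = 2
M2, p21, r21 = normal_shock(2.0)
print(f"M2 = {M2:.3f}, p2/p1 = {p21:.3f}, rho2/rho1 = {r21:.3f}")
```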

3877 Granulation using Clustering and Rough Set Theory and its Tree Representation

Authors: Girish Kumar Singh, Sonajharia Minz

Abstract:

Granular computing deals with representing information in the form of aggregates and with related methods for their transformation and analysis in problem solving. This paper presents a granulation scheme based on clustering and Rough Set Theory, with a focus on structured conceptualization of information. Experiments with the proposed method on four labeled data sets show good results on the classification problem. The proposed granulation technique is semi-supervised, incorporating both global and local information into the granulation. A tree structure is also proposed to represent the results of the attribute-oriented granulation.

Keywords: Granular computing, clustering, rough sets, data mining.
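
A rough illustration (not the authors' implementation) of the Rough Set Theory step used in such a granulation scheme: the sketch below computes the lower and upper approximations of a concept (e.g. one cluster) with respect to the indiscernibility relation induced by a chosen attribute subset. The toy decision table and attribute names are hypothetical.

```python
from collections import defaultdict

def indiscernibility_classes(rows, attrs):
    """Group object indices that are indistinguishable on the chosen attributes."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].add(i)
    return list(classes.values())

def approximations(rows, attrs, concept):
    """Rough-set lower and upper approximations of a set of object indices."""
    lower, upper = set(), set()
    for block in indiscernibility_classes(rows, attrs):
        if block <= concept:      # block lies entirely inside the concept
            lower |= block
        if block & concept:       # block overlaps the concept
            upper |= block
    return lower, upper

# Hypothetical toy data: condition attributes 'a' and 'b'
data = [{"a": 1, "b": 0}, {"a": 1, "b": 0}, {"a": 0, "b": 1},
        {"a": 0, "b": 1}, {"a": 1, "b": 1}]
concept = {0, 1, 4}               # objects assigned to one cluster
low, up = approximations(data, ["a", "b"], concept)
print("lower:", sorted(low), "upper:", sorted(up))
```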

3876 Instability Analysis of Laminated Composite Beams Subjected to Parametric Axial Load

Authors: Alireza Fereidooni, Kamran Behdinan, Zouheir Fawaz

Abstract:

The integral form of the equations of motion of composite beams subjected to time-varying loads is discretized using a developed finite element model. The model consists of a straight five-node beam element with twenty-two degrees of freedom. The stability of the beams is studied by solving the characteristic equations of the system in matrix form. The principle of virtual work and first-order shear deformation theory are employed to analyze beams with large deformations and small strains. The regions of dynamic instability of the beam are determined by solving the resulting Mathieu-type differential equations. The effects of nonconservative loads, shear stiffness, and damping parameters on the stability and response of the beams are examined. Several numerical calculations are presented to compare the results with data reported by other researchers.

Keywords: Finite element beam model, composite beams, stability analysis.

3875 The Influence of Mobile Phone Forms on User Perception

Authors: The Jaya Suteja, Stephany Tedjohartoko

Abstract:

Not all types of mobile phone are successful in entering the market, because some types are perceived negatively by users. It is therefore important to understand the influence of a mobile phone's characteristics on local user perception. This research investigates the influence of QWERTY mobile phone forms on the perception of Indonesian users. First, several alternative mobile phone forms are developed based on a certain number of existing mobile phone models. In the second stage, word pairs are chosen as design attributes of the mobile phone to represent user perception. In the final stage, a survey is conducted to investigate the influence of the developed form alternatives on user perception. Based on the research, users perceive the form with a curved top and straight bottom, and the form with a slider and antenna, as the most negative. Meanwhile, the form with curved top and bottom, and the form without slider and antenna, are perceived as the most positive.

Keywords: Influence, mobile phone, form, user perception.

3874 Principal Component Updates via Matrix Perturbations

Authors: Aiman Elragig, Hanan Dreiwi, Dung Ly, Idriss Elmabrook

Abstract:

This paper highlights a new approach to online principal component analysis (OPCA). Given a data matrix X ∈ R^{m×n}, we characterise the online updates of its covariance as a matrix perturbation problem. Up to the principal components, it turns out that online updates of the batch PCA can be captured by a symmetric matrix perturbation of the batch covariance matrix. We show that as n → n₀ ≫ 1, the batch covariance and its update become almost identical. Finally, we utilize our new setup of online updates to find a bound on the angle distance between the principal components of X and those of its update.

Keywords: Online data updates, covariance matrix, online principal component analysis (OPCA), matrix perturbation.
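
A minimal numerical sketch of the idea, assuming the simplest case of a single new observation (one extra column) appended to the data matrix: the covariance update is formed as a symmetric perturbation of the batch covariance, and the "angle distance" is taken as the principal angle between the leading eigenvectors. This is an illustration of the setting, not the authors' algorithm or bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def covariance(X):
    """Sample covariance of the columns of an m x n data matrix X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    return Xc @ Xc.T / (X.shape[1] - 1)

def leading_pc(C):
    """Leading principal component (unit eigenvector for the largest eigenvalue)."""
    _, V = np.linalg.eigh(C)
    return V[:, -1]

m, n0 = 5, 2000
X = rng.normal(size=(m, n0))
x_new = rng.normal(size=(m, 1))            # one incoming online observation

C_batch = covariance(X)
C_updated = covariance(np.hstack([X, x_new]))
P = C_updated - C_batch                    # symmetric perturbation of the batch covariance

v0, v1 = leading_pc(C_batch), leading_pc(C_updated)
angle = np.degrees(np.arccos(np.clip(abs(v0 @ v1), 0.0, 1.0)))
print(f"spectral norm of perturbation = {np.linalg.norm(P, 2):.4e}")
print(f"angle between leading PCs     = {angle:.4f} deg")
```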

3873 PZ: A Z-based Formalism for Modeling Probabilistic Behavior

Authors: Hassan Haghighi

Abstract:

Probabilistic techniques in computer programs are becoming more and more widely used. Therefore, there is great interest in the formal specification, verification, and development of probabilistic programs. In our work-in-progress project, we are attempting to build a constructive framework for developing probabilistic programs formally. The main contribution of this paper is to introduce an intermediate artifact of our work, a Z-based formalism called PZ, by which one can build set-theoretical models of probabilistic programs. We propose to use a constructive set theory, called CZ set theory, to interpret the specifications written in PZ. Since CZ has an interpretation in Martin-Löf's theory of types, this idea enables us to derive probabilistic programs from correctness proofs of their PZ specifications.

Keywords: formal specification, formal program development, probabilistic programs, CZ set theory, type theory.

3872 Value from Environmental and Cultural Perspectives or Two Sides of the Same Coin

Authors: Vilém Pařil, Dominika Tóthová

Abstract:

This paper discusses the value theory in cultural heritage and the value theory in environmental economics. Two economic views of value theory are compared: one within the field of cultural heritage maintenance and one within the field of the environment. The main aims are to find the common features hidden in these two differently structured theories under a layer of differently defined terms, as well as the genuinely differing features of the two approaches; to clear up the confusion that stems from different terminology, since these terms in fact capture the same aspects of reality; and to show the possible inspiration these two perspectives can offer one another. Another aim is to present these two value systems in one value framework. First, important moments of the value theory from the economic perspective are presented, leading to the marginal revolution of (not only) the Austrian School. Then the theory of value within cultural heritage and environmental economics is explored. Finally, the individual approaches are compared and their potential mutual inspiration is examined.

Keywords: Cultural heritage, environmental economics, existence value, value theory.

3871 The Application of the Queuing Theory in the Traffic Flow of Intersection

Authors: Shuguo Yang, Xiaoyan Yang

Abstract:

Research on intersection traffic flow is of practical significance because intersection capacity directly affects the efficiency of the highway network. This paper analyzes the traffic conditions at an intersection in a certain city using queuing theory and statistical experiments, sets up a corresponding mathematical model and compares it with the observed values. The results show that queuing theory is applicable to the study of intersection traffic flow and can provide a reference for other similar designs.

Keywords: Intersection, Queuing theory, Statistical experiment, System metrics.
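
As an illustration of the kind of calculation involved, the sketch below evaluates the standard steady-state metrics of an M/M/1 queue (Poisson arrivals, exponential service, one channel) for a single intersection approach; the arrival and service rates are hypothetical, and the paper's actual queuing model may differ.

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Steady-state M/M/1 metrics for one service channel.

    arrival_rate -- mean arrivals per unit time (lambda)
    service_rate -- mean services per unit time (mu), with mu > lambda
    """
    if arrival_rate >= service_rate:
        raise ValueError("The queue is unstable unless lambda < mu.")
    rho = arrival_rate / service_rate            # utilization
    return {
        "utilization": rho,
        "mean_in_system": rho / (1.0 - rho),     # L
        "mean_in_queue": rho**2 / (1.0 - rho),   # Lq
        "mean_time_in_system": 1.0 / (service_rate - arrival_rate),   # W
        "mean_waiting_time": rho / (service_rate - arrival_rate),     # Wq
    }

# Hypothetical approach lane: 600 veh/h arriving, 720 veh/h service capacity
print(mm1_metrics(600 / 3600.0, 720 / 3600.0))
```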

3870 Learning Paradigms for Educating a New Generation of Computer Science Students

Authors: J. M. Breed, E. Taylor

Abstract:

In this paper challenges associated with a new generation of Computer Science students are examined. The mode of education in tertiary institutes has progressed slowly while the needs of students have changed rapidly in an increasingly technological world. The major learning paradigms and the learning theories within these paradigms are studied to find a suitable strategy for educating modern students. These paradigms include Behaviourism, Constructivism, Humanism and Cognitivism. Social Learning theory and Elaboration theory are two theories that are examined further, and a survey is conducted to determine how these strategies will be received by students. The results and findings are evaluated and indicate that students are fairly receptive to a method that incorporates both Social Learning theory and Elaboration theory, but that some aspects of all paradigms need to be implemented to create a balanced and effective strategy with technology as a foundation.

Keywords: Computer Science, Education, Elaboration Theory, Learning Paradigms, Social Learning Theory.

3869 I-Vague Normal Groups

Authors: Zelalem Teshome Wale

Abstract:

The notions of I-vague normal groups with membership and non-membership functions taking values in an involutary dually residuated lattice ordered semigroup are introduced. These generalize both the notions with truth values in a Boolean algebra and the usual vague sets whose membership and non-membership functions take values in the unit interval [0, 1]. Various operations and properties are established.

Keywords: Involutary dually residuated lattice ordered semigroup, I-vague set, I-vague group and I-vague normal group.

3868 Power and Delay Optimized Graph Representation for Combinational Logic Circuits

Authors: Padmanabhan Balasubramanian, Karthik Anantha

Abstract:

Structural representation and technology mapping of a Boolean function is an important problem in the design of non-regenerative digital logic circuits (also called combinational logic circuits). Library-aware function manipulation offers a solution to this problem. Compact multi-level representations of binary networks based on simple circuit structures, such as AND-Inverter Graphs (AIG) [1] [5], NAND Graphs, OR-Inverter Graphs (OIG), AND-OR Graphs (AOG), AND-OR-Inverter Graphs (AOIG), AND-XOR-Inverter Graphs and Reduced Boolean Circuits [8], do exist in the literature. In this work, we discuss a novel and efficient graph realization for combinational logic circuits, represented using a NAND-NOR-Inverter Graph (NNIG), which is composed of only two-input NAND (NAND2), NOR (NOR2) and inverter (INV) cells. The networks are constructed on the basis of irredundant disjunctive and conjunctive normal forms, after factoring, comprising terms with minimum support. Construction of an NNIG for a non-regenerative function in normal form is straightforward, whereas for the complementary phase it is developed by considering a virtual instance of the function. However, the choice of the best NNIG for a given function is based upon the literal count, cell count and DAG node count of the implementation at the technology-independent stage. In case of a tie, the final decision is made after extracting the physical design parameters. We have considered the AIG representation for the reduced disjunctive normal form and the best of OIG/AOG/AOIG for the minimized conjunctive normal forms. This is necessitated by the nature of certain functions, such as Achilles' heel functions. NNIGs are found to exhibit a 3.97% lower node count compared to AIGs and OIG/AOG/AOIGs, and consume 23.74% and 10.79% fewer library cells than AIGs and OIG/AOG/AOIGs, respectively, for the various samples considered. We compare the power efficiency and delay improvement achieved by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for various case studies. In comparison with functionally equivalent, irredundant and compact AIGs, NNIGs report mean savings in power and delay of 43.71% and 25.85% respectively, after technology mapping with a 0.35 micron TSMC CMOS process. In comparison with OIG/AOG/AOIGs, NNIGs demonstrate average savings in power and delay of 47.51% and 24.83%. With respect to the device count needed for implementation in static CMOS logic style, NNIGs use 37.85% and 33.95% fewer transistors than their AIG and OIG/AOG/AOIG counterparts.

Keywords: AND-Inverter Graph, OR-Inverter Graph, Directed Acyclic Graph, Low power design, Delay optimization.

3867 Multivariable Control of Smart Timoshenko Beam Structures Using POF Technique

Authors: T.C. Manjunath, B. Bandyopadhyay

Abstract:

Active Vibration Control (AVC) is an important problem in structures. One way to tackle this problem is to make the structure smart, adaptive and self-controlling. The objective of active vibration control is to reduce the vibration of a system by automatic modification of the system's structural response. This paper features the modeling and design of a Periodic Output Feedback (POF) control technique for the active vibration control of a flexible Timoshenko cantilever beam for a multivariable case with 2 inputs and 2 outputs, retaining the first 2 dominant vibratory modes and using the smart structure concept. The entire structure is modeled in state space form using piezoelectric theory, Timoshenko beam theory, the Finite Element Method (FEM) and state space techniques. Simulations are performed in MATLAB. The effect of placing the sensor/actuator at 2 finite element locations along the length of the beam is observed. The open loop responses, closed loop responses and the tip displacements with and without the controller are obtained, and the performance of the smart system is evaluated for active vibration control.

Keywords: Smart structure, Timoshenko theory, Euler-Bernoulli theory, Periodic output feedback control, Finite Element Method, State space model, Vibration control, Multivariable system, Linear Matrix Inequality

3866 Modeling of Normal and Atherosclerotic Blood Vessels using Finite Element Methods and Artificial Neural Networks

Authors: K. Kamalanand, S. Srinivasan

Abstract:

Analysis of blood vessel mechanics in normal and diseased conditions is essential for disease research, medical device design and treatment planning. In this work, 3D finite element models of a normal vessel and an atherosclerotic vessel with 50% plaque deposition were developed. The developed models were meshed using a finite number of tetrahedral elements and were simulated using actual blood pressure signals. Based on the transient analysis performed on the developed models, parameters such as total displacement, strain energy density and entropy per unit volume were obtained. Further, the obtained parameters were used to develop artificial neural network models for analyzing normal and atherosclerotic blood vessels. In this paper, the objectives of the study, the methodology and the significant observations are presented.

Keywords: Blood vessel, atherosclerosis, finite element model, artificial neural networks

3865 Comparative Study of Decision Trees and Rough Sets Theory as Knowledge Extraction Tools for Design and Control of Industrial Processes

Authors: Marcin Perzyk, Artur Soroczynski

Abstract:

General requirements for knowledge representation in the form of logic rules, applicable to the design and control of industrial processes, are formulated. The characteristic behavior of decision trees (DTs) and rough sets theory (RST) in extracting rules from recorded data is discussed and illustrated with simple examples. The significance of the models' drawbacks was evaluated using simulated and industrial data sets. It is concluded that the performance of DTs may be considerably poorer than that of RST in several important aspects, particularly when not only a characterization of a problem is required but also detailed and precise rules are needed for the actual, specific problems to be solved.

Keywords: Knowledge extraction, decision trees, rough sets theory, industrial processes.

3864 Exploring the Nature and Meaning of Theory in the Field of Neuroeducation Studies

Authors: Ali Nouri

Abstract:

Neuroeducation is one of the most exciting research fields and is continually evolving. However, there is a need to develop its theoretical bases in connection to practice. The present paper is a starting attempt in this regard, aiming to provide a space from which to think about neuroeducational theory and to invite more investigation in this area. Accordingly, a comprehensive theory of neuroeducation could be defined as a grouping or clustering of concepts and propositions that describe and explain the nature of human learning, in order to provide valid interpretations and implications useful for educational practice in relation to philosophical aspects or values. While it should originate from the philosophical foundations of the field and explain its normative significance, it needs to be testable in terms of rigorous evidence in order to fundamentally advance contemporary educational policy and practice. There is thus a pragmatic need to include a course on neuroeducational theory in the curriculum of the field. In addition, there is a need to articulate and disseminate considerable discussion of the subject within professional journals and academic societies.

Keywords: Neuroeducation studies, neuroeducational theory, theory building, neuroeducation research.

3863 A Study on Behaviour of Normal Strength Concrete and High Strength Concrete Subjected to Elevated Temperatures

Authors: C. B. K. Rao, Rooban Kumar

Abstract:

Cement concrete is a complex mixture of different materials. The behaviour of concrete when subjected to elevated temperatures depends on its mix proportions and constituents. The principal effects of elevated temperatures are loss in compressive strength, loss in weight or mass, change in colour and spalling of concrete. The experimental results for normal concrete and high strength concrete of different grades subjected to elevated temperatures of 200°C, 400°C, 600°C and 800°C under different cooling regimes, viz. air cooling and water quenching, are reported in this paper.

Keywords: High strength concrete, Normal strength concrete, Elevated Temperature, Loss of mass.

3862 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same known variance are considered. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to their bias and MSE risks by the method of Monte Carlo simulation, and their performances are analysed with the help of graphs.

Keywords: Estimation after selection, Brewster-Zidek technique.
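
A minimal Monte Carlo sketch of the comparison described above: two normal populations with known common variance are simulated, the population with the smaller sample mean is selected, and the naive estimator is compared with a simple pooled alternative in terms of bias and MSE. The estimators and parameter values are illustrative only, not those studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(mu1, mu2, sigma, n, reps=100_000):
    """Bias and MSE of estimators of the mean of the selected population."""
    x1 = rng.normal(mu1, sigma, size=(reps, n)).mean(axis=1)
    x2 = rng.normal(mu2, sigma, size=(reps, n)).mean(axis=1)
    naive = np.where(x1 < x2, x1, x2)          # sample mean of the selected population
    pooled = 0.5 * (x1 + x2)                   # illustrative alternative estimator
    target = np.where(x1 < x2, mu1, mu2)       # true mean of the selected population
    for name, est in [("naive", naive), ("pooled", pooled)]:
        err = est - target
        print(f"{name:>6}: bias = {err.mean():+.4f}, MSE = {(err**2).mean():.4f}")

simulate(mu1=0.0, mu2=0.5, sigma=1.0, n=10)
```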

3861 Suggestion of Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Finite Difference Analysis, Development and Clinical Trials

Authors: Won-Pil Park, Qyoun-Jung Lee, Dae-Gon Woo, Chang-Yong Ko, Eun-Geun Kim, Dohyung Lim, Yong-Heum Lee, Tae-Min Shin, Han-Sung Kim

Abstract:

Functional gastrointestinal disorders have a detrimental impact on the quality of life of the affected population and impose a tremendous social and economic burden. There are, however, few diagnostic methods for functional gastrointestinal disorders. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when palpating the abdominal regions overlying the gastrointestinal tract. The objective of the current study is, therefore, to identify the feasibility of a diagnostic system for functional gastrointestinal disorders based on ultrasound techniques, which can quantify the characteristics above. Two-dimensional finite difference (FD) models (one normal and two rigid models) were developed to analyze the reflective characteristic (displacement) of each soft-tissue layer in response to applied ultrasound signals. The FD analysis was based on elastic ultrasound theory. Validation of the models was performed by comparing the characteristics of the ultrasonic responses predicted by the FD analysis with those determined from actual specimens for the normal and rigid conditions. Based on the results of the FD analysis, an ultrasound system for diagnosis of functional gastrointestinal disorders was developed and clinically tested on 40 human subjects with and without functional gastrointestinal disorders, assigned to Normal and Patient Groups. The FD models were favorably validated. The results of the FD analysis showed that the maximum displacement amplitude in the rigid models (0.12 and 0.16) at the interface between the fat and muscle layers was explicitly less than that in the normal model (0.29). The results from the actual specimens showed that the maximum amplitude of the ultrasonic reflective signal in the rigid models (0.2±0.1 Vp-p) at the interface between the fat and muscle layers was explicitly higher than that in the normal model (0.1±0.2 Vp-p). Clinical tests using our customized ultrasound system showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall for the Patient Group (2.6±0.3 Vp-p) were generally higher than those in the Normal Group (0.1±0.2 Vp-p). The maximum reflective signals appeared at a depth of approximately 20 mm from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These findings suggest that our customized ultrasound system using the ultrasonic reflective signal may be helpful in the diagnosis of functional gastrointestinal disorders.

Keywords: Finite Difference (FD) analysis, functional gastrointestinal disorders, gastrointestinal tract, ultrasonic responses.

3860 Speed Characteristics of Mixed Traffic Flow on Urban Arterials

Authors: Ashish Dhamaniya, Satish Chandra

Abstract:

Speed and traffic volume data are collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data are analyzed to fit statistical distributions to individual vehicle speeds and to the combined speeds of all vehicles. It is noted that individual vehicle speeds generally follow a normal distribution, but the combined speed data at a section of urban road may or may not follow a normal distribution, depending on the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper; it is the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range 0.86-1.11. The range of SSR is also validated on four-lane roads.

Keywords: Normal distribution, percentile speed, speed spread ratio, traffic volume.
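
A short sketch of how the Speed Spread Ratio defined above can be computed from spot-speed observations; the synthetic sample mixes two vehicle classes purely for illustration.

```python
import numpy as np

def speed_spread_ratio(speeds):
    """SSR = (85th - 50th percentile speed) / (50th - 15th percentile speed).

    An SSR close to 1 indicates approximately normally distributed speeds;
    the paper reports 0.86-1.11 as the range on six-lane urban roads for
    which the combined speed data follow a normal distribution.
    """
    v15, v50, v85 = np.percentile(speeds, [15, 50, 85])
    return (v85 - v50) / (v50 - v15)

# Synthetic mixed-traffic spot speeds (km/h): cars plus slower vehicles
rng = np.random.default_rng(2)
speeds = np.concatenate([rng.normal(55, 8, 400), rng.normal(35, 6, 200)])
print(f"SSR = {speed_spread_ratio(speeds):.2f}")
```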

3859 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen

Abstract:

Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue estimations may be improved for the same computational efforts. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve fitting techniques to the fatigue damage distribution have been used depending on the sea-state dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed on several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.

Keywords: Fatigue damage, FORM, monopile, Monte Carlo simulation, reliability, wind turbine.
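
For orientation, the sketch below shows the crude Monte Carlo baseline that the FORM-assisted method is meant to improve upon: Palmgren-Miner damage sums computed from sampled stress ranges under a bi-linear S-N curve. All S-N parameters and the stress-range distribution are hypothetical, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def cycles_to_failure(s, m1=3.0, m2=5.0, s_knee=80.0, a1=1e12):
    """Bi-linear S-N curve N = a / s^m; continuity at the knee fixes a2."""
    a2 = a1 * s_knee ** (m2 - m1)
    return np.where(s >= s_knee, a1 / s**m1, a2 / s**m2)

def miner_damage(stress_ranges):
    """Palmgren-Miner damage sum for one simulated set of stress ranges."""
    return np.sum(1.0 / cycles_to_failure(stress_ranges))

# Crude Monte Carlo: each sample is one simulated response history summarised
# by hypothetical Weibull-distributed stress ranges (MPa).
damages = np.array([miner_damage(40.0 * rng.weibull(1.2, 10_000))
                    for _ in range(500)])
print(f"mean damage = {damages.mean():.3e}, "
      f"coefficient of variation = {damages.std() / damages.mean():.2%}")
```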

3858 Compressive Strength Development of Normal Concrete and Self-Consolidating Concrete Incorporated with GGBS

Authors: M. Nili, S. Tavasoli, A. R. Yazdandoost

Abstract:

In this paper, an experimental investigation of the effect of Isfahan Ground Granulated Blast Furnace Slag (GGBS) on the compressive strength development of self-consolidating concrete (SCC) and normal concrete (NC) was performed. For this purpose, Portland cement type I was replaced with GGBS in various proportions. For the NC and SCC mixes, 10×10×10 cm cube specimens were tested at 7, 28 and 91 days. In this research the water-to-cement ratio was 0.44, the cement content was 418 kg/m³, and the Type III superplasticizer (SP) used in the SCC was based on polycarboxylic acid. The results of the experiments show that increasing the GGBS percentage in both types of concrete reduces the compressive strength at early ages.

Keywords: Compressive strength, GGBS, normal concrete, self-consolidating concrete.

3857 Designing an Irregular Tensegrity as a Monumental Object

Authors: Buntara Sthenly Gan

Abstract:

A novel and versatile numerical technique for solving the self-stress equilibrium state is adopted herein as a form-finding procedure for an irregular tensegrity structure. The numerical form-finding scheme uses only the connectivity matrix and a prototype tension coefficient vector as the initial guess. No information on symmetrical geometry or other predefined initial structural conditions is necessary to obtain a solution in the form-finding process. An eight-node initial-condition example is presented to demonstrate the efficiency and robustness of the proposed method in the form-finding of an irregular tensegrity structure. Based on the form-finding of this eight-node irregular tensegrity structure, a monumental object is designed considering real-world loading conditions such as self-weight, wind and earthquake loadings.

Keywords: Tensegrity, Form-finding, Design, Irregular, Self-stress, Force density method.
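
A compact sketch of the force-density step at the heart of such a form-finding procedure: given only a connectivity matrix and a vector of tension coefficients (force densities), candidate self-equilibrated nodal coordinates are recovered from the null space of the force-density matrix, which must have dimension at least d+1 for a d-dimensional configuration. The four-node example and coefficient values below are hypothetical, not the eight-node case of the paper.

```python
import numpy as np
from scipy.linalg import null_space

def force_density_matrix(C, q):
    """D = C^T diag(q) C for a (members x nodes) connectivity matrix C
    with +1/-1 entries per member and force-density vector q."""
    C = np.asarray(C, dtype=float)
    return C.T @ np.diag(q) @ C

# Hypothetical 2-D example: 4 nodes, 4 cables around the perimeter and
# 2 struts on the diagonals (negative force density = compression).
members = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
C = np.zeros((len(members), 4))
for k, (i, j) in enumerate(members):
    C[k, i], C[k, j] = 1.0, -1.0
q = np.array([1.0, 1.0, 1.0, 1.0, -1.0, -1.0])

D = force_density_matrix(C, q)
basis = null_space(D)      # columns span the admissible coordinate vectors
print("null-space dimension:", basis.shape[1])          # needs >= 3 for 2-D
print("equilibrium residual:", np.linalg.norm(D @ basis))
```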

3856 Numerical Investigation of Non Fourier Heat Conduction in a Semi-infinite Body due to a Moving Concentrated Heat Source Composed with Radiational Boundary Condition

Authors: M. Akbari, S. Sadodin

Abstract:

In this paper, the melting of a semi-infinite body as a result of a moving laser beam has been studied. Because the Fourier heat transfer equation does not have sufficient accuracy at short times and large dimensions, a non-Fourier form of the heat transfer equation has been used. Because the beam is moving in the x direction, the temperature distribution and the melt pool shape are not symmetric; as a result, the problem is a transient three-dimensional problem. Thermophysical properties such as the heat conductivity coefficient, density and heat capacity are treated as functions of temperature and material state. The enthalpy technique, used for the solution of phase change problems, has been applied in an explicit finite volume form to the hyperbolic heat transfer equation. This technique has been used to calculate the transient temperature distribution in the semi-infinite body and the growth rate of the melt pool. In order to validate the numerical results, comparisons were made with experimental data. Finally, the results of this paper were compared with those of a similar problem that used the Fourier theory. The comparison shows the influence of the infinite speed of heat propagation in the Fourier theory on the temperature distribution and the melt pool size.

Keywords: Non-Fourier, Enthalpy technique, Melt pool, Radiational boundary condition

3855 Shear Strength Characteristics of Sand-Particulate Rubber Mixture

Authors: Firas Daghistani, Hossam Abuel Naga

Abstract:

Waste tyres are an ongoing global problem with a negative effect on the environment. They are discarded in stockpiles, where they harm the environment in many ways. Finding applications for these materials can help reduce this global problem. One such application is recycling these waste materials and using them in geotechnical engineering. Recycled waste tyre particulates can be mixed with sand to form a lightweight material with varying shear strength characteristics. This research investigates whether the inclusion of particulate rubber in sand increases or decreases the shear strength characteristics of the mixture. For the experiment, a series of direct shear tests was performed on a poorly graded sand with a mean particle size of 0.32 mm mixed with recycled, poorly graded particulate rubber with a mean particle size of 0.51 mm. The shear tests were performed at four normal stresses, 30, 55, 105 and 200 kPa, at a shear rate of 1 mm/minute. Different percentages of particulate rubber content were used in the mixture, i.e., 10%, 20%, 30% and 50% of the sand's dry weight, at three density states, namely loose, slightly dense, and dense. The size ratio of the mixture, defined as the mean particle size of the particulate rubber divided by the mean particle size of the sand, was 1.59. The results identified multiple parameters that influence the shear strength of the mixture: normal stress, particulate rubber content, mixture gradation, mixture size ratio, and the mixture's density. The inclusion of particulate rubber in sand decreased the internal friction angle and increased the apparent cohesion. Overall, the inclusion of particulate rubber did not have a significant influence on the shear strength of the mixture. For all the dense states at the low normal stresses of 30 and 55 kPa, the inclusion of particulate rubber produced a slight increase in shear strength, with a peak at 20-30% rubber content of the sand's dry weight. On the other hand, at the high normal stresses of 105 and 200 kPa, there was a slight decrease in shear strength.

Keywords: Direct shear, granular material, sand-rubber mixture, shear strength, waste material.
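
As an illustration of how the reported internal friction angle and apparent cohesion are typically extracted from such results, the sketch below fits a Mohr-Coulomb envelope, tau = c + sigma_n * tan(phi), to peak shear stresses at the four normal stresses used in the study; the peak shear values themselves are made up for the example.

```python
import numpy as np

def mohr_coulomb_fit(normal_stress, peak_shear):
    """Least-squares fit of tau = c + sigma_n * tan(phi) to direct shear data.

    Returns the apparent cohesion c (kPa) and friction angle phi (degrees)."""
    slope, intercept = np.polyfit(normal_stress, peak_shear, 1)
    return intercept, np.degrees(np.arctan(slope))

sigma_n = np.array([30.0, 55.0, 105.0, 200.0])   # normal stresses used (kPa)
tau_peak = np.array([28.0, 45.0, 78.0, 142.0])   # hypothetical peak shear (kPa)

c, phi = mohr_coulomb_fit(sigma_n, tau_peak)
print(f"apparent cohesion = {c:.1f} kPa, friction angle = {phi:.1f} deg")
```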

3854 Legal Theories Underpinning Access to Justice for Victims of Sexual Violence in Refugee Camps in Africa

Authors: O. E. Eberechi, G. P. Stevens

Abstract:

Legal theory has been referred to as the explanation of why things do or do not happen. It also describes situations and why they ensue. It provides a normative framework by which things are regulated and a foundation for the establishment of legal mechanisms/institutions that can bring about a desired change in a society. Furthermore, it offers recommendations for resolving practical problems, describes what the law is and what the law ought to be, and defines the legal landscape generally. Some legal theories provide a universal standard, e.g. human rights, while others are capable of organizing and streamlining collective use and, by extension, bringing order to society. Legal theory is used to explain how the world works and how it does not work. This paper will argue for the application of the principles of legal theory in the achievement of access to justice for female victims of sexual violence in refugee camps in Africa, through an analysis of the legal theories underpinning access to justice for these women. It is a known fact that female refugees in camps in Africa often experience some form of sexual violation. The perpetrators of these incidents may never be apprehended, prosecuted, convicted or sentenced. Where prosecution does occur, the perpetrators may be acquitted as a result of poor investigation, inept prosecution or a lack of evidence, or the case may be dismissed owing to tardiness on the part of the prosecutor, which accounts for the culture of impunity in refugee camps. In other words, victims do not have access to the justice that could ameliorate their plight. There is, thus, a need for a legal framework that will facilitate access to justice for these victims. This paper will start with an introduction, followed by a definition of legal theory, its functions and its application in law. Secondly, it will provide a brief explanation of the problems faced by female refugees who are victims of sexual violence in refugee camps in Africa. Thirdly, it will analyse theories that help in understanding the precarious situation of female refugees, why they are violated, the need for access to justice for these victims, and the usefulness of the principles of legal theory in resolving access to justice for these victims.

Keywords: Access to justice, underpinning legal theory, refugee, sexual violence.

3853 Study on Robot Trajectory Planning by Robot End-Effector Using Dual Curvature Theory of the Ruled Surface

Authors: Y. S. Oh, P. Abhishesh, B. S. Ryuh

Abstract:

This paper presents a method of trajectory planning for a robot end-effector that accounts for the more accurate and smooth differential geometry of the ruled surface generated by the tool line fixed to the end-effector, based on the curvature theory of ruled surfaces and the dual curvature theory, and focuses on the underlying relation uniting them in order to enhance the efficiency of trajectory planning. Robot motion can be represented by the motion properties of the ruled surface generated by the trajectory of the Tool Center Point (TCP). The linear and angular properties of the six degree-of-freedom motion of the end-effector are computed using explicit formulas and functions from curvature theory and dual curvature theory. This paper explains the complete dualization of the ruled surface and shows that the linear and angular motion obtained using the method of dual curvature theory is more accurate and less complex.

Keywords: Dual curvature theory, robot end effector, ruled surface, TCP, tool center point.

3852 A Computer Model of Quantum Field Theory

Authors: Hans H. Diel

Abstract:

This paper describes a computer model of Quantum Field Theory (QFT), referred to in this paper as QTModel. After the initial configuration for a QFT process (e.g. scattering) is specified, the model generates the possible applicable processes in terms of Feynman diagrams and the equations for the scattering matrix, and evaluates probability amplitudes for the scattering matrix and cross sections. The computations of probability amplitudes are performed numerically. The equations generated by QTModel are provided for demonstration purposes only; they are not directly used as the basis for the computations of probability amplitudes. The computer model supports two modes for the computation of the probability amplitudes: (1) computation according to standard QFT, and (2) computation according to a proposed functional interpretation of quantum theory.

Keywords: Computational Modeling, Simulation of Quantum Theory, Quantum Field Theory, Quantum Electrodynamics

3851 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams

Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding

Abstract:

A size-dependent Euler–Bernoulli beam model, which accounts for the nonlocal stress field, strain gradient field and higher-order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. The analytical solutions based on different continuum theories are compared. The effect of the higher-order inertia terms is extremely significant in the high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, while for the nonlocal strain gradient theory the solutions diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain and cannot be neglected when the material strain length scale parameter is comparable with the beam thickness. The influence of each of the three size-effect parameters on the natural frequencies is investigated. The natural frequencies increase with an increasing material strain gradient length scale parameter or with a decreasing velocity gradient length scale parameter and nonlocal parameter.

Keywords: Euler-Bernoulli Beams, free vibration, higher order inertia, nonlocal strain gradient theory, velocity gradient.

3850 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system that can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but neither is it a normal bean. A peaberry forms as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise, the flavors of the beans will be mixed and the taste degraded. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. A peaberry has a different size and shape even though it has the same weight as a normal bean, and it roasts more slowly than other beans. Sorting by size or weight alone therefore does not provide a good option for selecting peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists, because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As the first step, we applied Deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) as machine learning techniques to discriminate between peaberries and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained in this work on a high-performance CPU and GPU will simply be installed on an inexpensive, computationally limited Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.
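
A schematic sketch (not the authors' architecture or data pipeline) of the two classifiers compared above: a small convolutional network and an SVM on flattened grey-scale bean images, assuming the images have already been cropped and resized to 64x64; the placeholder data, layer sizes and hyper-parameters are illustrative only.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Placeholder arrays: replace with real bean images and labels
# (0 = normal bean, 1 = peaberry).
rng = np.random.default_rng(4)
X = rng.random((200, 64, 64, 1), dtype=np.float32)
y = rng.integers(0, 2, size=200)

# Small CNN (illustrative architecture)
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # peaberry probability
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X, y, epochs=2, batch_size=32, verbose=0)

# SVM baseline on flattened pixel vectors
svm = SVC(kernel="rbf", C=1.0)
svm.fit(X.reshape(len(X), -1), y)

print("CNN training accuracy:", cnn.evaluate(X, y, verbose=0)[1])
print("SVM training accuracy:", svm.score(X.reshape(len(X), -1), y))
```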

3849 A New Brazilian Friction-Resistant Low Alloy High Strength Steel – A Life Testing Approach

Authors: D. I. De Souza, G. P. Azevedo, R. Rocha

Abstract:

In this paper we will develop a sequential life test approach applied to a modified low-alloy high-strength steel part used in highway overpasses in Brazil. We will consider two possible underlying sampling distributions: the Normal and the Inverse Weibull models. The minimum life will be considered equal to zero. We will use the two underlying models to analyze a fatigue life test situation, comparing the results obtained from both. Since a major chemical component of this low-alloy high-strength steel part has been changed, there is little information available about the possible values of the parameters of the corresponding Normal and Inverse Weibull underlying sampling distributions. To estimate the shape and scale parameters of these two sampling models we will use a maximum likelihood approach for censored failure data. We will also develop a truncation mechanism for the Inverse Weibull and Normal models, and provide rules for truncating a sequential life testing situation, making one of the two possible decisions at the moment of truncation; that is, accepting or rejecting the null hypothesis H0. An example will develop the proposed truncated sequential life testing approach for the Inverse Weibull and Normal models.

Keywords: Sequential life testing, normal and inverse Weibull models, maximum likelihood approach, truncation mechanism.
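
A minimal sketch of the maximum-likelihood step for censored failure data under the Normal sampling model mentioned above (the Inverse Weibull case is analogous, with its own log-density and survival function): observed failures contribute the log-density and right-censored units the log-survival probability. The lifetimes, censoring time and starting values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, failures, censored):
    """Censored-data negative log-likelihood for a Normal lifetime model."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = norm.logpdf(failures, mu, sigma).sum()      # exact failure times
    ll += norm.logsf(censored, mu, sigma).sum()      # still-running (censored) units
    return -ll

# Hypothetical fatigue lives (in 1e5 cycles); the test was stopped at 5.5
failures = np.array([3.1, 3.8, 4.2, 4.6, 5.0, 5.3])
censored = np.array([5.5, 5.5, 5.5])

res = minimize(neg_log_likelihood, x0=[4.0, 1.0],
               args=(failures, censored), method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```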
