Search results for: classical mechanics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1370

80 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games

Authors: Micael Sousa

Abstract:

Board games (BGs) are thriving as new designs emerge from the hobby community and reach wider audiences around the world. Although digital games attract most of the attention in game studies and serious games research, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences remain unique and irreplaceable to users, allowing innovation in new hybrid environments. The new BG designs are part of these post-digital and hybrid movements because they result from powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, defined as modern by many authors, provide innovative designs and unique game mechanics that are not yet fully explored by the main serious games (SG) approaches. Even the most established SG frameworks, which treat SGs as fun games implemented to achieve predefined goals, need further development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential. They are proving that BGs are easy to adapt and to grasp by non-expert players in experimental approaches, and can be adapted to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field lacking theoretical development and systematization of experimental practice. Using BGs as case studies promises to help develop these frameworks.
Departing from the Design, Experience, and Play (DPE) framework and considering the Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for adapting and developing modern BG design for DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done by systematizing the DPE and CDST approaches in two case studies, where two different sequences of adapted BGs were employed to establish a collaborative DT process. The two sessions took place with different participants, in different contexts, and with different game sequences for the same DT approach. The first session was held at the Faculty of Economics of the University of Coimbra, in a training session on serious games for project development. The second took place at Casa do Impacto through The Great Village Design Jam light. Both sessions had the same duration and were designed to achieve DT goals progressively, using BGs as SGs in a collaborative process. The results show that a sequence of BGs, when properly adapted to the DPET framework, can generate a viable and innovative collaborative DT process that is productive, fun, and engaging. The proposed DPET framework intends to help establish how new SG solutions could be defined for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.

Keywords: board games, design thinking, methodology, serious games

Procedia PDF Downloads 105
79 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds

Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel

Abstract:

Carbon nanotubes (CNTs) combine lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. By now, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type couplings of aryl halides toward the formation of C-N and C-O bonds. The results indicated that the stability of the catalyst was much improved and that the catalytic system was efficient and recyclable. However, CNTs have not yet been considered as the substrate itself in Ullmann-type reactions. If successful, this functionalization would open new areas of CNT chemistry, leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for forming carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation is usually conducted at temperatures as high as 200 °C, often with stoichiometric amounts of a copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g., diamines, amino acids, diols, 1,10-phenanthroline) can be applied to increase the solubility and stability of the copper catalyst and, at the same time, to allow the reaction to proceed under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of a copper salt and the appropriate chelator. Our research focuses on applying the Ullmann-type reaction to the covalent functionalization of CNTs. Firstly, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves the formation of several chemical species (ICl, Cl2, and I2Cl6), of which the dimer is the most reactive. Because the dimer is the main species present in CCl4, high reactivity and possibly high functionalization levels of CNTs can be expected. Indeed, this method introduced a notable amount of chlorine onto the MWCNT surface.
The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene, and phenol, for the formation of C-N, C-C, and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. Two multi-wall CNT (MWCNT) types were used as substrates: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker in-house MWCNTs synthesized in our laboratory by catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters of 60-70 nm and lengths of up to 300 µm. Since the classical Ullmann reaction suffers from poor yields, we investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide, and N,N-dimethylformamide) on the coupling of the substrates. Because aryl halides show the reactivity order I>Br>Cl>F, we also investigated the effect of iodine on the CNT surface on the reaction yield; in this case, iodine monochloride was used in the first step instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene to control CNT dispersion.

Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction

Procedia PDF Downloads 163
78 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing

Authors: Yohann R. J. Thomas, Sébastien Solan

Abstract:

Conventional battery technology involves assembling electrode/separator/electrode stacks by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are only used for the electrode process. These processes are suited to large-scale production and perfectly adapted to many application requirements. Nevertheless, demand is rising for easier, cost-efficient production modes and for flexible, custom-shaped, and efficient small-sized batteries. Thin-film, printable batteries are one of the key areas of printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable, flexible rechargeable accumulators. Fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting), printed sensors (autonomous sensors), or RFID (communication), on a common substrate to produce fully integrated, thin, flexible devices. To meet these specifications, a high-resolution printing process was selected: aerosol jet printing. To fit the process parameters, we worked on nanomaterial formulations for the current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed for direct implementation in the printing process, to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, then flash-sintered on the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment.
Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed. For anodes, LTO and graphite have proven to be good candidates for the fully printed battery. The electrochemical performances of these materials were evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. First, we will synthesize and formulate new specific metal-oxide-based nanomaterials. Then, a fully printed device will be produced and its electrochemical performance evaluated.

Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes

Procedia PDF Downloads 246
77 Slope Stability Assessment in Metasedimentary Deposit of an Opencast Mine: The Case of the Dikuluwe-Mashamba (DIMA) Mine in the DR Congo

Authors: Dina Kon Mushid, Sage Ngoie, Tshimbalanga Madiba, Kabutakapua Kakanda

Abstract:

Slope stability assessment is still the biggest challenge in mining activities and civil engineering structures. The slope in an opencast mine frequently cuts through multiple weak layers that lead to instability of the pit. Faults and soft layers throughout the rock increase weathering and erosion rates. It is therefore essential to investigate the stability of these complex strata. In the Dikuluwe-Mashamba (DIMA) area, the lithology of the stratum is a set of metamorphic rocks whose parent rocks are sedimentary rocks with a low degree of metamorphism. Owing to the composition and metamorphism of the parent rock, the formation varies in hardness: where the dolomitic and siliceous content is high, the rock is hard; where the argillaceous and sandy content is high, it is softer. Therefore, the same rock formation appears as alternating weak and hard layers in the vertical direction and as alternating soft and hard zones in the horizontal direction. From the structural point of view, the main structures in the mining area are the Dikuluwe dipping syncline and the Mashamba dipping anticline, and the occurrence of the rock formations varies greatly. During folding, stress concentrates in the soft layers, breaking the weak layers; at the same time, interlayer dislocation occurs. This article evaluates the stability of the metasedimentary rocks of the Dikuluwe-Mashamba (DIMA) open-pit mine using limit equilibrium and stereographic methods. Based on the statistically surveyed structural planes, stereographic projection was used to study the slope's stability and to examine the discontinuity orientation data to identify failure zones along the mine. The results revealed that the slope angle is too steep and can easily induce landslides.
The sensitivity analysis of the numerical method showed that the slope angle and groundwater significantly impact the slope safety factor; an increase in the groundwater level substantially reduces the stability of the slope. Among the factors affecting the variation of the safety factor, the bulk density of the soil is greater than that of the rock mass, the cohesion of the soil mass is smaller than that of the rock mass, and the friction angle in the rock mass is much larger than that in the soil mass. The analysis showed that the rock mass structure types are mostly scattered and fragmented, that the stratum changes considerably, and that the variation of the rock and soil mechanics parameters is significant.
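As a minimal illustration of the limit equilibrium approach used in this study, the sketch below computes a factor of safety with the Fellenius (ordinary) method of slices. The slice geometry, weights, cohesion, and friction angle are hypothetical placeholders, not DIMA mine data.

```python
import math

def fellenius_fs(slices, c, phi_deg):
    """Factor of safety by the Fellenius (ordinary) method of slices.

    slices: list of (weight_kN_per_m, base_length_m, base_angle_deg) tuples.
    c: cohesion (kPa); phi_deg: friction angle (degrees).
    FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a)).
    """
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = driving = 0.0
    for w, l, alpha_deg in slices:
        a = math.radians(alpha_deg)
        resisting += c * l + w * math.cos(a) * tan_phi  # shear strength along base
        driving += w * math.sin(a)                      # mobilising weight component
    return resisting / driving

# Illustrative 4-slice profile (weights in kN per metre run, angles in degrees)
slices = [(120, 2.2, 10), (260, 2.0, 22), (240, 2.1, 35), (110, 2.5, 48)]
fs = fellenius_fs(slices, c=25.0, phi_deg=30.0)
print(round(fs, 2))
```

Degrading the strength parameters (lower cohesion and friction angle, as for a weak layer) directly lowers the computed factor of safety, which is the mechanism the sensitivity analysis above quantifies.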

Keywords: slope stability, weak layer, safety factor, limit equilibrium method, stereography method

Procedia PDF Downloads 257
76 The Positive Effects of Top-Sharing: A Case Study

Authors: Maike Andresen, Georg Dochtmann

Abstract:

Due to political, social, and societal changes in labor organization, top-sharing, defined as job-sharing in leading positions, is becoming more important in HRM. German companies are looking for practical and economically meaningful solutions that allow them to durably increase the ratio of women in management, not only because of a recently implemented quota. Furthermore, supporting employees in achieving work-life balance is perceived as an important goal of sustainable HRM and a source of competitive advantage. Top-sharing is seen as suitable for reaching both goals. To evaluate the determinants of effective top-sharing, a case study of a newly implemented top-sharing tandem in a large German enterprise was conducted over a period of 15 months. In this company, a full leadership position was split into two 60% part-time positions held by an experienced female leader in her late career and a female colleague who took over her first leadership position (mid-career). We assumed a person-person fit, in terms of a match of the top-sharing partners' personality profiles (Big Five) and their leadership motivations, to be an important prerequisite for effective collaboration between them. We evaluated the person-person fit variables once, before the tandem started to work. Both leaders were expected to learn from each other (mentoring, competency development). On an operational level, they were supposed to lead the same employees together in an effective manner (leader-member exchange), presupposing effective cooperation between them (handing over information). To track developments over time, these processes were evaluated three times over the span of the project. Top-sharing and the underlying processes were expected to positively influence the tandem's performance, which was evaluated twice, at the beginning and at the end of the project, to assess its development over time as well.
The evaluation of personality and basic motives suggests that the two executives can form a successful top-sharing tandem. The competency evaluations (supervisor as well as self-assessment) increased over the time span. Although the top-sharing tandem worked on equal terms, they implemented classical rather than peer mentoring due to the different career ambitions of the tandem partners; opportunities were therefore not fully used. Team-member exchange scores confirmed the good cooperation between the top-sharers. Although the employees did not evaluate the leader-member exchange with the two leaders of the tandem homogeneously, the top-sharing tandem itself did not have the impression that the employees' task performance depended on which tandem partner was responsible for the task. Furthermore, top-sharing did not negatively influence the performance of either leader. In qualitative interviews with the top-sharers and their team, we found that the top-sharers could focus more easily on their tasks. The results suggest positive outcomes of top-sharing (e.g., competency improvement, learning from each other through mentoring). Top-sharing does not hamper performance; thus, further research and practical implementations are suggested. As part-time jobs are still more often a female solution for improving work-life and work-family balance, top-sharing may be a suitable way to increase the ratio of women in leadership positions as well as to sustainably improve the work-life balance of executives.

Keywords: mentoring, part-time leadership, top-sharing, work-life-balance

Procedia PDF Downloads 264
75 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics is relevant to a large range of scientific and engineering fields, from gastric health issues to the interaction mechanisms between external devices and the tissue. Determining the mechanical properties of the stomach is thus crucial, both for understanding gastric pathologies and for developing medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances in medical devices interacting with gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. In this study, uniaxial tensile tests of gastric tissue were therefore executed in order to study biomechanical properties within the same individual as well as across individuals. Using biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested with eight samples from each stomach (n = 48). The samples were cut so that they followed the dominant fiber orientations: from each stomach, four samples were longitudinally oriented and four were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25% strain, with 200 s rest periods after each step, was performed, followed by a 25% strain ramp test at three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was of the same order of variation as between samples within the same stomach.
For samples from the same stomach, the mean deviation percentage over all 20 parameters was 21% and 18% for the longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, which indicated that the 20 material parameters from each of the six stomachs came from the same distribution (P > 0.05). Direction dependency was also examined, and the maximum stress for longitudinal samples was found to be significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for a comparative study between the human and the porcine stomach.
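The stiffness and relaxation-time coefficients mentioned above are typically interpreted through a generalized Maxwell (Prony series) model. The sketch below evaluates such a model for one relaxation step; the moduli and relaxation times are illustrative placeholders, not the fitted porcine parameters.

```python
import numpy as np

def prony_relaxation(t, e0, g_inf, terms):
    """Stress response to a step strain e0 for a generalized Maxwell model.

    g_inf: long-term (equilibrium) stiffness; terms: list of (g_i, tau_i)
    pairs giving branch stiffness and relaxation time. The relaxation
    modulus is G(t) = g_inf + sum(g_i * exp(-t / tau_i)).
    """
    g_t = g_inf + sum(g * np.exp(-t / tau) for g, tau in terms)
    return e0 * g_t  # stress = step strain times relaxation modulus

t = np.linspace(0.0, 200.0, 201)  # one 200 s rest period, as in the protocol
sigma = prony_relaxation(t, e0=0.25, g_inf=12.0, terms=[(8.0, 2.0), (5.0, 30.0)])
# Stress decays monotonically from the peak toward the equilibrium plateau
```

Fitting such a curve to each measured relaxation step is one standard way to extract the stiffness and relaxation-time coefficients compared statistically in the study.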

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 426
74 Sensor Network Structural Integration for Shape Reconstruction of Morphing Trailing Edge

Authors: M. Ciminello, I. Dimino, S. Ameduri, A. Concilio

Abstract:

Improving aircraft efficiency is one of the key goals of aeronautics. Modern aircraft possess many advanced capabilities, such as good transportation capability, high Mach number, high flight altitude, and a high rate of climb. However, no single airframe configuration can achieve all of this optimized performance. Aerodynamic efficiency varies considerably depending on the specific mission and on the environmental conditions within which the aircraft must operate. Structures that morph their shape in response to their surroundings may at first seem like the stuff of science fiction, but a look at nature reveals many examples of plants and animals that adapt to their environment. In order to ensure both the controllability and the static robustness of such complex structural systems, a monitoring network is aimed at verifying the effectiveness of the given control commands together with the elastic response. To obtain this information, the use of an FBG sensor network is proposed in this project. The sensor network is able to measure the shape of morphing structures, which may show large, global displacements due to the non-standard architectures and materials adopted. Chord-wise variations may allow setting and chasing the best layout as a function of the particular, transforming reference state, always targeting the best aerodynamic performance. An optical sensing solution was selected because, while it retains a few of the drawbacks of classical systems (cabling, continuous deployment, and so on), fibre-optic sensors may lead to a dramatic reduction of wiring mass and weight thanks to their extreme multiplexing capability. Furthermore, using light as the information carrier permits nimbler, non-shielded wires and avoids any interference with the on-board instrumentation.
The FBG-based transducers presented herein aim at monitoring the actual shape of an adaptive trailing edge (ATE). Compared to conventional systems, these transducers allow more fail-safe measurements by taking advantage of a supporting structure hosting the FBGs, whose properties may be tailored to the architectural requirements and structural constraints, acting as a strain modulator. Direct strain measurement may, in fact, be difficult because of the large deformations occurring in morphing elements; a modulating transducer is then necessary to keep the measured strain inside the allowed range. In this application, the chord-wise transducer device is a cantilevered beam sliding through the spars and copying the camber line of the ATE ribs. The positions of the FBG sensor array are dimensioned and integrated along this path. A theoretical model describing the system behavior is implemented. To validate the design, experiments are then carried out with the purpose of estimating the relations between rib rotation and measured strain.
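The shape-reconstruction principle behind such strain-based transducers can be sketched with classical beam kinematics: surface strain gives curvature, and two successive integrations from the clamped root give the deflected shape. The geometry and strain profile below are illustrative assumptions, not the actual transducer design.

```python
import numpy as np

def shape_from_strain(x, strain, c):
    """Reconstruct cantilever deflection from surface strain.

    Curvature kappa = strain / c, where c is the distance from the neutral
    axis to the strain-measurement plane (where the FBGs would sit). Slope
    and deflection follow by trapezoidal integration from the clamped root
    (zero slope, zero deflection).
    """
    kappa = np.asarray(strain) / c
    slope = np.concatenate(([0.0], np.cumsum(np.diff(x) * (kappa[:-1] + kappa[1:]) / 2)))
    w = np.concatenate(([0.0], np.cumsum(np.diff(x) * (slope[:-1] + slope[1:]) / 2)))
    return w

x = np.linspace(0.0, 0.5, 51)              # hypothetical 0.5 m transducer beam
strain = 1e-3 * (1 - x / 0.5)              # linearly decreasing strain (tip-load-like)
w = shape_from_strain(x, strain, c=0.002)  # 2 mm distance to neutral axis
```

With a linear strain profile the analytic tip deflection is L²·eps0/(6c)·... in closed form; here the numerical result can be checked against the exact double integral of the assumed curvature.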

Keywords: fiber optic sensor, morphing structures, strain sensor, shape reconstruction

Procedia PDF Downloads 324
73 Stability in Slopes Related to Expansive Soils

Authors: Ivelise M. Strozberg, Lucas O. Vale, Maria V. V. Morais

Abstract:

Expansive soils are characterized by significant volumetric variations: they tend to increase in volume when water is added to their voids and to decrease in volume when this water is removed. The resistance parameters (especially the friction angle, cohesion, and specific weight) of expansive and non-expansive soils from the same field differ, as found in laboratory tests. This research aims to demonstrate that this variation directly affects the calculated factors of safety for slope stability. Expansibility due to specific clay minerals, such as montmorillonites and vermiculites, is the most common form of expansion of soils or rocks, causing expansion pressures. These pressures can become an aggravating problem in regions across the globe and, when not studied beforehand, may pose high risks to an enterprise, such as cracks, fissures, movements in structures, failure of retaining walls, and damage to wells, among others. The study provides results based on analyses carried out in the Slide 2018 software of the Rocscience group, a two-dimensional limit equilibrium slope stability program that calculates the factor of safety or probability of failure of surfaces composed of soils or rocks (or both, depending on the situation) through the simplified Bishop, Fellenius, and corrected Janbu methods. This research compares the factors of safety of a homogeneous earthfill dam, analysed for the operation and end-of-construction situations, with a height of approximately 35 meters, a downstream slope of 1.5:1, and an upstream slope of 2:1.
The water level is 32.73 m high, and the water table is computed automatically by the Slide program using the finite element method for the operating situation. Two hypotheses for the materials were considered: the first with soils exhibiting expansive characteristics and the second with soils without expansibility. For this purpose, soil samples were collected from the region of São Bento do Una, Pernambuco, Brazil, and taken to the soil mechanics laboratory for characterization and determination of the percentage of expansibility. Two types of soil were found in that area: one site with expansive soils (8%) and another with non-expansive ones. Based on the results found, the analysis of the factors of safety indicated that, for both the upstream and downstream slopes, the highest values were obtained in the case without expansive materials, giving, for one of the situations, values of 1.353 (Fellenius), 1.295 (corrected Janbu), and 1.409 (simplified Bishop). There is a considerable drop in the safety factors when the soils are potentially expansive, with values for the same situation of 0.859 (Fellenius), 0.809 (corrected Janbu), and 0.842 (simplified Bishop) in the case of higher expansibility (8%). This shows that expansibility is a determinant factor in the loss of soil resistance, governed by the cohesion and the friction angle.
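The effect described above, reduced cohesion and friction angle driving the factor of safety below 1, can be illustrated with the simplest limit equilibrium model, the dry infinite slope. All parameter values here are hypothetical, not the São Bento do Una laboratory results.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg):
    """Factor of safety of a dry infinite slope.

    c: cohesion (kPa); phi_deg: friction angle (deg); gamma: unit weight
    (kN/m3); depth: slip-plane depth (m); beta_deg: slope angle (deg).
    FS = (c + sigma_n * tan(phi)) / tau on the slip plane.
    """
    beta = math.radians(beta_deg)
    tan_phi = math.tan(math.radians(phi_deg))
    sigma_n = gamma * depth * math.cos(beta) ** 2           # normal stress
    tau = gamma * depth * math.sin(beta) * math.cos(beta)   # shear stress
    return (c + sigma_n * tan_phi) / tau

# ~2:1 slope (26.57 deg); same geometry, two sets of strength parameters
fs_stable = infinite_slope_fs(c=30.0, phi_deg=28.0, gamma=18.0, depth=5.0, beta_deg=26.57)
fs_swollen = infinite_slope_fs(c=10.0, phi_deg=18.0, gamma=18.0, depth=5.0, beta_deg=26.57)
```

The swelling-degraded parameters push the factor of safety below unity, mirroring the drop from ~1.4 to ~0.85 reported by the full Slide analyses.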

Keywords: dam, slope, software, swelling soil

Procedia PDF Downloads 116
72 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis

Authors: Mohamed Ali Abdennadher

Abstract:

Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. 
The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.

Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology

Procedia PDF Downloads 21
71 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (non-deterministic polynomial-time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices and used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach: building on chaos theory, it helps the underlying algorithm improve population diversity and convergence speed. The starting point is the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA models four types of chimpanzee groups, attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the diverse intelligence and sexual motivations of chimpanzees. However, this algorithm performs poorly in convergence rate and in escaping local optima when solving high-dimensional problems.
Although ChOA and some of its variants use strategies to overcome these problems, they have been observed to be insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of the search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow-convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow-convergence problem of ChOA; 2) it proposes new hybrid movement-strategy models for the position updates of the search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
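The chaotic ingredient can be illustrated generically: a chaotic map (here the logistic map) replaces the uniform random number that scales each agent's move toward the current best. This is a minimal chaos-enhanced search on a benchmark function, not the actual Ex-ChOA update rules; the population size, step scales, and function are assumptions for illustration.

```python
import numpy as np

def logistic_map(x, r=4.0):
    """Chaotic logistic map, used to diversify the search steps."""
    return r * x * (1 - x)

def chaotic_search(f, dim, bounds, iters=300, pop=20, seed=0):
    """Minimal chaos-enhanced search: each agent moves toward the best-known
    solution with a step scaled by a chaotic sequence, plus small Gaussian
    exploration noise. Greedy update of the best solution."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (pop, dim))
    chaos = rng.uniform(0.1, 0.9, pop)       # one chaotic state per agent
    best = min(x, key=f).copy()
    for _ in range(iters):
        chaos = logistic_map(chaos)
        step = chaos[:, None] * (best - x)   # chaotic pull toward the best
        x = np.clip(x + step + 0.1 * rng.standard_normal(x.shape), lo, hi)
        cand = min(x, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

sphere = lambda v: float(np.sum(v ** 2))     # classic benchmark function
best, val = chaotic_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

Because the logistic sequence never settles into a fixed cycle, the step sizes keep varying, which is the diversity mechanism the chaotic approach contributes to algorithms like Ex-ChOA.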

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 73
70 The Role of Uterine Artery Embolization in the Management of Postpartum Hemorrhage

Authors: Chee Wai Ku, Pui See Chin

Abstract:

As an emerging alternative to hysterectomy, uterine artery embolization (UAE) has been widely used in the management of fibroids and in controlling postpartum hemorrhage (PPH) unresponsive to other therapies. Research has shown UAE to be a safe, minimally invasive procedure with few complications and minimal effects on future fertility. We present two cases highlighting the use of UAE: preventing PPH in a patient with large fibroids at the time of cesarean section, and treating secondary PPH refractory to other therapies in another patient. The first case is a 36-year-old primiparous woman who booked at 18+6 weeks gestation with a 13.7 cm subserosal fibroid at the lower anterior wall of the uterus near the cervix and a 10.8 cm subserosal fibroid in the left wall. Prophylactic internal iliac artery occlusion balloons were placed prior to the planned classical midline cesarean section and inflated once the baby was delivered. Both uterine arteries were then embolized. The estimated blood loss (EBL) was 400 ml, and hemoglobin (Hb) remained stable at 10 g/dL. An ultrasound scan 2 years postnatally showed stable uterine fibroids of 10.4 and 7.1 cm, significantly smaller than before. The second case is a 40-year-old G2P1 with a previous cesarean section for failure to progress. There were no antenatal problems, and there was no placenta previa. She presented in term labour and underwent an emergency cesarean section for failed vaginal birth after cesarean. Intraoperatively, extensive adhesions were noted with the bladder drawn high, and EBL was 300 ml. Postpartum recovery was uneventful. She presented with secondary PPH 3 weeks later, complicated by hypovolemic shock, and underwent an emergency examination under anesthesia and evacuation of the uterus, with an EBL of 2500 ml. Histology showed decidua with chronic inflammation. She was discharged well with no further PPH but returned one week later with recurrent secondary PPH.
Bedside ultrasound showed a thin endometrium with no evidence of retained products of conception. Uterotonics were administered, and examination under anesthesia was performed, followed by insertion of a uterine Bakri balloon and vaginal pack. EBL was 1000 ml. No definite cause of PPH was found, with no uterine atony or retained products of conception. To evaluate a potential cause, a pelvic angiogram and superselective left uterine arteriogram were performed, which showed profuse contrast extravasation and acute bleeding from the left uterine artery. Superselective embolization of the left uterine artery was performed; no gross contrast extravasation from the right uterine artery was seen. These two cases demonstrate the efficacy of UAE: first, the prophylactic use of intra-arterial balloon catheters in pregnant patients with large fibroids, and second, the diagnosis and management of secondary PPH refractory to uterotonics and uterine tamponade. In both cases the need for laparotomy and hysterectomy was avoided, preserving future fertility. UAE should be considered for hemodynamically stable patients in centres with access to interventional radiology.

Keywords: fertility preservation, secondary postpartum hemorrhage, uterine embolization, uterine fibroids

Procedia PDF Downloads 182
69 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials

Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar

Abstract:

Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure yields a gradient of properties and performance. FGMs are synthesized so that they possess continuous spatial variations in the volume fractions of their constituents, yielding a predetermined composition. These variations lead to a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge- and centre-crack problems have been solved, additionally including holes, inclusions, and minor cracks under plane stress conditions; both soft and hard inclusions have been implemented. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material, 100 mm long and 200 mm high, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered. An exponential gradation of properties is imparted in the x-direction.
A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities (minor cracks, holes, and inclusions) are added either singly or in combination. On the basis of this study, it is found that minor cracks have the least effect on the domain's failure crack length, soft inclusions have a moderate effect, and holes have the greatest effect. It is also observed that crack growth before failure is greater in each case when hard inclusions are present in place of soft inclusions.
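The Ramberg-Osgood model used above relates total strain to stress through an elastic term plus a power-law plastic term. A minimal sketch follows; the material constants are illustrative assumptions, not the values fitted for the copper-nickel/alumina plate in the study.

```python
def ramberg_osgood_strain(sigma, E, sigma_0, alpha, n):
    """Total strain under the Ramberg-Osgood relation:
        eps = sigma/E + alpha * (sigma/E) * (sigma/sigma_0)**(n - 1)
    sigma   : applied stress (MPa)
    E       : Young's modulus (MPa)
    sigma_0 : reference (yield) stress (MPa)
    alpha   : yield offset coefficient
    n       : hardening exponent
    """
    elastic = sigma / E
    plastic = alpha * (sigma / E) * (sigma / sigma_0) ** (n - 1)
    return elastic + plastic

# Illustrative constants for a generic metal-like material (assumed values)
E, sigma_0, alpha, n = 150e3, 300.0, 3.0 / 7.0, 5.0
eps_low = ramberg_osgood_strain(100.0, E, sigma_0, alpha, n)   # mostly elastic
eps_high = ramberg_osgood_strain(400.0, E, sigma_0, alpha, n)  # plastic term dominates
```

Below the reference stress the plastic contribution is small; above it, the power-law term grows rapidly, which is what makes the crack-tip response elastic-plastic rather than linear elastic.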

Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)

Procedia PDF Downloads 386
68 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable, validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, particularly the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance; practitioners can provide only rough estimates of lesion locations, so multiple biopsy samples are required to target a single lesion. In this study, a whole-pelvis model (comprising the rectum, bladder, pelvic muscles, and the prostate transitional zone (TZ) and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all soft tissues except the vesical muscles, which are assumed to behave linearly elastically owing to the lack of experimental data for determining the constants of a hyperelastic model. The tissue and organ geometries are taken from the existing literature as 3D meshes, and the biomechanical parameters were obtained from the various testing techniques described in the literature. The acquired parameter values for uniaxial stress/strain data are used in the Signorini model to examine the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion's final position is tracked when a TRUS probe force of 30 N is applied to the inside of the rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall deformation of the pelvic organs under TRUS-guided biopsy is demonstrated.
The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs parametrically altered the resulting lesion migration. The distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared with our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are compared visually with our preliminary study.
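The Signorini strain-energy density mentioned above has the standard polynomial form W = C10(I1-3) + C01(I2-3) + C20(I1-3)², which can be evaluated directly for an idealized incompressible uniaxial stretch. A minimal sketch follows; the constants are illustrative assumptions, not the parameters fitted to pelvic tissue in the study.

```python
def signorini_energy(lmbda, c10, c01, c20):
    """Signorini hyperelastic strain-energy density for an incompressible
    uniaxial stretch lmbda, using the isochoric invariants
        I1 = lmbda^2 + 2/lmbda,   I2 = 2*lmbda + 1/lmbda^2
    and  W = C10*(I1-3) + C01*(I2-3) + C20*(I1-3)**2."""
    i1 = lmbda ** 2 + 2.0 / lmbda
    i2 = 2.0 * lmbda + 1.0 / lmbda ** 2
    return c10 * (i1 - 3) + c01 * (i2 - 3) + c20 * (i1 - 3) ** 2

# Illustrative soft-tissue-scale constants (assumed values, e.g. in kPa)
c10, c01, c20 = 2.0, 1.0, 0.5
w_rest = signorini_energy(1.0, c10, c01, c20)       # undeformed: zero energy
w_stretched = signorini_energy(1.2, c10, c01, c20)  # 20% stretch stores energy
```

The quadratic C20 term is what distinguishes Signorini from the simpler Mooney-Rivlin form and lets the model stiffen at larger strains, which matters under the 30 N probe load.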

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 84
67 Psychometric Examination of Atma Jaya's Multiple Intelligence Batteries for University Students

Authors: Angela Oktavia Suryani, Bernadeth Gloria, Edwin Sutamto, Jessica Kristianty, Ni Made Rai Sapitri, Patricia Catherine Agla, Sitti Arlinda Rochiadi

Abstract:

Some blogs and personal websites in Indonesia sell standardized intelligence tests (for example, the Progressive Matrices (PM), Intelligence Structure Test (IST), and Culture Fair Intelligence Test (CFIT)) and other psychological tests, together with the manuals and answer keys, to the public. Individuals can buy these and prepare themselves for selection or recruitment with the real test. This practice drives people to deceive the institution (school or company) and also themselves. These tests are also old; some items are no longer relevant to the current context, for example, a question about the diameter of a coin that no longer exists. These problems motivated us to develop a new intelligence test battery, the Multiple Aptitude Battery (MAB). The battery was built using Thurstone's Primary Mental Abilities theory and is intended for high school students, university students, and job applicants. It consists of 9 subtests. In the current study, we examine six subtests for university students: Reading Comprehension, Verbal Analogies, Numerical Inductive Reasoning, Numerical Deductive Reasoning, Mechanical Ability, and Two-Dimensional Spatial Reasoning. The study included data from 1,424 students recruited by convenience sampling from eight faculties at Atma Jaya Catholic University of Indonesia. Classical and modern test approaches (Item Response Theory) were applied to estimate item difficulties, and confirmatory factor analysis was applied to examine internal validity. The validity of each subtest was inspected using the convergent-discriminant method, whereas reliability was examined using the Kuder-Richardson formula. The results showed that the majority of the subtests were of medium difficulty; only one subtest, Verbal Analogies, was categorized as easy.
The items were found to be homogeneous and valid measures of their constructs; however, at the subtest level, the construct validity examined by the convergent-discriminant method indicated that the subtests were not unidimensional, i.e., each measured not only its own construct but other constructs as well. Three of the subtests were able to predict academic performance with a small effect size: Reading Comprehension, Numerical Inductive Reasoning, and Two-Dimensional Spatial Reasoning. GPAs at the intermediate level (third semester and above) were considered a factor in the low predictive validity. The Kuder-Richardson formula showed that the reliability coefficients for the two numerical reasoning subtests and the spatial reasoning subtest were superior, in the range 0.84-0.87, whereas the reliability coefficients for the other three subtests were relatively below the standard for an ability test, in the range 0.65-0.71. It can be concluded that some of the subtests are ready to be used, whereas others still need revision. This study also demonstrates that the convergent-discriminant method is useful for identifying general intelligence.
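The Kuder-Richardson reliability used above (KR-20, for dichotomously scored items) can be computed directly from a 0/1 score matrix. A minimal sketch follows; the four-examinee matrix is made-up illustrative data, not the study's responses.

```python
def kr20(responses):
    """KR-20 reliability for dichotomously scored items.
    responses: one row per examinee, each row a list of 0/1 item scores.
    KR-20 = (k/(k-1)) * (1 - sum(p_i * q_i) / var(total scores))"""
    n = len(responses)
    k = len(responses[0])
    p = [sum(row[i] for row in responses) / n for i in range(k)]       # item difficulties
    sum_pq = sum(pi * (1.0 - pi) for pi in p)                          # item variances
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n                     # population variance
    return (k / (k - 1)) * (1.0 - sum_pq / var)

# Tiny illustrative data set: 4 examinees x 4 items
data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1]]
rel = kr20(data)  # 2/3 for this toy matrix
```

Reliability rises as the total-score variance grows relative to the summed item variances, which is why the harder, more discriminating numerical and spatial subtests reached 0.84-0.87 while the easier subtests fell below 0.71.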

Keywords: intelligence, psychometric examination, multiple aptitude battery, university students

Procedia PDF Downloads 434
66 Burial Findings in Prehistory Qatar: Archaeological Perspective

Authors: Sherine El-Menshawy

Abstract:

Death, funerary beliefs, and customs form an essential feature of belief systems and practices in many cultures. It is evident that during the prehistoric periods, various techniques of corpse burial and funerary ritual were practiced. Occasionally, corpses were merely buried in the sand, or placed in a grave in a contracted position, with knees drawn up under the chin and hands normally lying before the face, with mounds of sand marking the grave; in other cases the bodies were burnt. The common practice demonstrable in the archaeological record, however, was burial. The earliest graves were very simple, consisting of shallow circular or oval pits in the ground. The current study focuses on the material culture of Qatar during the prehistoric period, specifically funerary architecture and burial practices. Since information about burial customs and funerary practices in prehistoric Qatar is both scarce and fragmentary, the importance of such a study is to answer research questions related to funerary beliefs and burial habits during the early stages of civilizational transformation in prehistoric Qatar compared with Mesopotamia, since chronologically the earliest pottery discovered in Qatar, collected from the excavations, belongs to the prehistoric Ubaid culture of Mesopotamia. This will lead to a deeper understanding of life and social status in prehistoric Qatar. The research also explores the relationship between the funerary traditions of prehistoric Qatar and those of the neighboring cultures of Mesopotamia and Ancient Egypt, with the aim of ascertaining the distinctive aspects of prehistoric Qatari culture, the reception of classical culture, and the role it played in the creation of local cultural identities in the Near East. The methodology of this study is based on published books and articles, in addition to unpublished reports of the Danish team that excavated in and around Doha and other Qatari archaeological sites from the 1950s.
The study also draws on comparative material related to burial customs found in Mesopotamia. This research therefore: (i) advances knowledge of the burial customs of the ancient people who inhabited Qatar, a subject little known to scholars; it will deepen understanding of the history of ancient Qatar and its culture and values, with the aim of sharing this invaluable human heritage. (ii) The study is of special significance for the field, since the evidence it derives has great value for the study of living conditions, social structure, religious beliefs, and ritual practices. (iii) Excavations have brought to light burials of different categories. The graves date to the Bronze and Iron Ages, and their structure varies between mounds above the ground and burials below ground level. Evidence comes from sites such as Al-Da'asa, Ras Abruk, and Al-Khor. Painted Ubaid sherds of Mesopotamian culture have been discovered in Qatar at sites such as Al-Da'asa, Ras Abruk, and Bir Zekrit. In conclusion, no comprehensive study has been done, and the lack of a general synthesis of information about funerary practices is problematic; this study will therefore fill the gaps in the area.

Keywords: archaeological, burial, findings, prehistory, Qatar

Procedia PDF Downloads 147
65 Human Behavioral Assessment to Derive Land-Use for Sustenance of River in India

Authors: Juhi Sah

Abstract:

Habitat is characterized by the interdependency of environmental elements. The anthropocentric development approach is increasing our vulnerability to natural hazards; hence, man-made interventions should show a higher level of sensitivity toward natural settings. Sensitivity toward the environment can be assessed through the behavior of the stakeholders involved. This led to the hypothesis that a legitimate relationship exists between behavioral science, land use evolution, and environmental conservation in the planning process. An attempt has been made to establish this relationship by reviewing the existing body of knowledge and case examples pertaining to the three disciplines under inquiry. Recognizing the scarce and deteriorating state of the earth's freshwater reserves, and to test the above concept, a case study of the river floodplain of a growing urban center in a developing economy, India, is selected. Cases of urban flooding in Chennai, Delhi, and other megacities of India impose a high risk on unauthorized settlements on river floodplains. The issue addressed here is the encroachment of floodplains, approached through psychological awareness and modification through knowledge building. The reaction of an individual or a society can be compared to a cognitive process. This study documents the behavior and perceptions of all stakeholders regarding their immediate natural environment (a water body) and proposes land uses suitable along a river in an urban settlement according to different stakeholders' perceptions. To assess and induce morally responsible behavior in a community (small or large), tools of psychological inquiry are used for qualitative analysis. The analysis deals with varied data sets from two sectors: the river and its geology, and land use planning and regulation.
Distinctive patterns in built-up growth, river ecology degradation, and human behavior have been identified by handling large quantities of data from diverse sectors, with comments on the availability of relevant data and its implications. Along the whole river stretch, the condition and usage of the banks vary; hence, stakeholder-specific survey questionnaires have been prepared to accurately map the responses and habits of the inhabitants. A conceptual framework has been designed to carry the empirical analysis forward. The classical principle of virtue holds that a human's virtue depends on character, but another view holds that behavior is a product of situations, and that to bring about behavioral change one must introduce a disruption in the situation or environment. Given present trends, blindly following the results of data analytics to construct policy is not proving to favor planned development and natural resource conservation. Behavioral assessment of the planet's inhabitants is thus also required, as their activities and interests have a large impact on the earth's natural systems and their sustenance.

Keywords: behavioral assessment, flood plain encroachment, land use planning, river sustenance

Procedia PDF Downloads 116
64 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations

Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis

Abstract:

In-flight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets, and, over the last decade, drones have become increasingly sophisticated, and it is today a priority to develop ever more efficient defensive systems against all these potential threats. In this frame, recent high-energy laser (HEL) weapon prototypes have demonstrated extremely good operational ability to shoot down, within seconds, flying targets several kilometers away. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading ultimately to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). These studies were tied to the development of laser sources in the mid-1960s, mainly for specialized metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at the speed of light over large ranges. In recent years, a strong focus has been placed at ISL on the interaction of laser radiation with metal targets such as artillery shells: due to the absorbed laser radiation and the resulting heating, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Aerial Vehicles (UAVs) are of utmost interest in modern warfare. These aerial systems are usually made of polymer-based composite materials, whose complexity involves new scientific challenges.
Alongside this main laser-matter interaction activity, a great deal of experimental and numerical knowledge has been gathered at ISL in domains such as spectrometry, thermodynamics, and mechanics. Techniques and devices were developed to study each aspect of the topic separately; optical characterization, thermal investigations, chemical reaction analysis, and mechanical examinations are carried out to estimate the essential key values. Results from these diverse tasks are then incorporated into analytic or finite-element numerical models elaborated, for example, to predict thermal effects on explosive charges or mechanical failure of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.

Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling

Procedia PDF Downloads 256
63 A Density Functional Theory Based Comparative Study of Trans and Cis - Resveratrol

Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang

Abstract:

Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as anti-oxidant, anti-aging, and antimicrobial properties. Of the two isomeric forms of resveratrol, trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers will not only provide insight into the RvL isomers but also help in designing parameters for differentiation, in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted using the B3LYP/6-311++G** model to explore through-bond and through-space intramolecular interactions. Properties such as vibrational spectra (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA), and the Fukui function are calculated. It is found that although trans-RvL is C1 non-planar, its backbone non-hydrogen atoms lie nearly in one plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not coplanar. This absence of planarity gives rise to a hydrogen bond of 2.67 Å in cis-RvL. Rotation about the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it can break the entire planar conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectra show very different features for trans- and cis-RvL in the region 900 - 1500 cm-1, where the spectral peaks at 1138-1158 cm-1 are split in cis-RvL compared to a single peak at 1165 cm-1 in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm-1.
Further, the carbon chemical environment (13C NMR) of the RvL molecule exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at the carbon atom C(11), indicating that the chemical environment of this carbon in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans- and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large orbital energy differences, i.e., Δεcis-trans > ±0.30 eV, are contributed by the outer valence shell: MO60 (HOMO), MO52-55, and MO46. The active sites captured by the Fukui function (f+ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL; the cis orientation breaks the large conjugation of trans-RvL, so the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEInt) of the phenolic compound, with the trans isomer preferred over cis-RvL (ΔΔEi = -4.35 kcal.mol-1). These quantum mechanical results could thus help unravel the diverse beneficial activities associated with resveratrol.

Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function

Procedia PDF Downloads 191
62 [Keynote Talk]: Surveillance of Food Safety Compliance of Hong Kong Street Food

Authors: Mabel Y. C. Yau, Roy C. F. Lai, Hugo Y. H. Or

Abstract:

This study is a pilot surveillance of the hygiene compliance and microbial food safety of both licensed and mobile vendors selling Chinese ready-to-eat snack foods in Hong Kong. The findings also reflect the situation of mobile food vending from trucks: Hong Kong is about to launch the Food Truck Pilot Scheme by the end of 2016 or early 2017, and technically, selling food on a vehicle is no different from hawking or vending food on the street. Each type of business bears similar food safety issues and has the same impact on public health, so the present findings also apply to food trucks. Nine types of Cantonese-style snacks, 32 samples in total, were selected for microbial screening, and a total of 16 vending sites, including supermarkets, street markets, and snack stores, were visited. The study finally focused on a traditional snack, the steamed rice cake with red beans called Put Chai Ko (PCK). PCK is a type of classical Cantonese pastry sold from push carts on the street. In the old days it was sold at room temperature and served on bamboo sticks, while some shops sell it freshly steamed. Microbial examinations comprising aerobic counts, yeast and mould counts, and coliform, Salmonella, and Staphylococcus aureus detection were carried out. Salmonella was not detected in any sample; since PCK contains no beef, poultry, egg, or dairy ingredients, the risk of Salmonella in PCK was relatively low, although other sources of contamination were possible. Coagulase-positive Staphylococcus aureus was found in 6 of the 14 samples sold at room temperature; among these 6 samples, 3 were PCK. One of the samples was in the unacceptable range, with a total colony-forming-unit count higher than 10⁵; the rest were only satisfactory.
Observational evaluations were made with checklists covering personal hygiene, premises hygiene, food safety control, food storage, cleaning and sanitization, and waste disposal. A maximum score of 25 represented total compliance; the highest score among the vendors was 20. Three stores were below average, and two of these were selling PCK. Most of the non-compliances concerned food processing facilities, sanitization conditions, and waste disposal. In conclusion, although no food poisoning outbreaks occurred during the investigation, the risk of food hazards existed in these stores, especially among street vendors. Attention is needed to traditional food-selling practices, as food handlers might not have sufficient knowledge to handle food products properly. Variations in food quality existed among supply chains and franchise eateries or shops, and it was commonly observed that packaging and storage requirements are not properly enforced at retail; the same situation can be seen across the food business. This indicates a need for food safety training in the industry and reveals loopholes in quality control among businesses.

Keywords: Cantonese snacks, food safety, microbial, hygiene, street food

Procedia PDF Downloads 298
61 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics, and nondestructive evaluation, many problems related to the calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where using interior-point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix that satisfies the convergence condition required in iterative algorithms for solving a system of linear equations.
This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results are derived both on the characterization of the type of generalized inverse obtained and on convergence. 2) Thanks to its properties, this matrix can be used efficiently in different solving schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) Using Lp norms, we propose generalized Kaczmarz-type matrices, and we show how Cimmino's matrix can be considered a particular case obtained by choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems, and more robust to error propagation, than the classical techniques we tested (Gauss, Moore-Penrose inverse, minimum residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
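The Kaczmarz scheme referenced above can be sketched in a few lines: each step orthogonally projects the current iterate onto the hyperplane of one equation, cycling through the rows. This is a minimal dense-matrix sketch of the classical method, not the authors' stochastic preconditioned variant.

```python
def kaczmarz(A, b, sweeps=200, x0=None):
    """Cyclic Kaczmarz iteration for A x = b.
    Each step projects x onto the hyperplane a_i . x = b_i of row i:
        x <- x + ((b_i - a_i . x) / ||a_i||^2) * a_i
    Converges for consistent systems; rate degrades as conditioning worsens."""
    m, n = len(A), len(A[0])
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(sweeps):
        for i in range(m):
            ai = A[i]
            norm2 = sum(v * v for v in ai)
            if norm2 == 0.0:
                continue  # skip degenerate (all-zero) rows
            resid = (b[i] - sum(ai[j] * x[j] for j in range(n))) / norm2
            for j in range(n):
                x[j] += resid * ai[j]
    return x

# Small consistent system with exact solution x = (1, 1)
x = kaczmarz([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])
```

On a well-conditioned 2x2 system the projections converge quickly; on the ill-conditioned test cases discussed above (e.g. Hilbert matrices), the successive hyperplanes are nearly parallel and convergence slows dramatically, which is precisely what the proposed preconditioning targets.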

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 129
60 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20ᵗʰ Century Europe

Authors: Stanley Russell

Abstract:

Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically made reference to and recreated historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. Architect/painter Le Corbusier, with “Purism”, was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architecture world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. Renowned painter and Bauhaus instructor Vassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements in “non-objective” painting while also drawing parallels between painting and architecture in his book Point and Line to Plane. Russian constructivists made abstract compositions with simple geometric forms, and like the De Stijl group of the Netherlands, they also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century.
Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple unornamented geometries. Using examples like the Villa Savoye, the Schroeder House, the Dessau Bauhaus, and unbuilt designs by Russian architect Chernikov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that were used by artists to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.

Keywords: modern art, architecture, design methodologies, modern architecture

Procedia PDF Downloads 121
59 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example, band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows for the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNET. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
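The Δ-ML strategy described above (high fidelity ≈ low fidelity + learned correction) can be illustrated with a deliberately simple sketch on synthetic data. The authors use a GCN; the ridge regression and the stand-in data below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: descriptors X, a cheap low-fidelity output,
# and a costly high-fidelity output differing by a smooth correction.
n, d = 200, 5
X = rng.normal(size=(n, d))
y_low = X @ rng.normal(size=d)                           # "cheap" output
y_high = y_low + 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.05    # + correction

# Delta-ML: regress the difference (high - low) on the descriptors,
# using only a few high-fidelity training points.
n_train = 20
Xt = np.hstack([X[:n_train], np.ones((n_train, 1))])     # add a bias column
delta = y_high[:n_train] - y_low[:n_train]
lam = 1e-6                                               # tiny ridge penalty
w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(d + 1), Xt.T @ delta)

# Predicted high fidelity = low fidelity + learned correction
X_all = np.hstack([X, np.ones((n, 1))])
y_pred = y_low + X_all @ w
```

The point of the sketch is the bookkeeping: only `n_train` high-fidelity evaluations are needed, while low-fidelity values are assumed cheap enough to compute everywhere.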

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 33
58 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs

Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino

Abstract:

Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. has excellent resistance to low temperatures, so it can be found in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 species of bacteria have been demonstrated to be capable of entering this state. VBNC cells cannot grow in conventional culture medium but are viable and maintain metabolic activity, which may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also exist in the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods and investigate V. parahaemolyticus VBNC cells and their presence in frozen bivalve molluscs, regularly marketed. Materials and Methods: propidium monoazide (PMA) was integrated with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR was highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10⁻¹ log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to the qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, and incubation at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from the cell suspension in homogenate samples according to a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus. The RT-PCR was performed in a final volume of 20 µL, containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 TouchTM (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between Log 10⁻¹ and Log 10⁻³ CFU/g. Only clam samples were positive by PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR in order to identify VBNC forms, undetectable by classic microbiological methods. A precise knowledge of V. parahaemolyticus in a VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
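The quantification step behind a qPCR standard curve amounts to fitting, and then inverting, a linear relation between the cycle threshold (Ct) and log CFU. The sketch below uses hypothetical calibration values spanning the paper's stated linear range (1.0-8.0 log CFU/mL), not the study's actual data:

```python
import numpy as np

# Hypothetical calibration data: Ct values for serial dilutions
# across the linear range 1.0-8.0 log CFU/mL (illustrative only).
log_cfu = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
ct = np.array([35.1, 31.8, 28.4, 25.0, 21.7, 18.3, 15.0, 11.6])

# Fit the standard curve Ct = slope * log10(CFU) + intercept
slope, intercept = np.polyfit(log_cfu, ct, 1)
r = np.corrcoef(log_cfu, ct)[0, 1]   # correlation coefficient of the fit

def quantify(ct_sample):
    """Convert a sample's Ct back to log CFU/mL via the standard curve."""
    return (ct_sample - intercept) / slope

estimate = quantify(23.0)  # log CFU/mL for a hypothetical sample Ct
```

A high |r| (the paper reports 0.9999) is what licenses this inversion; outside the calibrated linear range the conversion is not valid.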

Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus

Procedia PDF Downloads 135
57 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. 
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and, as opposed to classical gradient-based optimization techniques, make a global solution search possible. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
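As a generic illustration of how a GA can recover a stiffness distribution from measured frequencies, the sketch below couples a minimal real-coded GA to a toy surrogate frequency model. The study uses an ABAQUS finite element model; the surrogate and all parameter values here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate for the forward model (stand-in for the FE analysis):
# "natural frequencies" as a smooth function of element stiffness factors.
def model_frequencies(stiffness):
    return np.sqrt(np.cumsum(stiffness))

true_stiffness = np.array([1.0, 0.6, 0.9, 0.4])   # "damaged" distribution
measured = model_frequencies(true_stiffness)

def fitness(candidate):
    # Negative squared mismatch between measured and predicted frequencies
    return -np.sum((model_frequencies(candidate) - measured) ** 2)

# Minimal real-coded GA: elitist selection, blend crossover, mutation
pop = rng.uniform(0.1, 1.0, size=(60, 4))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]                  # keep the best 20
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    alpha = rng.uniform(size=(60, 1))
    pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    pop += rng.normal(0.0, 0.02, size=pop.shape)           # Gaussian mutation
    pop = np.clip(pop, 0.05, 1.5)

best = pop[np.argmax([fitness(p) for p in pop])]
```

In the paper's setting, each fitness evaluation would invoke the FE model, which is why limiting the measured data to first mode shapes matters for cost.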

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 267
56 Scenario-Based Scales and Situational Judgment Tasks to Measure the Social and Emotional Skills

Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel

Abstract:

Social and emotional skills are considered by modern researchers as predictors of a person's success, both in specific areas of activity and in life as a whole. The popularity of this scientific direction ensures the emergence of a large number of practices aimed at developing and evaluating socio-emotional skills. Assessment of social and emotional development is carried out at the national level, as well as at the level of individual regions and institutions. Despite the fact that many of the existing social and emotional skills assessment tools are quite convenient and reliable, more and more new technologies and task formats are now appearing that improve the basic characteristics of these tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy and a culture of self-care. To develop a tool assessing social and emotional skills, a Rasch-Gutman scenario-based approach was used. This approach has shown its reliability and merit for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in the answers. A field test was conducted to check the psychometric properties of the developed instrument. The instrument was developed for the presidential autonomous non-profit organization “Russia - Land of Opportunity” for nationwide soft skills assessment among higher education students. The sample for the field test consisted of 500 people, students aged from 18 to 25 (mean = 20; standard deviation = 1.8), 71% female.
67% of the students were only studying and not currently working. The sample also included 500 employed adults aged from 26 to 65 (mean = 42.5; SD = 9), 57% female. Analysis of the psychometric characteristics of the scales was carried out using the methods of Item Response Theory (IRT). A one-parameter rating scale model (RSM) and the graded response model (GRM) of modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model (2PL) of modern test theory, based on a cumulative logit function for modeling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and a multitrait-multimethod matrix (MTMM). The developed instrument showed good psychometric quality and can be used by HR specialists or educational managers. The detailed results of the psychometric study of the quality of the instrument, including the functioning of the tasks of each scale, will be presented. The results of the validity study by MTMM analysis will also be discussed.
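The GRM mentioned above models category probabilities as differences of adjacent cumulative logit curves; a minimal sketch, with illustrative item parameters rather than any estimated in the study:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded response model: probability of each response category for
    ability theta, discrimination a, and ordered thresholds b.

    P*(k) = 1 / (1 + exp(-a(theta - b_k))) is the cumulative probability
    of responding in category k or above; category probabilities are
    differences of adjacent cumulative curves."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # cumulative logits
    p_star = np.concatenate(([1.0], p_star, [0.0]))   # bound with 1 and 0
    return p_star[:-1] - p_star[1:]                   # category probabilities

# Example: a 4-category item, discrimination 1.5, thresholds -1, 0, 1
probs = grm_category_probs(theta=0.5, a=1.5, b=[-1.0, 0.0, 1.0])
```

Because the thresholds are ordered, the cumulative curves never cross, so the resulting category probabilities are nonnegative and sum to one.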

Keywords: social and emotional skills, psychometrics, MTMM, IRT

Procedia PDF Downloads 70
55 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ

Authors: Lalita, Niladri Sarkar, Subhasis Ghosh

Abstract:

Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a “strange metal”, attributed to non-Fermi-liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we wish to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) Universality of α ~ 1; recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation.
It is realized that steady state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the transition from Cu-3d9 (Cu2+) to Cu-3d10 (Cu+), known as the d9 − d10 L transition; photoexcitation turns some Cu ions in the CuO2 planes into spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation from underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
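As a quick numerical check of the Planckian bound 1/τ = αkBT/ℏ quoted above, the scattering time can be evaluated at a few temperatures for α = 1 (an illustrative calculation, not the paper's measurements):

```python
# The Planckian bound sets the fastest allowed dissipation timescale.
K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def planckian_tau(temperature_k, alpha=1.0):
    """Inelastic scattering time tau = hbar / (alpha * k_B * T)."""
    return HBAR / (alpha * K_B * temperature_k)

tau_100k = planckian_tau(100.0)  # on the order of 1e-13 s at 100 K
```

At 100 K this gives roughly 7.6 × 10⁻¹⁴ s, i.e. tens of femtoseconds, which illustrates why "maximal dissipation" is the appropriate description of this bound.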

Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal

Procedia PDF Downloads 54
54 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown for each filter.
The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required for the computation of convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments with a quantitative comparison of well-known CNN architectures and our models, which simply replace the convolution kernels with the random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement—This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
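The core idea, fixed random filters of varying sizes with one learnable scalar per filter, can be sketched with plain NumPy. This is a simplified single-layer illustration under assumed filter sizes, not the authors' full architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2D correlation used to apply one fixed filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed random filters of varying sizes; only the scalar weight attached
# to each filter response would be trained, so each extra filter costs
# a single unknown regardless of its spatial size.
sizes = [3, 5, 7]
filters = [rng.normal(0.0, 1.0 / s, size=(s, s)) for s in sizes]
weights = np.ones(len(filters))          # the learnable scalars

image = rng.normal(size=(16, 16))
responses = [w * conv2d_valid(image, f) for w, f in zip(weights, filters)]
```

The differing output sizes (here 14×14, 12×12, 10×10 for a 16×16 input) make the multi-scale character of the filter bank explicit; a full network would pad or pool these to a common resolution before stacking layers.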

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 284
53 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles

Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster

Abstract:

Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards the improvement of their functional and colloidal stability. In that way, their applications in biomedical treatments and in manufacturing processes in the food, textile and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, anionic clay (layered double hydroxide, LDH) nanoparticles and heparin (HEP) polyelectrolyte. To characterize the structure and the colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in a strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, the LDH-SOD showed only limited resistance against salt-induced aggregation, and large irregularly shaped clusters formed within a short time even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules.
The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid materials formed stable suspension even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. Such a stabilization effect originated from electrostatic repulsion between the particles of the same sign of charge as well as from steric repulsion due to the osmotic pressure raised during the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes and hence, the LDH-SOD-HEP bionanocomposite possessed excellent activity in decomposition of superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to the improved colloidal stability and the good efficiency in scavenging superoxide radical ions, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes, wherever the aim is to decompose reactive oxygen species in suspensions.

Keywords: clay, enzyme, polyelectrolyte, formulation

Procedia PDF Downloads 265
52 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed; for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, the plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved over the years; however, there is still a need to improve their capability to simulate the response of anisotropically consolidated cohesive soils, especially the response in extension tests.
Thus, in this work, an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The numerical implementation of the model in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model through extensive comparisons of model simulations to experimental data on cohesive soils, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
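The return-mapping idea behind the radial return mentioned above can be illustrated with a textbook one-dimensional elastoplastic example with linear hardening. This is a generic sketch: the authors' bounding surface model and its adaptive multistep scheme are far more elaborate, and all parameter values here are assumptions:

```python
import numpy as np

# Elastic-predictor / plastic-corrector (return mapping) in 1D with
# linear isotropic hardening; hypothetical material constants in MPa.
E = 200e3        # elastic modulus
H = 10e3         # linear hardening modulus
SIGMA_Y = 250.0  # initial yield stress

def radial_return_1d(stress, alpha, d_strain):
    """One stress-update step: trial stress outside the yield surface
    is returned to the (hardened) surface via the consistency condition."""
    trial = stress + E * d_strain              # elastic predictor
    f = abs(trial) - (SIGMA_Y + H * alpha)     # yield function check
    if f <= 0.0:
        return trial, alpha                    # purely elastic step
    d_gamma = f / (E + H)                      # consistency condition
    stress_new = trial - np.sign(trial) * E * d_gamma   # plastic corrector
    return stress_new, alpha + d_gamma         # updated hardening variable

# Drive one elastic and one elastoplastic step
s, a = radial_return_1d(0.0, 0.0, 0.001)   # trial 200 MPa: still elastic
s2, a2 = radial_return_1d(s, a, 0.001)     # trial 400 MPa: plastic return
```

After the plastic step, the updated stress sits exactly on the hardened yield surface (here 250 + H·α ≈ 257.1 MPa), which is the defining property of the return map.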

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 313
51 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection

Authors: Petra Hypšová, Martin Seitl

Abstract:

One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expression, heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced by different interviewers affect physiological and behavioral changes. The main objective of this study is to apply a specific non-contact technology for deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking and behavioral analysis with FaceReader 9.0 software was used to achieve our goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. 
The interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy) in order to follow standardized procedures. The subject provided all information to the simulated interviewer. During follow-up analyses, facial temperature (main ROIs: forehead, cheeks, tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the variables studied varied with respect to the production of prepared truths and lies versus the production of spontaneous truths and lies, as well as with the variability of the simulated interviewer. The results also supported the assumption of variability in physiological and behavioral values between the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. The series of psychophysiological experiments provided evidence of variability in the areas of interest in the production of truths and lies to different interviewers. The combination of technologies used also enabled a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.

Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment

Procedia PDF Downloads 97