Search results for: histological features
221 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming
Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter
Abstract:
High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, in which, e.g., scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and on trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g., temperature levels, non-uniform heating, or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors, and electrical and optical paths, which are far more sensitive to deformation than the polymers themselves. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold.
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally performed using an orthotropic formulation of the hyperelastic model.
Keywords: hyperelastic, anisotropic, polymer film, thermoforming
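The abstract's semi-numerical model itself is not spelled out, but the "one tensile test per direction" idea can be illustrated with the simplest invariant-based hyperelastic law. The Python sketch below fits the single parameter of an incompressible neo-Hookean solid to a synthetic uniaxial test; the modulus value and data points are hypothetical, not taken from the paper.

```python
# Illustration only: fit the shear modulus mu of an incompressible
# neo-Hookean solid (the simplest invariant-based hyperelastic model)
# to a single uniaxial tensile test, as a stand-in for the abstract's
# semi-numerical calibration. All numbers are hypothetical.

def neo_hookean_stress(stretch, mu):
    """Nominal (engineering) uniaxial stress P = mu * (lambda - lambda^-2)."""
    return mu * (stretch - stretch ** -2)

def fit_mu(stretches, stresses):
    """Closed-form least-squares fit of mu from one tensile test."""
    g = [s - s ** -2 for s in stretches]
    return sum(p * gi for p, gi in zip(stresses, g)) / sum(gi * gi for gi in g)

# Synthetic "experiment" generated with mu = 2.0 (in MPa, say).
lam = [1.1, 1.2, 1.4, 1.6, 1.8]
data = [neo_hookean_stress(x, 2.0) for x in lam]
print(fit_mu(lam, data))  # recovers mu = 2.0
```

An anisotropic variant would repeat the same kind of fit per principal direction, one tensile test each, which is the economy of characterization the abstract argues for.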
Procedia PDF Downloads 616
220 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error in the FCOM prediction of the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
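The core correction loop described above, nudging each lookup-table cell toward in-flight measurements so the prediction error shrinks flight after flight, can be sketched in a few lines. All table keys, values, and the gain below are hypothetical stand-ins, not the authors' actual scheme:

```python
# Hypothetical sketch of the adaptive lookup-table idea: each cell of a
# fuel-flow table is nudged toward in-flight measurements, so the model
# error shrinks as more flights are processed.

def update_cell(table, key, measured, gain=0.5):
    """Move the stored prediction a fraction `gain` toward the measurement."""
    predicted = table[key]
    table[key] = predicted + gain * (measured - predicted)
    return abs(measured - table[key])

# Toy table: fuel flow (kg/h) indexed by (altitude band, Mach band).
table = {("FL350", "M0.80"): 1400.0}   # initial FCOM-style estimate
true_value = 1550.0                    # what the degraded aircraft actually burns

errors = []
for _ in range(5):                     # five "flights" over the same condition
    errors.append(update_cell(table, ("FL350", "M0.80"), true_value))

print(errors[0] > errors[-1])  # error shrinks with each correction
```

With a gain of 0.5 the residual error halves on every pass, which mirrors the abstract's observation that only a few iterations were needed to bring the fuel-flow error down.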
Procedia PDF Downloads 263
219 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas
Authors: Weijing Zhou, Yuting Lei, Francis Nolan
Abstract:
In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to the negative transfer of the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been found via empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of local dialects and Mandarin in Chinese EFL learners’ acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, the Suzhou dialect was selected as the target local dialect because its phonology is distinct from that of Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and raised in Suzhou, had acquired the Suzhou dialect in early childhood, and were able to communicate freely and fluently with each other in the Suzhou dialect, Mandarin, and English. The consonantal target segments were all the consonants of English, Mandarin, and the Suzhou dialect, in typical carrier words embedded in the carrier sentence "Say again." The control group consisted of two Suzhou dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetics laboratories at Yangzhou University, Soochow University, and Cambridge University, respectively; it was then transcribed, segmented, and analyzed acoustically via the Praat software, and finally analyzed statistically via Excel and SPSS.
The main findings are as follows. First, in terms of correct acquisition rates (CARs) of all the consonants, Mandarin ranked top (92.83%), English second (74.81%), and the Suzhou dialect last (70.35%); significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and the Suzhou dialect, demonstrating that Mandarin was overwhelmingly more robust than English or the Suzhou dialect in the subjects’ multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, plus the voice onset times (VOTs) of plosives, fricatives, and affricates, were much longer in all three languages than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou dialect standard speakers; and the formants of English nasals and approximants differed significantly from those of Mandarin and the Suzhou dialect, illustrating the inconsistent acoustic variations among the three languages. Third, in terms of typical pronunciation variations or errors, there were significant interlingual interactions among the three consonant systems, in which the Mandarin consonants were absolutely dominant, accounting for the strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout all the formal education periods from primary school to university.
Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas
Procedia PDF Downloads 94
218 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLIN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database.
The discriminative power of the six hub genes was evaluated using a classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
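The abstract does not state how the AUC of 0.873 was computed. As a hedged sketch of what such a number means, the snippet below computes an ROC AUC from hypothetical per-sample classifier scores via its Mann-Whitney interpretation: the probability that a randomly chosen disease sample scores above a randomly chosen control.

```python
# Sketch: ROC AUC via the Mann-Whitney interpretation. The scores below are
# hypothetical stand-ins for a classifier built on the six hub genes.

def auc(pos_scores, neg_scores):
    """Probability a positive (disease) score exceeds a negative (control) one."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos_scores) * len(neg_scores))

disease = [0.9, 0.8, 0.4]   # hypothetical classifier scores
control = [0.5, 0.3, 0.2]
print(auc(disease, control))  # 8 of 9 pairs ordered correctly
```

An AUC of 0.873, as reported, would mean roughly 87% of disease-control pairs are ranked correctly by the six-gene signature.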
Procedia PDF Downloads 65
217 Ensuring Safety in Fire Evacuation by Facilitating Way-Finding in Complex Buildings
Authors: Atefeh Omidkhah, Mohammadreza Bemanian
Abstract:
The issue of way-finding occupies a wide range of literature in architecture, yet despite the 50-year background of way-finding studies, it still lacks a comprehensive theory for indoor settings. Way-finding also plays a notable role in emergency evacuation. People in the panic of a fire emergency need to find the safe egress route correctly and in as little time as possible. The parameters of appropriate way-finding are mentioned in evacuation-related research, albeit in scattered form. This study reviews the fire-safety literature to extract a way-finding framework for the architectural design of a safe evacuation route. A review of research trends is conducted, together with a review of the applied methodological approaches. Then, by analyzing eight original studies on way-finding parameters in fire evacuation, the main parameters that affect way-finding in the emergency situation of a fire incident are extracted, and a framework is developed based on them. Results show that issues related to the exit route and emergency evacuation can be traced in task-oriented way-finding studies. This research trend aims at a high-level framework and, in the best case, a theory with the explanatory capability to account for differences in way-finding between indoor and outdoor settings, complex and simple buildings, and different building types or transitional spaces. The methodological advances place evacuation way-finding research in line with three approaches, of which the last is the most up-to-date and precise method for researching this subject: real actors and hypothetical stimuli, as in evacuation experiments; hypothetical actors and stimuli, as in agent-based simulations; and real actors and semi-real stimuli, as in virtual-reality environments augmented with multi-sensory simulation.
Findings from the analysis of the eight original studies of way-finding in evacuation indicate that the emergency way-finding design of a building should consider two levels of space-cognition problems at the time of an emergency, together with their performance consequences in the built environment. Four major classes of way-finding problems were defined and discussed as the main parameters that should be addressed in the design and interior of a building: visual information deficiency, confusing layout configuration, improper navigation signage, and demographic issues. In the design phase of complex buildings, which face more reported way-finding problems, it is important to consider the interior components with regard to the building's occupancy type and the behavior of its occupants, to determine the components that tend to become landmarks, and to set the architectural features of the egress route in line with the directions in which they navigate people. Research on topological cognition of the environment and its effect on the way-finding task in emergency evacuation is proposed for future work.
Keywords: architectural design, egress route, way-finding, fire safety, evacuation
Procedia PDF Downloads 172
216 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education
Authors: Sandra Arango-Caro, Kristine Callis-Duehl
Abstract:
The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g. The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities for 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware and interested in plant science. Students that participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interests in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. 
The PAVRLL was established in the fall of 2019, and since then it has trained 15 educators, three of whom will implement the AVR programs in the fall of 2021. Seven students have worked on the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained and classroom implementations have been very limited. It is expected that in the fall of 2021 students will return to school in person, and that by the spring of 2022 the PAVRLL activities will be fully implemented. This will allow the collection of enough student assessment data to provide insights into the benefits and best practices of using AVR technologies in classrooms. The PAVRLL uses cutting-edge educational technologies to promote science education and assess their benefits, and it will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs, where students can undertake authentic research experiences using real Danforth research data, based on programs the Education Lab has already used in classrooms.
Keywords: assessment, augmented reality, education, plant science, virtual reality
Procedia PDF Downloads 172
215 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem
Authors: Markandeya Kunchi
Abstract:
Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes essential, as new products, features, and services are launched very frequently, and the company incurs losses if an unusable, inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method, and role of the design team in a mid-level company. The paper addresses the usual apprehensions of a company about inculcating UT within the team, and identifies factors like limited monetary resources, the lack of a usability expert, narrow timelines, and a lack of understanding from higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has its own severe repercussions, such as very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The consequences of having no UT process in place within the team, or of conducting UT through vendors, are bad user experiences for customers interacting with the product and badly designed products that are neither useful nor usable. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates, and increased uninstall rates. Thus, there was a need for a leaner UT system that could solve all these issues for the company. This paper focuses on optimizing the UT process with a collaborative method.
The degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes responsibility for conducting and analysing the UT. The UT is usually formative: designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging the various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. These include improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days needed to complete the entire project, from planning between teams to publishing the UT report.
Keywords: collaborative method, e-commerce, product management method, usability testing
Procedia PDF Downloads 118
214 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px while the resolution of the Depth frame is 512px x 424px. As the resolution of the frames is not equal, RGB pixels are mapped onto the Depth pixels to make sure data is not lost even if the resolution is lower. The resulting RGB-D frames are collected and using the depth coordinates, a three dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. 
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data are used to correct any errors in the depth data for the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which produces the rough surface of the limb; this technique forms an approximation of the limb surface. The mesh is smoothened to obtain a smooth outer layer and thus an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast: it is transferred to a CAD/CAM design file so that the brace can be designed above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, and would be quicker than traditional plaster-of-Paris cast modelling; the overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for such modelling.
Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration
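The registration step above can be sketched minimally: each view's point cloud is expressed in its own Kinect frame, and the known offset of its reference cube relative to the chosen origin cube brings it into the common frame. All coordinates below are hypothetical, and the 90° rotations between views are omitted for brevity (a real pipeline would apply a full rigid transform per view).

```python
# Minimal registration sketch: merge two point clouds into the frame of the
# origin cube by applying each view's known cube offset. Coordinates are
# hypothetical; real data would also need a rotation per view.

def translate(points, offset):
    """Shift every 3-D point by the reference-cube offset of its view."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]

# The same physical point seen by two Kinects in their own frames.
view_a = [(0.5, 0.25, 1.0)]        # already in the origin cube's frame
view_b = [(-0.25, 0.25, 0.5)]      # needs its cube-to-cube offset applied
offset_b = (0.75, 0.0, 0.5)        # known from the skeleton-cube geometry

merged = translate(view_a, (0.0, 0.0, 0.0)) + translate(view_b, offset_b)
print(merged[0] == merged[1])  # both views coincide after alignment
```

Once all views are merged like this, the combined cloud is what the Delaunay triangulation step operates on.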
Procedia PDF Downloads 189
213 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change
Authors: Damian Islas
Abstract:
Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information about how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and should thus be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism-versus-antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot. There are two versions of this position: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR).
Under ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that, when it comes to the unobservables theorized about within fundamental physics, relations exist but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various, subtly different versions of the ESR and OSR stances, which show a progression, through eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that these relations between unobserved entities exist.
Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism
Procedia PDF Downloads 117
212 Short and Long Crack Growth Behavior in Ferrite Bainite Dual Phase Steels
Authors: Ashok Kumar, Shiv Brat Singh, Kalyan Kumar Ray
Abstract:
There is growing awareness of the need to design steels against fatigue damage. Ferrite-martensite dual-phase steels are known to exhibit favourable mechanical properties like good strength, ductility, toughness, continuous yielding, and a high work-hardening rate. However, dual-phase steels containing bainite as the second phase are potential alternatives to ferrite-martensite steels for certain applications where good fatigue properties are required. The fatigue properties of dual-phase steels are popularly assessed by the nature of the variation of the crack growth rate (da/dN) with the stress intensity factor range (∆K), and by the magnitude of the fatigue threshold (∆Kth) for long cracks. There is an increased emphasis on understanding not only the long-crack fatigue behavior but also the short-crack growth behavior of ferrite-bainite dual-phase steels. The major objective of this report is to examine the influence of microstructure on the short- and long-crack growth behavior of a series of developed dual-phase steels with varying amounts of bainite. Three low-carbon steels containing Nb, Cr, and Mo as microalloying elements were selected for producing ferrite-bainite dual-phase microstructures by suitable heat treatments. The heat treatment consisted of austenitizing the steel at 1100°C for 20 min, cooling at different rates in air prior to soaking in a salt bath at 500°C for one hour, and finally quenching in water. Tensile tests were carried out on 25 mm gauge length specimens with 5 mm diameter at a nominal strain rate of 0.6×10⁻³ s⁻¹ at room temperature. Fatigue crack growth studies were made on a recently developed specimen configuration using a rotating bending machine. The crack growth was monitored by interrupting the test and observing the specimens under an optical microscope connected to an image analyzer. The estimated crack lengths (a) at varying numbers of cycles (N) in the different fatigue experiments were analyzed to obtain log da/dN vs. log ∆K curves for determining ∆Kthsc.
The microstructural features of these steels have been characterized, and their influence on near-threshold crack growth has been examined. This investigation, in brief, involves (i) the estimation of ∆Kth,sc and (ii) the examination of the influence of microstructure on the short and long crack fatigue thresholds. The maximum fatigue threshold values obtained from short crack growth experiments on various specimens of dual-phase steels containing different amounts of bainite are found to increase with increasing bainite content in all the investigated steels. The variations in fatigue behavior of the selected steel samples have been explained by considering the varying amounts of the constituent phases and their interactions with the generated microstructures during cyclic loading. Quantitative estimation of the different types of fatigue crack paths indicates that the propensity of a crack to pass through the interfaces depends on the relative amount of the microstructural constituents. The fatigue crack path is found to be predominantly intra-granular, except for the steels containing > 70% bainite, in which it is predominantly inter-granular.
Keywords: bainite, dual phase steel, fatigue crack growth rate, long crack fatigue threshold, short crack fatigue threshold
Procedia PDF Downloads 201
211 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, including 1D structures like beams and frames, 2D structures like membranes, plates, and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. TWCB demonstrate many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, and heterogeneity, which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking, particularly in slender beams, lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing just to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate formulations (u/p) are currently needed to model incompressible materials, and a single unified formulation is missing from the literature. Hence, the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods are presented in this paper.
Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
Procedia PDF Downloads 65
210 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival
Authors: Ichiro Takahashi
Abstract:
One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below its productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely, collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment will result in insufficient productive capacity. However, for an economy characterized by a roundabout production method, gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide such buoyancy? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) Multi-period gestation means that once a firm starts a new investment, it continues to invest over some subsequent periods.
During these gestation periods, the excess demand created by the investing firm will spill over to ignite new investment by other firms that are supplying investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom if the gestation period is short. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but its actual labor demand tends to be determined by the derived labor demand; thus, the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is key to stability: a single firm cannot expect the benefit of such an increased aggregate demand from other firms.
Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability
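The spillover mechanism sketched above can be illustrated with a deliberately minimal toy simulation. This is not the paper's model: the one-idle-firm trigger rule and every parameter are invented for illustration. With a single firm and one-period gestation, activity dies out immediately; with many firms and multi-period gestation, the firms' demand sustains one another.

```python
def simulate(num_firms, gestation, periods=20):
    """Toy spillover dynamics: a firm that starts a project keeps
    investing for `gestation` periods, and as long as any firm is
    active, its excess demand induces one idle firm to start.
    Returns cumulative investment activity (firm-periods)."""
    remaining = [0] * num_firms
    remaining[0] = gestation            # one firm kicks things off
    total = 0
    for _ in range(periods):
        active = sum(1 for r in remaining if r > 0)
        total += active
        if active:                      # excess demand spills over
            for i in range(num_firms):
                if remaining[i] == 0:   # first idle firm starts a project
                    remaining[i] = gestation
                    break
        remaining = [max(0, r - 1) for r in remaining]
    return total

lone = simulate(num_firms=1, gestation=1)   # activity collapses at once
crowd = simulate(num_firms=5, gestation=3)  # activity is self-sustaining
```

Even this crude rule reproduces the qualitative point: the single-firm, one-period economy records only one period of activity, while the many-firm, multi-period economy keeps investing throughout the run.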
Procedia PDF Downloads 162
209 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study
Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem
Abstract:
Preeclampsia is a major complication of pregnancy. Prediction and management of preeclampsia are a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life. There is a huge cost burden on the health care system associated with preeclampsia and its complications. Preeclampsia is divided into two types: early onset preeclampsia develops before 34 weeks of gestation, and late onset preeclampsia develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognoses, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly all over the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar; the purpose of this study is to compare the incidence and risk factors of both early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women’s Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, which was approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure during admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by hospital information management using preeclampsia codes.
Out of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital's electronic system (Cerner), and the remaining 22% from patients' paper records. We carried out detailed data extraction from 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. The clinical findings on preeclampsia will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.
Keywords: preeclampsia, incidence, risk factors, maternal
Procedia PDF Downloads 140
208 Loss Quantification Archaeological Sites in Watershed Due to the Use and Occupation of Land
Authors: Elissandro Voigt Beier, Cristiano Poleto
Abstract:
The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas economically exploited by mechanized seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of different micro-basins of differing size, ranging between 1,000 m² and 10,000 m² (the largest and the smallest, respectively), all with a large number of occurrences and outcrop locations of archaeological material and high material density in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was used to help understand, theoretically, the old landscape and the occupation preferences of ancient peoples through their settlements, relating them to what was observed in the field. The mapping was followed by cartographic development of the region through the production of land-elevation products, which in turn contributed to understanding the distribution of the materials; the definition and scope of the dispersed material; and, as a result of human activities, the development of a map of soil turnover caused by the mechanization of the in situ material. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient historical occupation with current human occupation.
The third stage of the project involved the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, cleaned following previously published methodology, measured, and quantified. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the region's ancient history, all collected outside of their environmental and historical context, which has itself been considerably altered and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain, and their association with ancient history), while attributes such as the individual lithology and functionality of each object were disregarded. As preliminary results, we can point out the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which generate transport of archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. It is expected, by this process, to reach a reliable model of high accuracy which can be applied to archaeological sites of lower density without incurring significant error.
Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land
Procedia PDF Downloads 276
207 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon
Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim
Abstract:
There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications. Research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m² g⁻¹, which is the highest surface area so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at a width of 11.26 Å, suggesting dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, for making different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg⁻¹. The energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg⁻¹. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1M H2SO4) and organic (1M LiPF6 in EC-DMC) electrolytes, delivering energy densities of 6.6 Wh kg⁻¹ and 16.3 Wh kg⁻¹, respectively. The cycling retentions obtained at a current density of 1 A g⁻¹ were ~85.8, ~87.0, ~82.2 and ~58.8% after 2500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively.
In addition, characterization studies were performed by elemental and proximate composition analysis, thermogravimetry, field emission scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, which correspond to sp² graphitic C, sp³ graphitic C, C-O, C=O and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. The findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.
Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors
Procedia PDF Downloads 241
206 Nutritional Genomics Profile Based Personalized Sport Nutrition
Authors: Eszter Repasi, Akos Koller
Abstract:
Our genetic information determines our appearance, physiology, sports performance, and all our other features. Efforts to maximize the performance of athletes have adopted a science-based approach to nutritional support. Nowadays, genetic studies have blended with nutritional sciences, and a dynamically evolving new research field has appeared. Nutritional genomics needs to be used by nutrition experts. This recent field of nutritional science can provide a way to reach the best sport performance using correlations between the athlete’s genome, nutrition, and molecules, including the human microbiome (links between food, microbiome, and epigenetics), nutrigenomics, and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to get information about athletes, such as a nutritional genomics profile (including the determination of the oral and gut microbiome and DNA-coded reactions to food components), which can shape the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics; the heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in the state of health and well-being, and there are many links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to create a unique sports nutrition diet plan. Using functional foods and the right food components can affect health status, and thus sports performance. Scientists need to determine the optimal response, since the effects of nutrients on health act through the genome, promoting metabolites and resulting in changes in physiology.
Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of those nutrients. Nutritional deficiencies and failures that are brought under control, deteriorations in health state that are prevented, or a newly discovered food intolerance, when observed by a proper medical team, can support better sports performance. It is important that the dietetics profession be informed on gene-diet interactions that may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for the documentation and monitoring of health state data and risk factors can support and warn the medical team, enabling early action and proper health service in time. This model can provide personalized nutrition advice from status control, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in the case of athletes.
Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan
Procedia PDF Downloads 169
205 Accessible Facilities in Home Environment for Elderly Family Members in Sri Lanka
Authors: M. A. N. Rasanjalee Perera
Abstract:
The world is facing several problems due to the increasing elderly population. In Sri Lanka, along with the complexity of modern society and the structural and functional changes of the family, “caring for elders” is emerging as a social problem. This situation may intensify as the country moves into a middle-income society. Seeking higher education and related career opportunities, and urban living in modern housing, are new trends through which several problems are generated. Among the many issues related to elders, the lack of accessible and appropriate facilities in their houses, as well as in public buildings, can be identified as a major problem. This study argues that the welfare facilities provided for elderly people in the country, particularly in the home environment, are not adequate. It is questionable whether modern housing features such as bathrooms, pantries, lobbies, and leisure areas match elders’ physical and mental needs. Consequently, elders have to face domestic accidents and many other difficulties within their living environments; hospital records in the country also prove this fact. Therefore, this study tries to identify how far modern houses are suited to elders’ needs. The study further questions whether aging is a considered matter when people are buying, planning, and renovating houses. A randomly selected sample of 50 houses was observed and 50 persons were interviewed around the Maharagama urban area in Colombo district to obtain primary data, while relevant secondary data and information were used for an in-depth analysis. The study clearly found that none of the houses included in the sample considered elders’ needs in planning, renovating, or arranging the home. Instead, most of the families gave priority to a rich and elegant appearance and the modern facilities of the houses.
In particular, bathrooms, pantries, large sitting areas, balconies, parking slots for two vehicles, and parapet walls with roller-gates are the main concerns. A significant factor found here is that even though many children of the aged are in middle age and reaching their older years at present, they do not plan their future living within a safe and comfortable home, despite hoping to spend the latter part of their lives in their current homes. This fact highlights that not only other responsible parts of society, but also those who are reaching their older ages, are ignoring the problems of the aged. At the same time, it was found that more than 80% of old parents do not like to stay at their children’s homes, as the living environments in such modern homes are not familiar or convenient for them. In this context, the aged in Sri Lanka may have to live alone in their own homes due to the current societal trend of migrating to urban living in modern houses. At the same time, current urban families who live in modern houses may have to face adding accessible facilities to their home environment, as current modern housing facilities may not be appropriate for a better life in the latter part of their lives.
Keywords: aging population, elderly care, home environment, housing facilities
Procedia PDF Downloads 125
204 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating the pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants, an approach that lacks flexibility and scalability when it comes to the height of the plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater flexibility. The main application is in tree nurseries, where plant height can range from a few centimeters to a few meters and trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, such as row gaps that are falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% of rows covered and 25% damaged plants. This method is further combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field.
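A minimal sketch of row detection from a LiDAR point cloud can be given as follows. This is purely illustrative and is not one of the four methods evaluated in the paper: the projection-and-binning approach, all parameters, and the synthetic data are assumptions. The idea is to project the top-down point cloud onto the axis perpendicular to the row direction and report densely hit bins as row offsets.

```python
import numpy as np

def detect_rows(points, row_direction, bin_width=0.1, min_hits=5):
    """Detect plant rows in a 2D top-down projection of a LiDAR
    point cloud. `row_direction` is a vector along the rows; the
    points are binned along the perpendicular axis, and bins with
    enough returns are reported as row offsets."""
    d = np.asarray(row_direction, dtype=float)
    d /= np.linalg.norm(d)
    perp = np.array([-d[1], d[0]])          # axis perpendicular to the rows
    offsets = np.asarray(points) @ perp     # signed distance along that axis
    lo = offsets.min()
    bins = np.floor((offsets - lo) / bin_width).astype(int)
    counts = np.bincount(bins)
    return [lo + (i + 0.5) * bin_width      # bin centres with enough hits
            for i, c in enumerate(counts) if c >= min_hits]

# two synthetic rows, 1.5 m apart, running along the x-axis
rng = np.random.default_rng(0)
pts = np.concatenate([
    np.column_stack([rng.uniform(0, 10, 50), rng.normal(0.0, 0.02, 50)]),
    np.column_stack([rng.uniform(0, 10, 50), rng.normal(1.5, 0.02, 50)]),
])
rows = detect_rows(pts, row_direction=[1, 0])
# detected offsets cluster near y = 0.0 and y = 1.5
```

In the paper's setting, such raw detections would then feed the pose graph as landmark constraints, which is what lets localization reject gaps falsely classified as row ends.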
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants when they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
Procedia PDF Downloads 139
203 A Multipurpose Inertial Electrostatic Magnetic Confinement Fusion for Medical Isotopes Production
Authors: Yasser R. Shaban
Abstract:
A practical multipurpose device for medical isotope production is much wanted by clinical centers and researchers. Unfortunately, the major supply of these radioisotopes currently comes from aging sources, and there is a great deal of uneasiness in the domestic market. There are also many cases where the cost of certain radioisotopes is too high for their introduction on a commercial scale, even though the isotopes might have great benefits for society. Medical isotopes such as PET (positron emission tomography) radiotracers, Technetium-99m, Iodine-131, and Lutetium-177 could feasibly be generated by a single unit named the IEMC (Inertial Electrostatic Magnetic Confinement). The IEMC fusion vessel is an upgrade of the Inertial Electrostatic Confinement (IEC) fusion vessel, on which comprehensive experimental work was carried out earlier with promising results. The principle of inertial electrostatic magnetic confinement (IEMC) fusion is based on forcing the binary fuel ions to interact in opposite directions in ion cyclotron orbits, with different kinetic energies in order to have equal compression (forces) and with different ion cyclotron frequencies ω in order to increase the rate of intersection. The IEMC features a fusion volume greater than that of the IEC by several orders of magnitude. The particle rates from the IEMC approach are projected to be 8.5 x 10¹¹ (p/s), ~0.2 microampere of protons, for the D/He-3 fusion reaction and 4.2 x 10¹² (n/s) for the D/T fusion reaction. The projected particle yields (neutrons and protons) are suitable for on-site medical isotope production by a single unit, with no change in the fusion vessel other than the fuel gas. PET radiotracers are usually produced on-site by a medical ion accelerator, whereas Technetium-99m (Tc-99m) is usually produced off-site at the irradiation facilities of nuclear power plants. Typically, hospitals receive a molybdenum-99 isotope container; the isotope decays to Tc-99m with a half-life of 2.75 days.
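The role of the cyclotron frequency in making the counter-rotating beams intersect can be illustrated with a quick calculation. The field strength and the singly-charged ion states below are illustrative assumptions, not device parameters from the paper: since ω = qB/m is independent of kinetic energy, two ion species in the same field orbit at fixed, different frequencies and therefore repeatedly sweep past each other.

```python
E = 1.602176634e-19    # elementary charge, C
U = 1.66053906660e-27  # atomic mass unit, kg

def cyclotron_frequency(mass_kg, charge_c, b_tesla):
    """Angular cyclotron frequency w = qB/m (rad/s);
    note it does not depend on the ion's kinetic energy."""
    return charge_c * b_tesla / mass_kg

B = 1.0                # example magnetic field, T (illustrative)
m_d = 2.0141 * U       # deuteron mass
m_he3 = 3.0160 * U     # helium-3 mass
w_d = cyclotron_frequency(m_d, E, B)
w_he3 = cyclotron_frequency(m_he3, E, B)  # assuming singly charged He-3
# the lighter deuteron orbits at the higher frequency, so the two
# counter-rotating beams drift past each other and intersect repeatedly
```

The frequency ratio equals the inverse mass ratio, so the beat between the two orbits, and hence the intersection rate, is set purely by the fuel pair and the field.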
Even though the projected current from the IEMC is less than the proton current from a medical ion accelerator, the IEMC vessel is simpler, with fewer components and lower power consumption, which adds the new value of bringing PET radiotracers to most clinical centers. On the other hand, the projected neutron flux from the IEMC is less than the thermal neutron flux at the irradiation facilities of nuclear power plants; in the IEMC case, however, the production of Technetium-99m is suggested to take place in the resonance region, where the resonance integral cross section is two orders of magnitude higher than for the thermal flux, so it can be said that the net activities from the two routes are even. Besides, a particle accelerator cannot be considered a multipurpose particle production unit unless a significant change is made to switch it from neutron mode to proton mode or vice versa. In conclusion, obtaining the projected fusion yield from the IEMC is straightforward, since only a slight change to the primary IEC vessel and its ion source is required.
Keywords: electrostatic versus magnetic confinement fusion vessel, ion source, medical isotope production, neutron activation
Procedia PDF Downloads 342
202 The Publishing Process and Results of the Chinese Annotated Edition of John Dewey’s “Experience and Education: The 60th Anniversary Edition”
Authors: Wen-jing Shan
Abstract:
The Chinese annotated edition of “Experience and Education: The 60th Anniversary Edition,” originally written in English by John Dewey (1859-1952), was published in 2015 by this author. A report on the process and results of the translation and annotation of the book is the purpose of this paper. It is worth mentioning that the original 1938 edition was considered the best concise statement on education by John Dewey, one of the most important educational theorists of the twentieth century. One of the features of this 60th anniversary edition is that the original publisher, the Kappa Delta Pi International Honor Society, invited four contemporary Deweyan scholars, each of whom had been awarded the Society’s Laureate Scholar honor (Dewey was the first to receive it), to write a review of the book: Maxine Greene (1917-2014), Philip W. Jackson (1928-2015), Linda Darling-Hammond (1951-), and O. L. Davis, Jr. (1928-). The original 1938 edition was translated into Chinese five times after its publication in the U.S.A.: three times in the 1940s, once in the 1990s, and once in the 2010s. Nonetheless, these five translations have few or no annotations, suffer from some misinterpretations, and lack information. The author retranslated and annotated the book to make the interpretation more faithful, expressive, and elegant, and to provide readers with more understanding and more correct information. This author started the project of translation and annotation, sponsored by the Taiwan Ministry of Science and Technology, in August 2011 and finished and published it by July 2015. The work the author did was divided into three stages.
First, in the preparatory stage of the project, the summary of each chapter, the rationale of the book, the textual commentary, the development of the original and Chinese editions, and reviews and criticisms, as well as Dewey’s biography and bibliography, were initially investigated. Secondly, on the basis of this preliminary work, the translation with annotation of Experience and Education, an epitome of Dewey’s biography and bibliography, a chronology, and a critical introduction to Experience and Education were written. In the critical introduction, Dewey’s philosophy of experience and his educational ideas are examined along the timeline of human thought, and the vast literature about Dewey and his work is drawn on to reveal the historical significance of Experience and Education for the modern age and to make the critical introduction more knowledgeable. Third, the final stage took another two years to review and revise the draft of the work and send it for publication. There are two parts in the book. The first part is a scholarly introduction including a short chronicle of Dewey’s life; Dewey’s mind, people, and life; the importance of “Experience and Education”; and the necessity of re-translating and re-annotating “Experience and Education” into Chinese. The second part is the re-translation and re-annotation itself, including Dewey’s “Experience and Education” and the four papers written by contemporary scholars.
Keywords: John Dewey, Experience and Education: The 60th Anniversary Edition, translation, annotation
Procedia PDF Downloads 159
201 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. An emerging field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to treat the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, so that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3, CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a 3D color-coded activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. The first results on several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system is able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG signal processing, rehabilitation
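As an illustration only (not the authors' implementation), the channel-selection and threshold-detection steps of the abstract above can be sketched in a few lines of Python. The sampling rate, threshold, and posterior-alpha rejection margin below are assumptions chosen for the toy example:

```python
import numpy as np

FS = 250             # sampling rate in Hz (assumed)
ALPHA = (8.0, 13.0)  # alpha / mu frequency band in Hz
MOTOR = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def band_power(x, fs=FS, band=ALPHA):
    """Mean spectral power of signal x inside the given frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()

def detect_mu(eeg, threshold, margin=0.5):
    """Return the motor channel carrying a mu-rhythm, or None.

    eeg       -- dict mapping channel name -> 1-D signal array
    threshold -- minimum alpha-band power for a detection
    margin    -- channels outside the motor set must stay below
                 margin * main-channel power, rejecting posterior alpha
    """
    powers = {ch: band_power(sig) for ch, sig in eeg.items()}
    main = max(MOTOR, key=lambda ch: powers[ch])
    if powers[main] < threshold:
        return None
    rest = [powers[ch] for ch in eeg if ch not in MOTOR]
    if rest and max(rest) > margin * powers[main]:
        return None  # likely posterior alpha leaking in, not a mu-rhythm
    return main

# toy example: a 10 Hz rhythm on C3, low-level noise everywhere else
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
eeg = {ch: 0.1 * rng.standard_normal(FS) for ch in MOTOR + ["O1", "O2"]}
eeg["C3"] = eeg["C3"] + np.sin(2 * np.pi * 10 * t)
print(detect_mu(eeg, threshold=100.0))
```

The spatial-template and inverse-localization steps that follow in the real system are deliberately omitted here; the sketch only shows the frequency-domain selection logic.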
Procedia PDF Downloads 385
200 Examining the Effects of Ticket Bundling Strategies and Team Identification on Purchase of Hedonic and Utilitarian Options
Authors: Young Ik Suh, Tywan G. Martin
Abstract:
Bundling is a common marketing practice today. In the past decades, both academics and practitioners have increasingly emphasized the strategic importance of bundling in today’s markets. The reason for the increased interest in bundling strategy is the belief that it can significantly increase an organization’s profits over time while being convenient for the customer. However, little effort has been made to study ticket bundling and purchase considerations between hedonic and utilitarian options in the context of sport consumer behavior. Consumers often face choices between utilitarian and hedonic alternatives in decision making. When consumers purchase certain products, they are interested only in the functional dimensions, called utilitarian dimensions. Others focus more on hedonic features such as fun, excitement, and pleasure. Thus, the current research examines how utilitarian and hedonic consumption can vary in a typical ticket purchasing process. The purpose of this research is to investigate two research themes: (1) the differential effect of discount framing on ticket bundling with utilitarian versus hedonic options, and (2) the moderating effect of team identification on ticket bundling. To test the research hypotheses, an experimental study using a two-way ANOVA, 3 (team identification: low, medium, high) X 2 (discount frame: ticket bundled with a utilitarian product vs. a hedonic product), with a mixed factorial design, will be conducted to determine whether there is a statistically significant difference between purchase intentions under the two discount frames of ticket bundle sales across team identification levels.
To compare mean differences between the two settings, we will create two ticket bundle conditions: (1) offering a discount on a ticket ($5 off) if purchased along with a utilitarian product (e.g., an iPhone 8 case, t-shirt, or cap), and (2) offering a discount on a ticket ($5 off) if purchased along with a hedonic product (e.g., pizza, a drink, or being featured on the big screen). The findings of the current ticket bundling study are expected to make theoretical and practical contributions by extending the research and literature on the relationship between team identification and sport consumer behavior. Specifically, this study can provide a reliable and valid framework for understanding the role of team identification as a moderator of behavioral intentions such as purchase intentions. From an academic perspective, the study will be the first known attempt to understand consumer reactions toward different discount frames related to ticket bundling. Even though the game ticket itself is the major commodity of sport event attendance and is significantly related to teams’ revenue streams, most recent ticket pricing research has been conducted from an economic or cost-oriented perspective rather than a consumer psychological one. For sport practitioners, this study will also provide significant implications. The results will suggest that sport marketers may need to develop different ticketing promotions for loyal and non-loyal fans. Since loyal fans are more concerned with ticket price than with tie-in products when they see ticket bundle sales, advertising campaigns should focus more on discounting the ticket price.
Keywords: ticket bundling, hedonic, utilitarian, team identification
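For illustration, the 3 × 2 factorial analysis planned above can be sketched from first principles as a between-subjects simplification of the mixed design. Everything below is synthetic (invented effect sizes, cell sizes, and scores, not study results):

```python
import numpy as np

def two_way_anova(y, a, b):
    """F statistics for a two-way between-subjects ANOVA on a balanced
    design: factor A (e.g. team identification), factor B (e.g. discount
    frame), and their A x B interaction."""
    y, a, b = np.asarray(y, float), np.asarray(a), np.asarray(b)
    la, lb = np.unique(a), np.unique(b)
    grand = y.mean()
    # sums of squares for main effects, cells, interaction, and error
    ss_a = sum((a == i).sum() * (y[a == i].mean() - grand) ** 2 for i in la)
    ss_b = sum((b == j).sum() * (y[b == j].mean() - grand) ** 2 for j in lb)
    cells = [y[(a == i) & (b == j)] for i in la for j in lb]
    ss_cells = sum(len(c) * (c.mean() - grand) ** 2 for c in cells)
    ss_ab = ss_cells - ss_a - ss_b
    ss_err = sum(((c - c.mean()) ** 2).sum() for c in cells)
    df_a, df_b = len(la) - 1, len(lb) - 1
    df_err = len(y) - len(la) * len(lb)
    mse = ss_err / df_err
    return ss_a / df_a / mse, ss_b / df_b / mse, ss_ab / (df_a * df_b) / mse

# synthetic scores: 3 identification levels x 2 frames, 20 per cell,
# with a frame effect that is stronger for highly identified fans
rng = np.random.default_rng(1)
ident = np.repeat([0, 1, 2], 40)            # low / medium / high
frame = np.tile(np.repeat([0, 1], 20), 3)   # utilitarian / hedonic bundle
scores = (4.0 + 0.5 * frame + 0.8 * (ident == 2) * frame
          + rng.standard_normal(120))
f_ident, f_frame, f_inter = two_way_anova(scores, ident, frame)
```

A significant interaction F here would correspond to the hypothesized moderating role of team identification.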
Procedia PDF Downloads 166
199 Management of Urine Recovery at the Building Level
Authors: Joao Almeida, Ana Azevedo, Myriam Kanoun-Boule, Maria Ines Santos, Antonio Tadeu
Abstract:
The effects of the increasing expansion of cities and of climate change have encouraged European countries and regions to adopt nature-based solutions able to mitigate environmental issues and improve life in cities. Among these strategies, green roofs and urban gardens have been considered ingenious solutions, since they have the potential to improve air quality, prevent floods, reduce the heat island effect, and restore biodiversity in cities. However, additional consumption of fresh water and mineral nutrients is necessary to sustain larger green urban areas. This communication discusses the main technical features of a new system to manage urine recovery at the building level and its application in green roofs. The depletion of critical nutrients like phosphorus constitutes an emergency, and their elimination through urine is one of the principal causes of their loss. Thus, urine recovery in buildings may offer numerous advantages, providing a valuable fertilizer abundantly available in cities and reducing the load on wastewater treatment plants. Although several urine-diverting toilets have been developed for this purpose and some experiments using urine directly in agriculture have already been carried out in Europe, several challenges have emerged concerning the collection, sanitization, storage, and application of urine in buildings. To the best of our knowledge, current buildings are not designed to receive these systems, and no integrated solutions are known that can self-manage the whole process of urine recovery, including the separation, maturation, and storage phases. Additionally, even if from a hygiene point of view human urine may be considered a relatively safe fertilizer, the risk of disease transmission needs to be carefully analysed. A reduction in microorganisms can be achieved by storing the urine in closed tanks.
However, several factors may affect this process, which may result in a higher survival rate for some pathogens. In this work, urine effluent was collected under real conditions, stored in closed containers, and kept in climatic chambers under variable conditions simulating cold, temperate, and tropical climates. These samples were subjected to a first physicochemical and microbiological control, which was repeated over time. The results obtained so far suggest that maturation conditions were reached at all three temperatures and that a storage period of less than three months is required to achieve a strong depletion of microorganisms. The authors are grateful for the Project WashOne (POCI-01-0247-FEDER-017461) funded by the Operational Program for Competitiveness and Internationalization (POCI) of Portugal 2020, with the support of the European Regional Development Fund (FEDER).
Keywords: sustainable green roofs and urban gardens, urban nutrient cycle, urine-based fertilizers, urine recovery in buildings
Procedia PDF Downloads 165
198 Case Report of Left Atrial Myxoma Diagnosed by Bedside Echocardiography
Authors: Anthony S. Machi, Joseph Minardi
Abstract:
We present a case report of a left atrial myxoma diagnosed by bedside transesophageal echocardiography (TEE). Left atrial myxoma is the most common benign cardiac tumor; it can obstruct blood flow and cause valvular insufficiency. Common symptoms include dyspnea, pulmonary edema, and other features of left heart failure, in addition to embolic events from released tumor fragments. The availability of bedside ultrasound equipment is essential for the quick diagnosis and treatment of various emergency conditions, including cardiac neoplasms. A 48-year-old Caucasian female with a four-year history of an untreated renal mass and anemia presented to the ED with two months of sharp, intermittent, bilateral flank pain radiating into the abdomen. She also reported intermittent vomiting and constipation, along with generalized body aches, night sweats, and a 100-pound weight loss over the last year. She had a CT in 2013 showing a 3 cm left renal mass and a second CT in April 2016 showing a 3.8 cm left renal mass, along with a past medical history of diverticulosis, chronic bronchitis, dyspnea on exertion, uncontrolled hypertension, and hyperlipidemia. Her maternal family history is positive for breast cancer, hypertension, and Type II diabetes; her paternal family history is positive for stroke. She was a current everyday smoker with an 11 pack-year history. Alcohol and drug use were denied. Physical exam was notable for a Grade II/IV systolic murmur at the right upper sternal border, dyspnea on exertion without angina, and a tender left lower quadrant. Her vitals and labs were notable for a blood pressure of 144/96, a heart rate of 96 beats per minute, pulse oximetry of 96%, hemoglobin of 7.6 g/dL, hypokalemia, hypochloremia, and multiple other abnormalities. Physicians ordered a CT to evaluate her flank pain, which revealed a 7.2 x 8.9 x 10.5 cm mixed cystic/solid mass in the lower pole of the left kidney and a filling defect in the left atrium.
Bedside TEE was ordered to follow up on the filling defect. TEE showed an ejection fraction of 60-65% and visualized a mobile 6 x 3 cm mass in the left atrium attached to the interatrial septum and extending into the mitral valve. Cardiothoracic Surgery and Urology were consulted and confirmed a diagnosis of left atrial myxoma and clear cell renal cell carcinoma. The patient returned a week later due to worsening nausea and vomiting and underwent emergent nephrectomy, lymph node dissection, and colostomy due to a necrotic colon. Her condition declined over the next four months due to lung and brain metastases, infections, and other complications until she passed away.
Keywords: bedside ultrasound, echocardiography, emergency medicine, left atrial myxoma
Procedia PDF Downloads 329
197 Courtyard Evolution in Contemporary Sustainable Living
Authors: Yiorgos Hadjichristou
Abstract:
The paper focuses on the strategic development deriving from the evolution of the traditional courtyard spatial organization towards a new, contemporary sustainable way of living. New sustainable approaches that embrace social issues, the notion of place, and the understanding of weather architecture, blended together with bioclimatic behaviour, will be seen through a series of experimental case studies on the island of Cyprus, inspired by and originating from its traditional wisdom, ranging from the small scale of living to urban interventions. Weather and nature will be seen as co-authors of the architecture alongside architects, as intelligently claimed by Jonathan Hill in his Weather Architecture discourse. Furthermore, following Pallasmaa’s understanding, the building will be seen not as an end in itself, and the elements of an architectural experience as having a verb form rather than being nouns. This will further enhance the notion of merging the subject-human and the object-building, as discussed by Julio Bermudez. This will eventually enable a discussion of the building as constructed according to the specifics of place and inhabitants, shaped by its physical and human topography, as Adam Sharr puts it in relation to Heidegger’s thinking. The specificities of the divided island and the engagement with sites in the vicinity of the dividing Green Line will further trigger explorations dealing with regeneration issues and social sustainability, offering unprecedented opportunities for innovative sustainable ways of living. The above premises lead us to develop innovative strategies for a profound, both technical and social, sustainability, which fruitfully yields innovative living built environments responding to ever-changing environmental and social needs.
As a starting point, a case study in Kaimakli, Nicosia, a refurbishment with an extension of a traditional house, already engulfs all the traditional, vernacular wisdom of bioclimatic architecture. It aims at capturing not only its direct and quite obvious bioclimatic features, but rather at evolving them by adjusting the whole house to a contemporary living environment. To achieve this, evolutions of traditional architectural elements and spatial conditions are integrated in a way that does not merely respond to certain weather conditions but integrates and blends the weather within the built environment. A series of innovations aiming at maximum flexibility is proposed. The house can be transformed into a winter enclosure, while for most of the year it turns into a ‘camping’ living environment. Parallel to experimental interventions in existing traditional units, we proceed to examine the implementation of the same methodology in designing new living units and complexes. Malleable courtyard organizations that attempt to blend traditional wisdom with contemporary needs for living, and the weather and nature with the built environment, are tested in both horizontal and vertical developments. A new social identity of people directly involved in and interacting with the weather and climatic conditions is seen as the result of balancing social with technological sustainability, and the immaterial with the material aspects of the built environment.
Keywords: building as a verb, contemporary living, traditional bioclimatic wisdom, weather architecture
Procedia PDF Downloads 415
196 Motivation and Multiglossia: Exploring the Diversity of Interests, Attitudes, and Engagement of Arabic Learners
Authors: Anna-Maria Ramezanzadeh
Abstract:
Demand for Arabic is growing worldwide, driven by increased interest in the multifarious purposes the language serves, both for heritage learners and for those studying Arabic as a foreign language. The diglossic, or indeed multiglossic, nature of the language as used in Arabic-speaking communities, however, is seldom represented in the content of classroom courses. This disjoint between the nature of provision and students’ expectations can severely impact their engagement with course material and their motivation to either commence or continue learning the language. The relationship between motivation and multiglossia is sparsely explored in the current literature on Arabic. The theoretical framework proposed here aims to address this gap by presenting a model and instruments for measuring Arabic learners’ motivation in relation to the multiple strands of the language. It adopts and develops the Second Language Motivational Self System (L2MSS), originally proposed by Zoltán Dörnyei, which measures motivation as the desire to reduce the discrepancy between learners’ current and future self-concepts in terms of the second language (L2). The tripartite structure incorporates measures of the Current L2 Self, the Future L2 Self (consisting of an Ideal L2 Self and an Ought-To L2 Self), and the L2 Learning Experience. The strength of the self-concepts is measured across three different domains of Arabic: Classical, Modern Standard, and Colloquial. The focus on learners’ self-concepts allows for an exploration of the effect of multiple factors on motivation towards Arabic, including religion. The relationship between Islam and Arabic is often given as a prominent reason behind some students’ desire to learn the language, yet exactly how and why this factor features in learners’ L2 self-concepts has not been explored. Specifically designed surveys and interview protocols are proposed to facilitate the exploration of these constructs.
The L2 Learning Experience component of the model is operationalized as learners’ task-based engagement. Engagement is conceptualised as multi-dimensional and malleable. In this model, situation-specific measures of the cognitive, behavioural, and affective components of engagement are collected via specially designed repeated post-task self-report surveys on personal digital assistants over multiple Arabic lessons. Tasks are categorised according to language learning skill. Given the domain-specific uses of the different varieties of Arabic, the relationship between learners’ engagement with different types of tasks and their overall motivational profiles will be examined to determine the extent of the interaction between the two constructs. A framework for this data analysis is proposed and hypotheses are discussed. The unique combination of situation-specific measures of engagement and a person-oriented approach to measuring motivation allows for a macro- and micro-analysis of the interaction between learners and the Arabic learning process. By combining cross-sectional and longitudinal elements within a mixed-methods design, the proposed model offers the potential for capturing a comprehensive and detailed picture of the motivation and engagement of Arabic learners. The application of this framework offers a number of potential pedagogical and research implications, which will also be discussed.
Keywords: Arabic, diglossia, engagement, motivation, multiglossia, sociolinguistics
Procedia PDF Downloads 165
195 Effects of Oxidized LDL in M2 Macrophages: Implications in Atherosclerosis
Authors: Fernanda Gonçalves, Karla Alcântara, Vanessa Moura, Patrícia Nolasco, Jorge Kalil, Maristela Hernandez
Abstract:
Introduction: Atherosclerosis is a chronic disease in which two striking features are observed: retention of lipids and inflammation. Understanding the interaction between immune cells and the lipoproteins involved in atherogenesis is an urgent challenge, since cardiovascular diseases are the leading cause of death worldwide. Macrophages are critical to the development of atherosclerotic plaques and to the perpetuation of inflammation in these lesions. These cells are also directly involved in unstable plaque rupture. Recently, different populations of macrophages have been identified in atherosclerotic lesions. Although the presence of M2 macrophages (macrophages activated by the alternative pathway, e.g., by IL-4) has been identified, the function of these cells in atherosclerosis is not yet defined. M2 macrophages have a high endocytic capacity, promote tissue remodeling, and have anti-inflammatory activity. However, in atherosclerosis, and especially in unstable plaques, a severe inflammatory reaction, accumulation of cellular debris, and intense tissue degradation are observed. Thus, it is possible that M2 macrophages have an altered function (phenotype) in atherosclerosis. Objective: Our aim is to evaluate whether the presence of oxidized LDL alters the phenotype and function of M2 macrophages in vitro. Methods: We will evaluate whether the addition of the lipoprotein to M2 macrophages differentiated in vitro with IL-4 induces 1) a reduction in the secretion of anti-inflammatory cytokines (CBA and ELISA), 2) secretion of inflammatory cytokines (CBA and ELISA), 3) expression of cell activation markers (flow cytometry), 4) alteration in the gene expression of adhesion and extracellular matrix molecules (real-time PCR), and 5) matrix degradation (confocal microscopy). Results: In oxLDL-stimulated M2 macrophage cultures we did not find any differences in the expression of the cell surface markers tested, including HLA-DR, CD80, CD86, CD206, CD163, and CD36.
Also, cultures stimulated with oxLDL had a phagocytic capacity similar to that of unstimulated cells. However, in the supernatant of these cultures, an increase in the secretion of the pro-inflammatory cytokine IL-8 was detected. No significant changes were observed in IL-6, IL-10, IL-12, and IL-1b levels. The culture supernatant also induced massive degradation of extracellular matrix filaments (produced by mouse embryo fibroblasts). When evaluating the expression of 84 extracellular matrix and adhesion molecule genes, we observed that oxLDL stimulation of M2 macrophages decreased the expression of 47% of the genes and increased the expression of only 3%. In particular, we noted that oxLDL inhibited the expression of 60% of the genes for extracellular matrix constituents and collagens expressed by these cells, including fibronectin 1 and collagen VI. We also observed a decrease in the expression of matrix protease inhibitors, such as TIMP2. In contrast, the matricellular protein thrombospondin showed a 12-fold increase in gene expression. In the presence of native LDL, 90% of the genes showed no altered expression. Conclusion: M2 macrophages stimulated with oxLDL secrete the pro-inflammatory cytokine IL-8, show altered expression of extracellular matrix constituent genes, and promote the degradation of extracellular matrix. M2 macrophages may thus contribute to the perpetuation of inflammation in atherosclerosis and to plaque rupture.
Keywords: atherosclerosis, LDL, macrophages, M2
Procedia PDF Downloads 334
194 Measuring Firms’ Patent Management: Conceptualization, Validation, and Interpretation
Authors: Mehari Teshome, Lara Agostini, Anna Nosella
Abstract:
The current knowledge-based economy extends intellectual property rights (IPRs) legal research themes into more strategic and organizational perspectives. Of the diverse types of IPRs, patents are the strongest and best-known form of legal protection and influence commercial success and market value. Indeed, from our pilot survey, we understood that firms are less likely to manage their patents and actively use them as a tool for achieving competitive advantage; rather, they invest resources and effort in patent applications. In this regard, the literature also confirms that insights into how firms manage their patents from a holistic, strategic perspective, and into how the portfolio value of patents can be optimized, are scarce. Though patent management is an important business tool, and a few scales exist to measure some of its dimensions, to the best of our knowledge no systematic attempt has been made to develop a valid and comprehensive measure of it. Considering this theoretical and practical point of view, the aim of this article is twofold: to develop a framework for patent management encompassing all relevant dimensions with their respective constructs and measurement items, and to validate the measurement using survey data from practitioners. Methodology: We used a six-step methodological approach (i.e., specifying the domain of the construct, item generation, scale purification, internal consistency assessment, scale validation, and replication). Accordingly, we carried out a systematic review of 182 articles on patent management from the ISI Web of Science. For each article, we mapped the relevant constructs, their definitions, and associated features, as well as the items used to measure these constructs, when provided. This theoretical analysis was complemented by interviews with experts in patent management to obtain more practical feedback on how patent management is carried out in firms.
Afterwards, we carried out a questionnaire survey for scale purification and statistical validation. Findings: The analysis allowed us to design a framework for patent management, identifying its core dimensions (i.e., generation, portfolio management, exploitation and enforcement, intelligence) and support dimensions (i.e., strategy and organization). Moreover, we identified the relevant activities for each dimension, as well as the most suitable items to measure them. For example, the core dimension generation includes constructs such as state-of-the-art analysis, freedom-to-operate analysis, patent watching, securing freedom to operate, patent potential, and patent geographical scope. Originality and Study Contribution: This study represents a first step towards the development of sound scales to measure patent management with an overarching approach, thus laying the basis for a recognized landmark within the research area of patent management. Practical Implications: The new scale can be used to assess the level of sophistication of a company’s patent management and compare it with other firms in the industry to evaluate their ability to manage the different activities involved in patent management. In addition, the framework resulting from this analysis can be used as a guide to support managers in improving patent management in firms.
Keywords: patent, management, scale development, intellectual property rights (IPRs)
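To make the internal-consistency step of the six-step approach concrete, a standard statistic for this purpose is Cronbach's alpha. The sketch below computes it on synthetic Likert-style data (invented for illustration, not the survey's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# synthetic item battery: 200 respondents, 4 items driven by one latent
# construct (e.g. "patent watching") plus item-specific noise
rng = np.random.default_rng(0)
trait = rng.standard_normal((200, 1))
items = trait + 0.4 * rng.standard_normal((200, 4))
print(round(cronbach_alpha(items), 2))
```

In scale purification, items whose removal raises alpha are candidates for dropping; a commonly cited acceptability threshold is around 0.7.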
Procedia PDF Downloads 144
193 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC sources; according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its climatic significance, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlling the burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analyses and the difficulty of interpreting the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements made on filter-accumulated aerosol, or deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling.
In the former, the accuracy, and in the latter, the sensitivity limit the applicability of these approaches. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for the precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
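The AAE and SAE mentioned above follow from the assumed power-law dependence coef(λ) ∝ λ^(−AE), so with four wavelengths the exponent is simply the negative slope of a log-log linear fit. A minimal sketch on synthetic data (the wavelength set below is an assumption for illustration, not necessarily the instrument's):

```python
import numpy as np

def angstrom_exponent(wavelengths_nm, coefficients):
    """Angstrom exponent (AAE for absorption, SAE for scattering):
    the negative slope of ln(coefficient) versus ln(wavelength)."""
    slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(coefficients), 1)
    return -slope

# synthetic 4-wavelength absorption spectrum obeying coef ~ lambda**-1.2
wl = np.array([266.0, 355.0, 532.0, 1064.0])   # nm (assumed set)
absorption = 5.0 * (wl / 532.0) ** -1.2        # arbitrary units
print(angstrom_exponent(wl, absorption))       # recovers AAE of ~1.2
```

An AAE near 1 is characteristic of soot-like black carbon, while larger values indicate enhanced absorption at short wavelengths, e.g. from brown carbon components.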
Procedia PDF Downloads 358
192 ExactData Smart Tool For Marketing Analysis
Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak
Abstract:
Exact Data is a smart tool which helps with meaningful marketing content creation. It helps marketers achieve this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language that specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms.
By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation and text correction. To ensure that we can achieve the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of methods, including transformers and convolutional neural networks (CNNs), to determine which are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool that can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
Keywords: NLP, AI, IT, language, marketing, analysis
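Of the techniques the abstract compares, the simplest baseline against which transformer or CNN classifiers are usually measured is lexicon-based sentiment scoring. A minimal sketch of that baseline (the word lists below are invented for illustration and are not the tool's actual linguistic resources):

```python
# Toy sentiment lexicons -- hypothetical, for illustration only.
POSITIVE = {"great", "effective", "reliable", "valuable"}
NEGATIVE = {"poor", "broken", "useless", "misleading"}

def sentiment_score(text: str) -> float:
    """Score a text in [-1, 1]: +1 if all sentiment-bearing words
    are positive, -1 if all are negative, 0 if neutral or mixed evenly."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("A reliable and effective campaign"))  # 1.0
```

A learned classifier replaces the fixed word lists with weights estimated from a labeled corpus, which is precisely why the corpus-availability problem discussed above matters for Polish.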
Procedia PDF Downloads 84