Search results for: plane projection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 985

715 Mathematics Vision of the Companies' Growth with Educational Technologies

Authors: Valencia P. L. Rodrigo, Morita A. Adelina, Vargas V. Martin

Abstract:

This proposal analyzes the macro concepts involved in organizational growth driven by educational technologies, relating each concept mathematically to a vision of harmonic work: working collaboratively, competitively, and cooperatively so that growth is harmonious and homogeneous. We coin a new term, Harmonic Work, for this vision. Harmonic Work ensures that the organization grows in all business directions and allows managers to project growth much more accurately, making clear the contribution of each department. The result is an algorithm that analyzes both endogenous and exogenous variables and establishes different performance indicators for the growth process.

Keywords: business projection, collaboration, competitiveness, educational technology, harmonious growth

Procedia PDF Downloads 294
714 Hope in the Ruins of 'Ozymandias': Reimagining Temporal Horizons in Felicia Hemans' 'The Image in Lava'

Authors: Lauren Schuldt Wilson

Abstract:

Felicia Hemans’ memorializing of the unwritten lives of women and the consequent allowance for marginalized voices to remember and be remembered has been considered by many critics in terms of ekphrasis and elegy, terms which privilege the question of whether Hemans’ poeticizing can represent lost voices of history or only her poetic expression. Amy Gates, Brian Elliott, and others point out Hemans’ acknowledgement of the self-projection necessary for imaginatively filling the absences of unrecorded histories. Yet, few have examined the complex temporal positioning Hemans inscribes in these moments of self-projection and imaginative historicizing. In poems like ‘The Image in Lava,’ Hemans maps not only a lost past, but also a lost potential future onto the image of a dead infant in its mother’s arms, the discovery and consideration of which moves the imagined viewer to recover and incorporate the ‘hope’ encapsulated in the figure of the infant into a reevaluation of national time embodied by the ‘relics / Left by the pomps of old.’ By examining Hemans’ acknowledgement and response to Percy Bysshe Shelley’s ‘Ozymandias,’ this essay explores how Hemans’ depictions of imaginative historicizing open new horizons of possibility and reevaluate temporal value structures by imagining previously undiscovered or unexplored potentialities of the past. Where Shelley’s poem mocks the futility of national power and time, this essay outlines Hemans’ suggestion of alternative threads of identity and temporal meaning-making which, regardless of historical veracity, exist outside of and against the structures Shelley challenges. Counter to previous readings of Hemans’ poem as a celebration of either recovered or poetically constructed maternal love, this essay argues that Hemans offers a meditation on sites of reproduction—both of personal reproductive futurity and of national reproduction of power.
This meditation culminates in Hemans’ gesturing towards a method of historicism by which the imagined viewer reinvigorates the sterile, ‘shattered visage’ of national time by forming temporal identity through the imagining of trans-historical hope inscribed on the infant body of the universal, individual subject rather than the broken monument of the king.

Keywords: futurity, national temporalities, reproduction, revisionary histories

Procedia PDF Downloads 141
713 Influence of Random Fibre Packing on the Compressive Strength of Fibre Reinforced Plastic

Authors: Y. Wang, S. Zhang, X. Chen

Abstract:

The longitudinal compressive strength of fibre reinforced plastic (FRP) exhibits large stochastic variability, which limits the efficient application of composite structures. This study addresses how random fibre packing affects the uncertainty of FRP compressive strength. A novel approach is proposed to generate random fibre packing configurations by combining Latin hypercube sampling with random sequential expansion. A 3D nonlinear finite element model is built that incorporates both matrix plasticity and fibre geometrical instability. The matrix is modelled by isotropic ideal elasto-plastic solid elements, and the fibres are modelled by linear-elastic rebar elements. Composites with a series of nominal fibre volume fractions are studied. Premature fibre waviness of different magnitudes and directions is introduced into the finite element model. Compressive tests on uni-directional CFRP (carbon fibre reinforced plastic) are conducted following ASTM D6641. A comparison of the 3D FE models and the compressive tests clearly shows that the stochastic variation of compressive strength is partly caused by random fibre packing, and a normal or lognormal distribution tends to be a good fit for the probabilistic compressive strength. Furthermore, different random fibre packings can trigger two different fibre micro-buckling modes under longitudinal compression: out-of-plane buckling and twisted buckling. The out-of-plane buckling mode results in a much larger compressive strength, which is the main reason why random fibre packing produces large uncertainty in FRP compressive strength. This study contributes to new approaches to the quality control of FRP aimed at higher compressive strength or lower uncertainty.
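The random packing step can be illustrated with a minimal random sequential placement of non-overlapping fibre cross-sections. This is only a sketch of the idea (the paper combines Latin hypercube sampling with random sequential expansion); the cell size, fibre radius, and function name here are assumptions:

```python
import math
import random

def random_fibre_packing(vf_target, r=3.5e-3, cell=0.2, max_tries=100_000, seed=0):
    """Place non-overlapping circular fibre sections of radius r at random
    in a cell x cell square until the target fibre volume fraction is
    reached (random sequential placement, a simplified stand-in for the
    paper's random sequential expansion)."""
    rng = random.Random(seed)
    fibres, placed = [], 0.0
    area = cell * cell
    for _ in range(max_tries):
        if placed / area >= vf_target:
            break
        x = rng.uniform(r, cell - r)
        y = rng.uniform(r, cell - r)
        # Accept the candidate only if it overlaps no existing fibre.
        if all((x - u) ** 2 + (y - v) ** 2 >= (2 * r) ** 2 for u, v in fibres):
            fibres.append((x, y))
            placed += math.pi * r * r
    return fibres, placed / area
```

Each accepted configuration would then be meshed into a separate FE model, so that strength statistics can be gathered over many random packings.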

Keywords: compressive strength, FRP, micro-buckling, random fibre packing

Procedia PDF Downloads 249
712 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the Mars surface has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering them using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. The 3D-to-2D transformation models provided two to three meters accuracy, with the best results obtained using the DLT transformation model; here too, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards. However, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not need substantial computations.
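The least squares estimation of a 3D-to-2D model from GCPs can be sketched as follows. This shows only the simplest of the three models (the parallel projection / 3D affine form), with hypothetical array shapes and function names:

```python
import numpy as np

def fit_parallel_projection(xyz, uv):
    """Least-squares fit of the 3D affine (parallel projection) model
    u = a0 + a1*x + a2*y + a3*z,  v = b0 + b1*x + b2*y + b3*z,
    from GCPs: xyz is (n, 3) ground coordinates, uv is (n, 2) image coords."""
    A = np.hstack([np.ones((len(xyz), 1)), xyz])        # design matrix [1 x y z]
    coef_u, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)
    return coef_u, coef_v

def rmse(xyz, uv, coef_u, coef_v):
    """RMSE of the fitted model, evaluated at check points."""
    A = np.hstack([np.ones((len(xyz), 1)), xyz])
    res = np.stack([A @ coef_u - uv[:, 0], A @ coef_v - uv[:, 1]])
    return float(np.sqrt(np.mean(res ** 2)))
```

Fitting on the GCP subset and calling `rmse` on the held-out check points mirrors the GCP/ChkP evaluation described above; the DLT model would replace the linear design matrix with the usual rational form.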

Keywords: mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 43
711 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks

Authors: Alaa Allakany, Koji Okamura

Abstract:

Multicast is an efficient and scalable technology for data distribution that optimizes network resources. In IP networks, however, responsibility for the management of multicast groups is distributed among network routers, which causes limitations such as delays in processing group events, high bandwidth consumption, and redundant tree calculation. Software Defined Networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: in SDN, the control plane and data plane are separated by shifting control and management to a remote centralized controller, and the routers are used only as forwarders. In this paper, we propose a fast switching mechanism for handling link failure in a multicast tree, based on the Tabu search heuristic algorithm and on modified OpenFlow switch functions that switch quickly to a backup subtree rather than reporting to the controller. In this work, we implement a multicasting OpenFlow controller, the core part of our approach, which is responsible for (1) constructing the multicast tree, (2) handling multicast group events and maintaining multicast state, and (3) modifying OpenFlow switch functions for fast switching to backup paths. Forwarders forward multicast packets based on multicast routing entries generated by the centralized controller. Tabu search is used as the heuristic algorithm for constructing a near-optimum multicast tree and for keeping the tree near optimum when members join or leave the multicast group (group events).
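The local fast-switching idea, precomputing a backup port so a switch can fail over without a round trip to the controller, can be sketched as follows. The flow-table layout and all names are illustrative, not the OpenFlow wire format:

```python
# Hypothetical per-switch flow state: each (switch, group) entry carries a
# primary output port and a precomputed backup port toward the backup subtree.
flow_table = {
    ("s1", "group1"): {"primary": "port2", "backup": "port3"},
    ("s2", "group1"): {"primary": "port1", "backup": "port4"},
}
failed_links = set()  # (switch, port) pairs currently detected as down

def output_port(switch, group):
    """Choose the forwarding port for a multicast packet: switch locally to
    the backup subtree when the primary link is down, with no controller
    involvement on the fast path."""
    entry = flow_table[(switch, group)]
    if (switch, entry["primary"]) in failed_links:
        return entry["backup"]
    return entry["primary"]
```

The controller's job is then limited to computing the (near-optimum) primary tree and backup subtrees offline with Tabu search, and installing both into the flow tables ahead of time.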

Keywords: multicast tree, software define networks, tabu search, OpenFlow

Procedia PDF Downloads 232
710 Status and Results from EXO-200

Authors: Ryan Maclellan

Abstract:

EXO-200 has provided one of the most sensitive searches for neutrinoless double-beta decay, utilizing 175 kg of enriched liquid xenon in an ultra-low background time projection chamber. This detector has demonstrated excellent energy resolution and background rejection capabilities. Using the first two years of data, EXO-200 has set a limit of 1.1 × 10²⁵ years at 90% C.L. on the neutrinoless double-beta decay half-life of Xe-136. The experiment has experienced a brief hiatus in data taking during a temporary shutdown of its host facility, the Waste Isolation Pilot Plant. EXO-200 expects to resume data taking in earnest this fall with upgraded detector electronics. Results from the analysis of EXO-200 data and an update on the current status of EXO-200 will be presented.

Keywords: double-beta, Majorana, neutrino, neutrinoless

Procedia PDF Downloads 388
709 Cognitive Translation and Conceptual Wine Tasting Metaphors: A Corpus-Based Research

Authors: Christine Demaecker

Abstract:

Many researchers have underlined the importance of metaphors in specialised language. Their use in specific domains helps us understand the conceptualisations used to communicate new ideas or difficult topics. Within the wide area of specialised discourse, wine tasting is a very specific example because its language is almost exclusively metaphoric. Wine tasting metaphors express various conceptualisations. They are not linguistic but rather conceptual, as defined by Lakoff & Johnson: they correspond to the linguistic expression of a mental projection from a well-known or more concrete source domain onto the target domain, which is the taste of wine. But unlike most specialised terminologies, the vocabulary is never clearly defined. When metaphorical terms are listed in dictionaries, their definitions remain vague, unclear, and circular, and they cannot be replaced by literal linguistic expressions. This makes it impossible to transfer them into another language with traditional linguistic translation methods. This qualitative research investigates whether wine tasting metaphors could instead be translated with the cognitive translation process described by Nili Mandelblit (1995). The research is based on a corpus compiled from two high-profile wine guides: Parker's Wine Buyer's Guide and its translation into French, and the Guide Hachette des Vins and its translation into English. In this small corpus, with a total of 68,826 words, 170 metaphoric expressions have been identified in the original English text and 180 in the original French text. They have been selected with the MIPVU metaphor identification procedure developed at the Vrije Universiteit Amsterdam. The selection demonstrates that both languages use the same set of conceptualisations, which are often combined in wine tasting notes, creating conceptual integrations or blends. The comparison of expressions in the source and target texts also demonstrates the use of the cognitive translation approach.
In accordance with the principle of relevance, the translation always uses target language conceptualisations, but compared to the original, the highlighting of the projection is often different. Also, when original metaphors are complex with a combination of conceptualisations, at least one element of the original metaphor underlies the target expression. This approach perfectly integrates into Lederer’s interpretative model of translation (2006). In this triangular model, the transfer of conceptualisation could be included at the level of ‘deverbalisation/reverbalisation’, the crucial stage of the model, where the extraction of meaning combines with the encyclopedic background to generate the target text.

Keywords: cognitive translation, conceptual integration, conceptual metaphor, interpretative model of translation, wine tasting metaphor

Procedia PDF Downloads 105
708 Percentage Contribution of Lower Limb Moments to Vertical Ground Reaction Force in Normal Walking

Authors: Salam M. Elhafez, Ahmed A. Ashour, Naglaa M. Elhafez, Ghada M. Elhafez, Azza M. Abdelmohsen

Abstract:

Patients suffering from gait disturbances are often referred with muscle group dysfunctions. There is a need for more studies investigating the contribution of lower limb muscle moments to the vertical ground reaction force using a 3D gait analysis system. The purpose of this study was to investigate how the hip, knee, and ankle moments in the sagittal plane contribute to the vertical ground reaction force in healthy subjects during walking at normal speed. Forty healthy male individuals volunteered to participate in this study. They were filmed using six high-speed (120 Hz) Pro-Reflex infrared cameras (Qualisys) while walking on an AMTI force platform. The data collected were the percentage contributions of the hip, knee, and ankle joint moments in the sagittal plane at the instants of the first peak, the trough, and the second peak of the vertical ground reaction force. The results revealed that at the first peak of the ground reaction force (loading response), the highest contribution was generated by the knee extension moment, followed by the hip extension moment. The knee flexion and ankle plantar flexion moments produced high contributions to the trough of the ground reaction force (midstance), with approximately equal values. The second peak of the ground reaction force was mainly produced by the ankle plantar flexion moment. Conclusion: the hip and knee flexion and extension moments and the ankle plantar flexion moment play important roles in the support phase of normal walking.

Keywords: gait analysis, ground reaction force, moment contribution, normal walking

Procedia PDF Downloads 350
707 Basins of Attraction for Quartic-Order Methods

Authors: Young Hee Geum

Abstract:

We compare optimal quartic-order methods for the multiple zeros of nonlinear equations by illustrating their basins of attraction. To construct the basins of attraction effectively, we take a 600×600 uniform grid centered at the origin of the complex plane and paint each initial value with a different color according to the iteration number required for convergence.
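The grid-painting procedure can be sketched as follows. Plain Newton iteration stands in here for the optimal quartic-order method, and the grid extent and iteration cap are assumptions:

```python
import numpy as np

def basin_iterations(f, df, roots, extent=2.0, n=600, tol=1e-6, max_iter=40):
    """Iterate a root-finder from every point of an n x n grid centered at
    the origin and record the iteration count at first convergence; the
    counts index the colors used to paint the basins of attraction."""
    xs = np.linspace(-extent, extent, n)
    z = xs[None, :] + 1j * xs[:, None]              # complex starting grid
    counts = np.full(z.shape, max_iter)
    with np.errstate(divide="ignore", invalid="ignore"):
        for k in range(max_iter):
            z = z - f(z) / df(z)                    # Newton step (stand-in)
            dist = np.min(np.abs(z[..., None] - np.asarray(roots)), axis=-1)
            newly = (dist < tol) & (counts == max_iter)
            counts = np.where(newly, k + 1, counts)
    return counts
```

A comparison of methods then amounts to generating one such count array per iteration scheme and inspecting the basin boundaries and average iteration counts.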

Keywords: basins of attraction, convergence, multiple-root, nonlinear equation

Procedia PDF Downloads 234
706 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighbourhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models remain aimed at specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, i.e., the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and [-t, t] exhibits various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which establish the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with the canonical distributions and address the impacts on existing applications.

Keywords: blaschke, lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 279
705 Beyond Baudrillard: A Critical Intersection between Semiotics and Materialism

Authors: Francesco Piluso

Abstract:

Nowadays, restoring the deconstructive power of semiotics implies a critical analysis of neoliberal ideology and, even more critically, a confrontation with the materialist perspective. The theoretical path of Jean Baudrillard is crucial to understanding the ambivalence of this intersection. A semiotic critique of Baudrillard’s work, through tools of both structuralism and interpretative semiotics, aims to give materialism a new, consistent semiotic approach and vice versa. According to Baudrillard, the commodity form is characterized by the same abstract and systemic logic as the sign-form, in which the production of the signified (use-value) is a mere ideological means for the reproduction of the signifier-chain (exchange-value). Nevertheless, this parallelism is broken by the author himself: if the use-value is deconstructed in its relative logic, the signified and the referent, both as discrete and positive elements, are collapsed onto the same plane at the shadows of the signified forms. These divergent considerations lead Baudrillard to the same crucial point: the dismissal of the material world, replaced by hyperreality as the reproduction of a semiotic (genetic) Code. The stress on the concept of form, as an epistemological and semiotic tool for analysing the construction of values in the consumer society, has led to the Code as its ontological drift. In other words, Baudrillard seems to enclose consumer society (and reality) in this immanent and self-fetishized world of signs, an ideological perspective that mystifies the gravity of the material relationships between the Northern-Western World and the Third World. The notion of Encyclopaedia by Umberto Eco is the key to overturning the relationship of immanence/transcendence between the Code and the political economy of the sign, by understanding the former as an ideological plane within the encyclopaedia itself.
Therefore, rather than building semiotic (hyper)realities, semiotics has to deal with materialism in terms of material relationships of power which are mystified and reproduced through such ideological ontologies of signs.

Keywords: Baudrillard, Code, Eco, Encyclopaedia, epistemology vs. ontology, semiotics vs. materialism

Procedia PDF Downloads 135
704 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economic, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs) and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage changes a structure's stiffness, so the method determines damage from changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structure. Several damage detection scenarios are defined (single and multiple damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification possible; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of the method to detect damage in the given structures, and the figures show that damage detection in multiple damage scenarios yields efficient answers. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
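The GA formulation, minimizing the mismatch between measured and model static displacements over candidate damage factors, can be sketched on a toy two-spring system. The model, GA operators, and parameter values below are illustrative assumptions, not the paper's truss model:

```python
import random

# Toy stand-in for the truss: two springs in series with stiffness
# k_i = K0 * (1 - d_i), where d_i in [0, 0.9] are damage factors.
K0, P = 1000.0, 10.0

def displacements(d):
    u1 = P / (K0 * (1 - d[0]))                   # node 1: stretch of spring 1
    return [u1, u1 + P / (K0 * (1 - d[1]))]      # node 2 adds spring 2's stretch

TRUE_D = [0.3, 0.0]                              # assumed "measured" damage state
MEASURED = displacements(TRUE_D)

def fitness(d):
    # Objective: squared mismatch between model and measured displacements.
    return sum((a - b) ** 2 for a, b in zip(displacements(d), MEASURED))

def ga(pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 0.9), rng.uniform(0, 0.9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # average crossover
            i = rng.randrange(2)                          # mutate one gene
            child[i] = min(0.9, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

In the actual method, `displacements` would be replaced by the finite element solution of the truss, and the chromosome would carry one damage factor per member.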

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 203
703 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus estimates susceptibility by checking various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high resolution data allows identification of local sites that may previously have been classified as low susceptibility; this in turn provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which is dangerous because Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, the slope differences are significant:
1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis, which has not been possible before: with the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous, whereas high-resolution regional analysis allows more precise determination of hazard sites.
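The resolution effect on computed slope can be illustrated with a minimal DEM slope calculation; the synthetic terrain used below is hypothetical, not the St. Jude site data:

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope angle in degrees per DEM cell, from central-difference
    gradients of elevation (dem in metres, cell_size in metres)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

Block-averaging a high-resolution DEM to a coarser cell size and recomputing `slope_degrees` reproduces the effect described above: a short, steep bank that shows up clearly at 1 m resolution is smeared into a gentle gradient at coarse resolution.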

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 49
702 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead

Authors: Behrooz Daneshmand

Abstract:

Software Defined Networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that separates the control plane from the data plane, which are usually coupled together. Network functions traditionally performed on specific hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach, based on a central controller, facilitates the development of modern applications and services. These design principles pave the way for a more adaptable, faster, and more dynamic network under software control compared with a conventional network. We believe SDN opens new research opportunities in security and can significantly affect network security research in many different ways. The SDN architecture enables networks to effectively monitor traffic and analyze threats, facilitating security policy modification and security service insertion. The separation of the data and control planes, however, opens security challenges, such as man-in-the-middle (MITM) attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have several vulnerabilities, which may be exploited by attackers to perform malicious activities and thus affect the network and its services. Software-defined network attacks are unfortunately a reality these days. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, leading to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies.

Keywords: software-defined networking, security, SDN, 5G/IMT-2020

Procedia PDF Downloads 66
701 The Effect of Aging of ZnO, AZO, and GZO films on the Microstructure and Photoelectric Property

Authors: Zue-Chin Chang

Abstract:

RF magnetron sputtering is used on ceramic targets of zinc oxide (ZnO), aluminum-doped zinc oxide (AZO), and gallium-doped zinc oxide (GZO). XRD analysis showed a preferred orientation along the (002) plane for the ZnO, AZO, and GZO films. The AZO film had the best electrical properties compared to the ZnO and GZO films: the lowest resistivity of 6.6 × 10⁻⁴ Ω·cm, the best sheet resistance of 2.2 × 10⁻¹ Ω/square, and the highest carrier concentration of 4.3 × 10²⁰ cm⁻³.
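As an illustrative cross-check (not a result from the paper), the reported AZO resistivity and carrier concentration imply a carrier mobility via σ = neμ:

```python
# Implied mobility from the reported AZO resistivity and carrier
# concentration, via sigma = n * e * mu (illustrative check only).
e = 1.602e-19            # elementary charge, C
rho = 6.6e-4             # resistivity, ohm*cm (reported)
n = 4.3e20               # carrier concentration, cm^-3 (reported)

sigma = 1.0 / rho        # conductivity, S/cm
mu = sigma / (n * e)     # mobility, cm^2/(V*s)
```

The result, on the order of a few tens of cm²/(V·s), is in the range typically quoted for degenerately doped ZnO films, which supports the internal consistency of the reported values.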

Keywords: aging, films, microstructure, photoelectric property

Procedia PDF Downloads 441
700 Mapping of Electrical Energy Consumption Yogyakarta Province in 2014-2025

Authors: Alfi Al Fahreizy

Abstract:

Yogyakarta is one of the provinces in Indonesia that often experiences power outages because of high electrical load. The authors mapped electrical energy consumption [GWh] for the province of Yogyakarta in 2014-2025 using the LEAP (Long-range Energy Alternatives Planning system) software. This paper uses the BAU (Business As Usual) scenario, in which the projection is based on the assumption that growth in electricity consumption will continue as before. The goal is to see the electrical energy consumption in the household, industry, business, social, government office building, and street lighting sectors. The input data are projected population statistics and electricity consumption data [GWh] for 2010, 2011, and 2012 in Yogyakarta province.
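A BAU projection of this kind reduces to compound growth at the historical average rate; the sketch below uses hypothetical base-year figures, not the study's LEAP data:

```python
def bau_projection(base_year, base_gwh, growth_rate, horizon):
    """Business-as-usual projection: consumption grows at the historical
    average annual rate for `horizon` years after the base year.
    Returns {year: consumption_GWh}. Values are illustrative only."""
    return {base_year + k: base_gwh * (1 + growth_rate) ** k
            for k in range(horizon + 1)}
```

In LEAP, one such series would be produced per demand sector (household, industry, business, social, government, street lighting) and summed, with the growth rate per sector derived from the 2010-2012 historical data.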

Keywords: LEAP, energy consumption, Yogyakarta, BAU

Procedia PDF Downloads 568
699 Simulation Modelling of the Transmission of Concentrated Solar Radiation through Optical Fibres to Thermal Application

Authors: M. Rahou, A. J. Andrews, G. Rosengarten

Abstract:

One of the main challenges in high-temperature solar thermal applications is to transfer concentrated solar radiation to the load with minimum energy loss and maximum overall efficiency. The use of a solar concentrator in conjunction with bundled optical fibres has potential advantages in terms of transmission energy efficiency, technical feasibility, and cost-effectiveness compared to a conventional heat transfer system employing heat exchangers and a heat transfer fluid. In this paper, a theoretical and computer simulation method is described to estimate the net solar radiation transmitted from a solar concentrator into and through optical fibres to a thermal application at the end of the fibres, over distances of up to 100 m. A key input to the simulation is the angular distribution of radiation intensity at each point across the aperture plane of the optical fibre. This distribution depends on the optical properties of the solar concentrator, in this case a parabolic mirror with a small secondary mirror at a common focal point and a point-focus Fresnel lens that gives a collimated beam passing into the optical fibre bundle. Since solar radiation comprises a broad band of wavelengths with very limited spatial coherence over the full spectrum, only ray tracing is employed, modelling absorption within the fibre and reflections at the interface between core and cladding, and assuming no interference between rays. The intensity of the radiation across the exit plane of the fibre is found by integrating over all directions and wavelengths. Results of applying the simulation model to a parabolic concentrator and point-focus Fresnel lens with a typical optical fibre bundle will be reported, to show how the energy transmission varies with the length of the fibre.
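The attenuation part of such a ray-tracing model can be sketched for a single meridional ray, whose path length grows as 1/cos θ relative to the fibre axis; the attenuation figure used below is a hypothetical value, not one from the paper:

```python
import math

def fibre_transmission(length_m, alpha_db_per_km, theta_deg):
    """Fraction of a meridional ray's power surviving attenuation over a
    fibre of the given axial length; the ray's geometric path is lengthened
    by 1/cos(theta) relative to the fibre axis. Illustrative values only."""
    alpha = alpha_db_per_km / 1000.0              # dB per metre
    path = length_m / math.cos(math.radians(theta_deg))
    return 10 ** (-alpha * path / 10.0)
```

Integrating this factor over the angular intensity distribution at the fibre aperture (and over wavelength, since alpha is wavelength-dependent) gives the net transmission that the full simulation reports as a function of fibre length.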

Keywords: concentrated radiation, fibre bundle, parabolic dish, Fresnel lens, transmission

Procedia PDF Downloads 540
698 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and a faster adaptive response has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created.
Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
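The iterative projection-based direction estimation is specific to the proposal, but the underlying GSC structure (quiescent weights, blocking matrix, adaptive weights) can be sketched as follows; the uniform linear array geometry, LMS adaptation, and all parameter values here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                        # number of array sensors

def steer(theta):
    """Steering vector of a uniform linear array, half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a = steer(0.0)                               # presumed look direction (broadside)
w_q = a / M                                  # quiescent weight vector (w_q^H a = 1)
# blocking matrix: orthonormal basis of the null space of a^H
B = np.linalg.svd(a.conj()[None, :])[2][1:].conj().T

# simulated snapshots: desired signal at broadside + interferer + sensor noise
s = rng.standard_normal(500)
i = rng.standard_normal(500)
X = np.outer(a, s) + np.outer(steer(0.5), i) + 0.1 * rng.standard_normal((M, 500))

w_a = np.zeros(M - 1, dtype=complex)
mu = 1e-4                                    # LMS step size
for x in X.T:
    x_b = B.conj().T @ x                     # blocked data (desired signal removed)
    y = w_q.conj() @ x - w_a.conj() @ x_b    # beamformer output
    w_a += mu * x_b * np.conj(y)             # LMS update of the adaptive weights

w = w_q - B @ w_a                            # effective weight vector
print(round(abs(w.conj() @ a), 6))           # distortionless response: 1.0
```

Because the blocking matrix removes the look-direction component, the constraint w^H a = 1 holds regardless of the adaptive weights; a steering mismatch breaks exactly this property, which motivates the robust estimation the paper proposes.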

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 84
697 Correlation between Cephalometric Measurements and Visual Perception of Facial Profile in Skeletal Type II Patients

Authors: Choki, Supatchai Boonpratham, Suwannee Luppanapornlarp

Abstract:

The objective of this study was to find a correlation between cephalometric measurements and visual perception of the facial profile in skeletal type II patients. In this study, 250 lateral cephalograms of female patients aged 20 to 22 years were analyzed. The profile outlines of all the samples were hand traced and transformed into silhouettes by the principal investigator. Profile ratings were done by 9 orthodontists on a visual analogue scale from score one to ten (increasing level of convexity). 37 hard tissue and soft tissue cephalometric measurements were analyzed by the principal investigator. All the measurements were repeated after a 2-week interval for error assessment. Finally, the rankings of visual perception were correlated with the cephalometric measurements using the Spearman correlation coefficient (P < 0.05). The results show that an increase in facial convexity was correlated with higher values of ANB (A point, nasion and B point), AF-BF (distance from A point to B point in mm), L1-NB (distance from lower incisor to NB line in mm), anterior maxillary alveolar height, posterior maxillary alveolar height, overjet, hard tissue H angle, soft tissue H angle and lower lip to E plane (absolute correlation values from 0.277 to 0.711). In contrast, an increase in facial convexity was correlated with lower values of Pg to N perpendicular and Pg to NB in mm (absolute correlation values -0.302 and -0.294, respectively). Among the soft tissue measurements, the H angles had a higher correlation with visual perception than the facial contour angle, nasolabial angle, and lower lip to E plane. In conclusion, the findings of this study indicate that the correlation of cephalometric measurements with visual perception was lower than expected. Only 29% of cephalometric measurements had a significant correlation with visual perception. Therefore, a diagnosis based solely on cephalometric analysis can fail to meet the patient's esthetic expectations.
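The Spearman coefficient used for the ratings is the Pearson correlation of rank-transformed data; a minimal sketch, with hypothetical measurement values (not the study's data):

```python
def ranks(values):
    """Average ranks, 1-based; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical data: ANB angle (degrees) vs. VAS convexity rating
anb = [2.0, 4.5, 6.1, 3.2, 7.8]
vas = [3.0, 5.0, 8.0, 4.0, 9.0]
print(round(spearman(anb, vas), 3))        # perfectly monotone pair -> 1.0
```

Rank-based correlation is appropriate here because the VAS ratings are ordinal rather than interval-scaled.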

Keywords: cephalometric measurements, facial profile, skeletal type II, visual perception

Procedia PDF Downloads 116
696 Comparing Community Detection Algorithms in Bipartite Networks

Authors: Ehsan Khademi, Mahdi Jalili

Abstract:

Despite their special features, bipartite networks are common in many systems. Real-world bipartite networks may show community structure, similar to what one can find in one-mode networks. However, the interpretation of community structure in bipartite networks differs from that in one-mode networks. In this manuscript, we compare a number of available methods that are frequently used to discover the community structure of bipartite networks. These methods fall into two broad classes. One class comprises methods that first transform the network into a one-mode network and then apply standard community detection algorithms. The other class comprises algorithms that have been developed specifically for bipartite networks. The algorithms are applied to a model network with prescribed community structure.
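The first class of methods relies on one-mode projection. A minimal sketch of a weighted projection by co-occurrence counting; the toy paper-author data and the helper function are illustrative, not from the manuscript:

```python
from itertools import combinations

# toy bipartite network: papers (top nodes) linked to their authors (bottom nodes)
edges = [("p1", "alice"), ("p1", "bob"),
         ("p2", "bob"), ("p2", "carol"),
         ("p3", "alice"), ("p3", "bob")]

def project(edges, onto="bottom"):
    """Weighted one-mode projection: two nodes of the chosen side are linked
    once per node of the other side they share (co-occurrence = edge weight)."""
    groups = {}
    for top, bottom in edges:
        key, member = (top, bottom) if onto == "bottom" else (bottom, top)
        groups.setdefault(key, set()).add(member)
    weights = {}
    for members in groups.values():
        for u, v in combinations(sorted(members), 2):
            weights[(u, v)] = weights.get((u, v), 0) + 1
    return weights

w = project(edges)
print(w)   # alice and bob co-author two papers, bob and carol one
```

Any one-mode community detection algorithm (e.g. modularity maximization) can then be run on the weighted projection, at the cost of losing some bipartite structure.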

Keywords: community detection, bipartite networks, co-clustering, modularity, network projection, complex networks

Procedia PDF Downloads 589
695 Implementation of a Low-Cost Driver Drowsiness Evaluation System Using a Thermal Camera

Authors: Isa Moazen, Ali Nahvi

Abstract:

Driver drowsiness is a major cause of vehicle accidents, and facial images are highly valuable for detecting drowsiness. In this paper, we perform our research via a thermal camera that records drivers' facial images on a driving simulator. A robust real-time algorithm extracts the features using horizontal and vertical integral projection, contours, contour orientations, and cropping tools. The extracted features comprise four target areas on the cheeks and forehead. Qt and OpenCV are used with two cameras of different resolutions. A high-resolution thermal camera is used for fifteen subjects, and a low-resolution one for a single subject. The results are presented in four temperature plots and evaluated against observer ratings of drowsiness.
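Integral projection reduces an image to row and column intensity sums, from which a region of interest can be located by thresholding. A minimal sketch on a toy frame (the data and threshold are illustrative, not the authors' pipeline):

```python
import numpy as np

# toy 8x8 "thermal frame": a warm 2x3 region on a cool background
frame = np.zeros((8, 8))
frame[2:4, 3:6] = 1.0

# integral projections: sum pixel intensities along each axis
vertical = frame.sum(axis=0)      # one value per column
horizontal = frame.sum(axis=1)    # one value per row

# locate the warm region: rows/columns whose projection exceeds a threshold
rows = np.flatnonzero(horizontal > 0.5)
cols = np.flatnonzero(vertical > 0.5)
print(rows.min(), rows.max(), cols.min(), cols.max())   # bounding box: 2 3 3 5
```

In a real frame the same idea yields candidate bounding boxes for the cheeks and forehead, which the contour and cropping steps then refine.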

Keywords: advanced driver assistance systems, thermal imaging, driver drowsiness detection, feature extraction

Procedia PDF Downloads 104
694 Enthalpies of Formation of Equiatomic Binary Hafnium Transition Metal Compounds HfM (M=Co, Ir, Os, Pt, Rh, Ru)

Authors: Hadda Krarcha, S. Messaasdi

Abstract:

In order to investigate the phase diagrams of hafnium transition metal alloys HfM (M = Co, Ir, Os, Pt, Rh, Ru) in the region of the 50/50 % atomic ratio, we performed ab initio Full-Potential Linearized Augmented Plane Wave (FP-LAPW) calculations of the enthalpies of formation of HfM compounds in the B2 (CsCl) structure type. The obtained enthalpies of formation are discussed and compared to some of the existing models and available experimental data.
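For an equiatomic compound, the enthalpy of formation is conventionally obtained from ab initio total energies per atom as (standard definition; the abstract does not state the authors' exact reference states, which are usually the elemental ground-state structures):

```latex
\Delta H_f(\mathrm{HfM}) \;=\; E_{\mathrm{tot}}(\mathrm{HfM})
  \;-\; \tfrac{1}{2}\,E_{\mathrm{tot}}(\mathrm{Hf})
  \;-\; \tfrac{1}{2}\,E_{\mathrm{tot}}(\mathrm{M}),
\qquad \mathrm{M} \in \{\mathrm{Co,\ Ir,\ Os,\ Pt,\ Rh,\ Ru}\}
```

where all energies are per atom; a negative value indicates that the B2 compound is stable against decomposition into its elemental constituents.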

Keywords: enthalpy of formation, transition metal, binary compounds, hafnium

Procedia PDF Downloads 454
693 Finite Element Modelling for the Development of a Planar Ultrasonic Dental Scaler for Prophylactic and Periodontal Care

Authors: Martin Hofmann, Diego Stutzer, Thomas Niederhauser, Juergen Burger

Abstract:

Dental biofilm is the main etiologic factor for caries and for periodontal and peri-implant infections. In addition to the risk of tooth loss, periodontitis is also associated with an increased risk of systemic diseases such as atherosclerotic cardiovascular disease and diabetes. For this reason, dental hygienists use ultrasonic scalers for prophylactic and periodontal care of the teeth. However, the current instruments are limited in their dimensions and operating frequencies. The innovative design of a planar ultrasonic transducer introduces a new type of dental scaler. The flat titanium-based design allows the mass to be significantly reduced compared to a conventional screw-mounted Langevin transducer, resulting in a more efficient and controllable scaler. For the development of the novel device, multi-physics finite element analysis was used to simulate and optimise various design concepts. This process was supported by prototyping and electromechanical characterisation. The feasibility and potential of a planar ultrasonic transducer have already been confirmed by our current prototypes, which achieve higher performance than commercial devices. Operating at the desired resonance frequency of 28 kHz with a driving voltage of 40 Vrms results in an in-plane tip oscillation with a displacement amplitude of up to 75 μm, with less than 8 % out-of-plane movement and an energy transformation factor of 1.07 μm/mA. In a further step, we will adapt the design to two additional resonance frequencies (20 and 40 kHz) to determine the most suitable mode of operation. In addition to the already integrated characterisation methods, we will evaluate the clinical efficiency of the different devices in an in vitro setup with an artificial biofilm pocket model.

Keywords: ultrasonic instrumentation, ultrasonic scaling, piezoelectric transducer, finite element simulation, dental biofilm, dental calculus

Procedia PDF Downloads 86
692 A Novel Algorithm for Production Scheduling

Authors: Ali Mohammadi Bolban Abad, Fariborz Ahmadi

Abstract:

Optimization in manufacturing is a method of using limited resources to obtain the best performance and reduce waste. In this paper, a new algorithm based on the eurygaster life cycle is introduced to obtain a plan in which the task order and the completion times of resources are defined. Evaluation results show that our approach achieves a shorter makespan than the genetic algorithm when the resources are allocated across several products.

Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, NP-Hard problems, production scheduling

Procedia PDF Downloads 353
691 The Next Generation Neutrinoless Double-Beta Decay Experiment nEXO

Authors: Ryan Maclellan

Abstract:

The nEXO Collaboration is designing a very large detector for neutrinoless double beta decay of Xe-136. The nEXO detector is rooted in the current EXO-200 program, which has reached a sensitivity for the half-life of the decay of 1.9x10^25 years with an exposure of 99.8 kg-y. The baseline nEXO design assumes 5 tonnes of liquid xenon, enriched in the mass 136 isotope, within a time projection chamber. The detector is being designed to reach a half-life sensitivity of > 5x10^27 years covering the inverted neutrino mass hierarchy, with 5 years of data. We present the nEXO detector design, the current status of R&D efforts, and the physics case for the experiment.

Keywords: double-beta, Majorana, neutrino, neutrinoless

Procedia PDF Downloads 397
690 Solar Energy for Decontamination of Ricinus communis

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

Solar energy was used as a heat source for Ricinus communis press cake, with the objective of eliminating or minimizing the percentage of toxin in it so that it can be used as animal feed. A cylindrical solar concentrator and a flat-plate collector were used as the heating systems. In the focal area of the solar concentrator, a trough-shaped support with a greenhouse (stove) effect was placed. Parameters that denote the efficiency of the systems for the proposed objective were analyzed.

Keywords: solar energy, concentrate, Ricinus communis, temperature

Procedia PDF Downloads 401
689 A New Spell-Out Mechanism

Authors: Yusra Yahya

Abstract:

In this paper, a new spell-out mechanism is developed and defended. This mechanism builds on the role of phase heads as both the loci of spell-out features and the triggers of transfer, via Phase Impenetrability Condition 1 (PIC1) and/or Phase Impenetrability Condition 2 (PIC2). The assumption here is that phase heads, mainly v*, can regulate the spell-out process by deciding both the type of spell-out that applies and the timing of the relevant spell-out. This paper also proposes a new form of the constraint Wrap, call it Wrap-XP', which is assumed to apply to IP as a functional maximal projection. This extension is shown to follow as a natural result once we assume the new theory of phases and multiple spell-out. Moreover, it is proposed in this work that some forms of XP movement are not motivated by an EPP feature of a strong phase head, mainly v*, but are rather motivated by a last-resort strategy to accomplish the spell-out instruction of this phase head.

Keywords: linguistics, syntax, phonology, phase theory, optimality theory

Procedia PDF Downloads 488
688 Thermomechanical Deformation Response in Cold Sprayed SiCp/Al Composites: Strengthening, Microstructure Characterization, and Thermomechanical Properties

Authors: L. Gyansah, Yanfang Shen, Jiqiang Wang, Tianying Xiong

Abstract:

SiCₚ/pure Al composites with different SiC fractions (20 wt %, 30 wt %, and 40 wt %) were cold sprayed, followed by hot axial-compression tests at deformation temperatures of 473 K to 673 K, leading to failure of the specimens through crack propagation in their multiphase structure. The plastic deformation behaviour with respect to SiCₚ content and deformation temperature was studied at a strain rate of 1 s⁻¹. As-sprayed and post-failure specimens were analyzed by X-ray computed tomography (XCT), transmission electron microscopy (TEM), and scanning electron microscopy (SEM). Quasi-static thermomechanical testing revealed that the compressive strength (228 MPa at 30.4 % strain) was highest in the composites compressed at 473 K, whereas the as-sprayed deposits exhibited a compressive strength of 182.8 MPa that increased with SiC fraction. Strength-plasticity synergy was promoted by dynamic recrystallization (DRX) through strengthening and refinement of the grains. The degree of DRX depends on the retention of the uniformly distributed ultrafine SiCₚ particulates, the pinning effect of the interfaces promoted by the ultrafine grain (UFG) structures, and the higher deformation temperature. Reconstructed X-ray computed tomography data revealed different crack propagation mechanisms: a single-plane shear crack with multi-laminate fracture morphology in the as-sprayed deposits and those deformed at 473 K, whereas multi-plane shear cracks dominated in the deposits deformed at higher temperatures, resulting in delaminations at the multiphase interfaces. Three pertinent strengthening mechanisms, namely SiCₚ dispersion strengthening, refined-grain strengthening, and dislocation strengthening, existed in the gradient microstructure, and their detailed contributions to the thermomechanical properties are discussed.

Keywords: cold spraying, hot deformation, deformation temperature, thermomechanical properties, SiC/Al composite

Procedia PDF Downloads 72
687 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem— the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier Loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. 
Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
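The Charbonnier loss used to train both networks is a smooth variant of the L1 loss; a minimal sketch (the epsilon value is an assumption, and deep learning frameworks apply the same formula per pixel over tensors):

```python
import math

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt(diff^2 + eps^2), averaged over all pixels.
    Behaves like |diff| for large errors but stays differentiable at zero."""
    assert len(pred) == len(target)
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

print(charbonnier([1.0, 2.0], [1.0, 2.0]))   # perfect match -> loss ≈ eps
print(charbonnier([3.0, 2.0], [1.0, 2.0]))   # one pixel off by 2
```

Its smoothness near zero is what makes it preferable to plain L1 for capturing fine detail without unstable gradients, which is why it is combined here with the Wasserstein and perceptual terms.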

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 120
686 An Atomistic Approach to Define Continuum Mechanical Quantities in One Dimensional Nanostructures at Finite Temperature

Authors: Smriti, Ajeet Kumar

Abstract:

We present a variant of the Irving-Kirkwood procedure to obtain microscopic expressions for cross-section-averaged continuum fields, such as internal force and moment, in one-dimensional nanostructures in the non-equilibrium setting. In one-dimensional continuum theories for slender bodies, we deal with quantities such as mass, linear momentum, angular momentum, and strain energy densities, all defined per unit length. These quantities are obtained by integrating the corresponding pointwise (per unit volume) quantities over the cross-section of the slender body. However, no well-defined cross-section exists for these nanostructures at finite temperature. We thus define the cross-section of a nanorod to be an infinite plane which remains fixed in space as time progresses, and define the above continuum quantities by integrating the pointwise microscopic quantities over this plane. The method yields explicit expressions for both the potential and kinetic parts of the above quantities. We further specialize these expressions to helically repeating one-dimensional nanostructures in order to use them in molecular dynamics studies of extension, torsion, and bending of such nanostructures. As the Irving-Kirkwood procedure does not yield expressions for stiffnesses, we resort to a thermodynamic equilibrium approach to obtain the expressions for the axial force, twisting moment, bending moment, and the associated stiffnesses by taking the first and second derivatives of the Helmholtz free energy with respect to conjugate strain measures. The equilibrium approach yields expressions independent of kinetic terms. We then establish the equivalence of the expressions obtained using the two approaches. The derived expressions are used to understand the extension, torsion, and bending of single-walled carbon nanotubes at non-zero temperatures.
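The equilibrium route can be stated compactly. In an assumed notation (the abstract does not fix symbols), with A the Helmholtz free energy per unit length, ε the axial strain, and τ the twist per unit length, the conjugate forces and stiffnesses are first and second derivatives:

```latex
F = \frac{\partial A}{\partial \varepsilon}, \qquad
M_t = \frac{\partial A}{\partial \tau}, \qquad
K_{\text{axial}} = \frac{\partial^2 A}{\partial \varepsilon^2}, \qquad
K_{\text{twist}} = \frac{\partial^2 A}{\partial \tau^2}
```

with analogous expressions for the bending moment and bending stiffness with respect to curvature; since A is a free energy, these expressions contain no explicit kinetic terms, consistent with the statement above.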

Keywords: thermoelasticity, molecular dynamics, one dimensional nanostructures, nanotube buckling

Procedia PDF Downloads 107