Search results for: extended labelled dependency graph
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2121


1911 Programmed Speech to Text Summarization Using Graph-Based Algorithm

Authors: Hamsini Pulugurtha, P. V. S. L. Jagadamba

Abstract:

Programmed Speech to Text and Text Summarization Using Graph-based Algorithms can be utilized in meetings to obtain a short description of the meeting for future reference. The system provides signature verification using a Siamese neural network to confirm the identity of the user, and converts the audio recording provided by the user, which is in English, into English text using the speech recognition package available in Python. At times only a summary of the meeting is required; the solution to this is text summarization. Thus, the transcript is then summarized using natural language processing approaches such as unsupervised extractive text summarization algorithms.
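
As a rough illustration of the pipeline sketched in this abstract, the snippet below transcribes an English audio file with the Python speech_recognition package and then builds an unsupervised extractive summary with a TextRank-style sentence graph. The Google recognizer backend, the helper names and the naive sentence splitting are assumptions for illustration, not the authors' actual implementation.

```python
# A minimal sketch of the pipeline described above: transcribe an English
# audio file with the Python speech_recognition package, then produce an
# unsupervised extractive summary with a TextRank-style graph ranking.
# The recognizer backend and helper names are assumptions, not necessarily
# the authors' exact implementation.
import networkx as nx
import speech_recognition as sr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def transcribe(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)          # read the whole file
    return recognizer.recognize_google(audio, language="en-US")


def summarize(text: str, n_sentences: int = 3) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)                 # sentence-similarity graph
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph)                    # TextRank-style ranking
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:n_sentences])          # keep original sentence order
    return ". ".join(sentences[i] for i in chosen) + "."


if __name__ == "__main__":
    transcript = transcribe("meeting.wav")         # hypothetical input file
    print(summarize(transcript))
```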

Keywords: Siamese neural network, English speech, English text, natural language processing, unsupervised extractive text summarization

Procedia PDF Downloads 218
1910 Return to Bowel Function after Right versus Extended Right Hemicolectomy: A Retrospective Review

Authors: Zak Maas, Daniel Carson, Rachel McIntyre, Mark Omundsen, Teresa Holm

Abstract:

Aim: After hemicolectomy, a period of obligatory bowel dysfunction is expected, termed postoperative ileus (POI). Prolonged postoperative ileus (PPOI), typically lasting four or more days, is associated with higher morbidity and extended inpatient stay. This leads to significant financial and resource-related burdens on healthcare systems. Several studies, including a meta-analysis, have compared rates of PPOI in left vs right hemicolectomy and suggest that right-sided resections may be more likely to result in PPOI. Our study aims to further investigate whether significant differences in PPOI and obligatory POI exist between right and extended right hemicolectomy. Methods: This is a retrospective review assessing rates of PPOI in patients who underwent right vs extended right hemicolectomy at Tauranga Hospital. Patients were divided and compared according to approach (open versus laparoscopic) and acuity (acute versus elective). Exclusion criteria included synchronous major operations and patients preoperatively on parenteral nutrition. The primary outcome was PPOI as pre-defined in the contemporary literature. Secondary outcomes were time to passage of flatus, passage of stool, toleration of oral diet and rate of complications. Results: There were 669 patients identified for analysis (507 laparoscopic vs 162 open; 194 acute vs 475 elective). Early analysis indicates that the rate of PPOI was significantly increased in patients undergoing extended right hemicolectomy. Factors including age, gender, ethnicity, preoperative haemoglobin, preoperative albumin and diagnosis of inflammatory bowel disease were examined by multivariate analysis to determine correlation with PPOI. Conclusion: PPOI is a common complication of hemicolectomy surgery. The higher rate of PPOI in extended right vs right hemicolectomy warrants further research into determining the cause. This study also examines other factors which may contribute to PPOI.

Keywords: hemicolectomy, colorectal, complications, postoperative ileus

Procedia PDF Downloads 88
1909 LLM-Powered User-Centric Knowledge Graphs for Unified Enterprise Intelligence

Authors: Rajeev Kumar, Harishankar Kumar

Abstract:

Fragmented data silos within enterprises impede the extraction of meaningful insights and hinder efficiency in tasks such as product development, client understanding, and meeting preparation. To address this, we propose a system-agnostic framework that leverages large language models (LLMs) to unify diverse data sources into a cohesive, user-centered knowledge graph. By automating entity extraction, relationship inference, and semantic enrichment, the framework maps interactions, behaviors, and data around the user, enabling intelligent querying and reasoning across various data types, including emails, calendars, chats, documents, and logs. Its domain adaptability supports applications in contextual search, task prioritization, expertise identification, and personalized recommendations, all rooted in user-centric insights. Experimental results demonstrate its effectiveness in generating actionable insights, enhancing workflows such as trip planning, meeting preparation, and daily task management. This work advances the integration of knowledge graphs and LLMs, bridging the gap between fragmented data systems and intelligent, unified enterprise solutions focused on user interactions.
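
As a rough sketch of how such a user-centric knowledge graph could be assembled, the snippet below merges (subject, relation, object) triples from heterogeneous documents into a single graph with networkx. The extract_triples() stub stands in for an LLM extraction prompt and is purely hypothetical; it is not the framework's API or any specific LLM interface.

```python
# A minimal sketch of the user-centric knowledge-graph construction described
# above. The extract_triples() stub stands in for an LLM prompt that returns
# (subject, relation, object) triples; it is a hypothetical placeholder, not
# the authors' implementation or a real LLM API.
from typing import Iterable, List, Tuple
import networkx as nx

Triple = Tuple[str, str, str]


def extract_triples(document: str) -> List[Triple]:
    """Placeholder for LLM-based entity and relation extraction."""
    # In the real framework this would call an LLM with an extraction prompt.
    return [("user", "attended", "design review"),
            ("design review", "scheduled_on", "2024-05-02")]


def build_user_graph(documents: Iterable[str]) -> nx.MultiDiGraph:
    graph = nx.MultiDiGraph()
    for doc in documents:
        for subj, rel, obj in extract_triples(doc):
            graph.add_edge(subj, obj, relation=rel)   # relation kept as edge label
    return graph


if __name__ == "__main__":
    g = build_user_graph(["email body ...", "calendar entry ..."])
    # Simple "intelligent querying": everything directly connected to the user.
    for _, obj, data in g.out_edges("user", data=True):
        print(f"user -[{data['relation']}]-> {obj}")
```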

Keywords: knowledge graph, entity extraction, relation extraction, LLM, activity graph, enterprise intelligence

Procedia PDF Downloads 2
1908 A Phenomenological Study of Sports for the Analysis of Soccer Game: On Embodiment of the Goal Type Ball Games of Team Sports

Authors: K. Kiniwa, S. Kitagawa, M. Kawamoto, H. Uchiyama

Abstract:

This study aims to identify phenomenologically the embodiment of soccer in order to analyze soccer games. In this paper, the authors focus on the embodiment of sports and on the embodiment of the goal-type ball games of team sports. The authors show that the embodiment of sports is represented by the inverse proportional body. This structure (body scheme) of the intercorporeality of sports can be compared to the symbolic figure of Uroboros, a monster formed by two snakes connected at their tails. The embodiment of the goal-type ball games of team sports is characterized by dependency on situation and by complexity. In doing so, the study reveals that soccer is a sensitive and emotional sport.

Keywords: intercorporeality, structure, body scheme, Uroboros, inverse proportional body, dependency on situation, complexity

Procedia PDF Downloads 302
1907 Ultraviolet Visible Spectroscopy Analysis on Transformer Oil by Correlating It with Various Oil Parameters

Authors: Rajnish Shrivastava, Y. R. Sood, Priti Pundir, Rahul Srivastava

Abstract:

The power transformer is one of the most important devices used in a power station. Its life is shortened by the several faults that impinge upon it and by ageing, so diagnosis of the oil becomes necessary for fault analysis. Due to chemical, electrical, thermal and mechanical stresses, the insulating material in the power transformer degrades, and it is important to regularly assess the condition of the oil and the remaining life of the power transformer. In this paper, the UV-VIS absorption graph area is correlated with the moisture content, flash point, interfacial tension (IFT) and density of transformer oil, since the UV-VIS absorption graph area varies with these transformer oil parameters. By obtaining the correlation between the different oil parameters and the UV-VIS absorption area, the decay contents of transformer oil can be predicted.
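
A minimal sketch of the correlation step described here: the area under each sample's UV-VIS absorption curve is obtained by trapezoidal integration and correlated with one oil parameter (moisture content is used as the example). All numbers are illustrative placeholders, not measured data.

```python
# A minimal sketch of the correlation analysis described above: the area under
# each sample's UV-VIS absorption curve is computed by trapezoidal integration
# and then correlated with a measured oil parameter (moisture content here).
# All numerical values below are illustrative placeholders, not measured data.
import numpy as np
from scipy.stats import pearsonr

wavelengths = np.linspace(300, 700, 401)                        # nm
# absorbance spectra for a few oil samples (placeholder curve shapes)
spectra = [np.exp(-((wavelengths - 360) / w) ** 2) for w in (40, 55, 70, 90)]

areas = np.array([np.trapz(a, wavelengths) for a in spectra])   # graph area
moisture_ppm = np.array([12.0, 18.5, 25.1, 33.4])               # placeholder

r, p_value = pearsonr(areas, moisture_ppm)
slope, intercept = np.polyfit(areas, moisture_ppm, 1)           # linear fit
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
print(f"moisture ~ {slope:.3f} * area + {intercept:.3f}")
```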

Keywords: breakdown voltage (BDV), interfacial Tension (IFT), moisture content, ultra violet-visible rays spectroscopy (UV-VIS)

Procedia PDF Downloads 642
1906 Application of Supervised Deep Learning-based Machine Learning to Manage Smart Homes

Authors: Ahmed Al-Adaileh

Abstract:

Renewable energy sources, domestic storage systems, controllable loads and machine learning technologies will be key components of future smart home management systems. An energy management scheme is presented that uses a Deep Learning (DL) approach to support the smart home management system, which consists of a standalone photovoltaic system, a storage unit, a heating, ventilation and air-conditioning system, and a set of conventional and smart appliances. The objective of the proposed scheme is to apply DL-based machine learning to predict various running parameters within a smart home's environment in order to achieve maximum comfort levels for occupants, reduced electricity bills, and less dependency on the public grid. The problem is formulated using reinforcement learning, where decisions are taken by applying a continuous-time Markov decision process. The main contribution of this research is the proposed framework that applies DL to enrich the system's supervised dataset and thereby offer unlimited opportunities to effectively support smart home systems. A case study involving a set of conventional and smart appliances with dedicated processing units in an inhabited building demonstrates the validity of the proposed framework, and a visualization graph shows the 'before' and 'after' results.

Keywords: smart homes systems, machine learning, deep learning, Markov Decision Process

Procedia PDF Downloads 202
1905 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections

Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang

Abstract:

Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as their economic and architectural advantages. However, the use of this configuration in construction is limited by the difficulties in connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcoming this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. In recent years, research in structural performance has moved towards a more sophisticated and efficient approach consisting of machine learning algorithms used to generate metamodels. This method reduces the need for developing complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction through the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts. It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
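
A minimal sketch of the metamodelling idea described above, assuming a small set of hypothetical design parameters: an artificial neural network regressor maps design parameters to predicted stiffness and stress. The feature names and the synthetic training data are placeholders; the actual metamodel is trained on fused finite element and experimental results.

```python
# A minimal sketch of the metamodelling step described above: an artificial
# neural network maps connection design parameters to predicted stiffness and
# stress. Feature names and the synthetic training data are illustrative
# assumptions; the real metamodel is trained on fused FE and experimental data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# design parameters: [bolt diameter, tube thickness, concrete strength, bolt grade]
X = rng.uniform([16, 6, 25, 8.8], [24, 12, 60, 10.9], size=(200, 4))
# placeholder targets: [initial stiffness, ultimate stress]
y = np.column_stack([
    50 * X[:, 0] + 30 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 20, 200),
    8 * X[:, 0] + 5 * X[:, 2] + rng.normal(0, 5, 200),
])

metamodel = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
metamodel.fit(X, y)

new_design = [[20, 10, 40, 10.9]]           # hypothetical candidate design
stiffness, stress = metamodel.predict(new_design)[0]
print(f"predicted stiffness: {stiffness:.1f}, predicted stress: {stress:.1f}")
```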

Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling

Procedia PDF Downloads 158
1904 Individualism/Collectivism and Extended Theory of Planned Behavior

Authors: Ela Ari, Aysima Findikoglu

Abstract:

Consumers' switching between GSM operators has been an important research issue since the rise of competitive offers. Recent research has looked at consumer switching behavior through the theory of planned behavior, but has not yet extended the theory with identity, psycho-social and cultural influences within the service context. This research explores an extended version of the theory of planned behavior that includes social and financial risks and brand loyalty. Moreover, the role of individualism and collectivism at the individual level is investigated in a collectivistic culture that is moving toward individualism due to changing family relationships, use of technology and education. Our preliminary analysis shows that financial risk and vertical individualism are significant determinants of the intention to switch. The study also investigates the relationships between social risk and intention, subjective norm, and perceived behavioral control. The relationship between individualism/collectivism and attitudes has also been examined within a service industry. Implications for marketing managers and scholars are also discussed.

Keywords: attitude, individualism, intention, subjective norm

Procedia PDF Downloads 458
1903 Realization Mode and Theory for Extensible Music Cognition Education: Taking Children's Music Education as an Example

Authors: Yumeng He

Abstract:

The purpose of this paper is to establish the “extenics” of children's music education, introducing extenics thought and methods into the field of children's music education. Discussions are made, from the perspective of children's music education, on how to generate new music cognition from music cognition, how to generate new music education from music education, and how to generate new music learning from music learning. The research covers the extensibility of music art, the extensibility of music education, the extensibility of music capability and the extensibility of music learning. Results of this study indicate that the thought and research methods of children's extended music education not only develop the extenics concept and its ideological methods but also bring a brand-new perspective and innovative research approach to the discussion of children's music education. As indicated by the research, children's extended music education has extended the horizon of children's music education and has endowed the field with a new thought and research method.

Keywords: comprehensive evaluations, extension thought, extension cognition music education, extensibility

Procedia PDF Downloads 225
1902 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs

Authors: Taysir Soliman

Abstract:

One of the currently popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering, k-means and other techniques. However, one of the disadvantages of SC is that it is time-consuming, because it requires computing the eigenvectors. To overcome this disadvantage, a number of approaches have been proposed, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Some of PIC's advantages are: 1) scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) computing a linear combination of the eigenvectors in linear time. However, its main disadvantage is an inter-class collision problem, because the single pseudo-eigenvector it uses is not always sufficient. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of PIC while retaining its efficiency. In this paper, we develop Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower running time than the other compared algorithms.
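
For reference, a minimal sketch of basic PIC as summarized above: power iteration on the row-normalized affinity matrix yields a single pseudo-eigenvector, which is then clustered with k-means. The deflation step of DPIC and the Spark-based parallel version are not reproduced; the toy affinity matrix is an assumption for illustration.

```python
# A minimal sketch of basic Power Iteration Clustering (PIC) as described
# above: power iteration on the row-normalized affinity matrix produces one
# pseudo-eigenvector, which is then clustered with k-means. The deflation step
# of DPIC and the Spark-based parallel version are not shown here.
import numpy as np
from sklearn.cluster import KMeans


def power_iteration_clustering(affinity: np.ndarray, n_clusters: int,
                               n_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    W = affinity / affinity.sum(axis=1, keepdims=True)   # row-normalize
    v = np.random.default_rng(0).random(W.shape[0])
    v /= np.abs(v).sum()
    for _ in range(n_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()
        # PIC relies on stopping before full convergence to the trivial
        # constant eigenvector, so the cluster-revealing gaps survive.
        if np.abs(v_new - v).max() < tol:
            break
        v = v_new
    # the pseudo-eigenvector is 1-D; k-means separates its value ranges
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(v.reshape(-1, 1))


if __name__ == "__main__":
    # two tightly connected blocks with weak cross-links -> two clusters
    A = np.block([[np.ones((5, 5)), 0.01 * np.ones((5, 5))],
                  [0.01 * np.ones((5, 5)), np.ones((5, 5))]])
    print(power_iteration_clustering(A, n_clusters=2))
```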

Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graph

Procedia PDF Downloads 189
1901 Memetic Algorithm for Solving the One-To-One Shortest Path Problem

Authors: Omar Dib, Alexandre Caminada, Marie-Ange Manier

Abstract:

The purpose of this study is to introduce a novel approach to solving the one-to-one shortest path problem. A directed connected graph is assumed in which all edge weights are positive. Our method is based on a memetic algorithm that combines a genetic algorithm (GA) with a variable neighborhood search method (VNS). We compare our approximate method with two exact algorithms: Dijkstra's algorithm and Integer Programming (IP). We conducted experiments using randomly generated, complete and real graph instances. In most case studies, the numerical results show that our method stays within a 5% average gap to optimality while outperforming the exact methods in running time: on average, our algorithm is 20 times faster than Dijkstra's algorithm and more than 1000 times faster than IP. The details of the experimental results are also discussed and presented in the paper.
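
For context, a minimal sketch of the exact baseline mentioned above: Dijkstra's algorithm for the one-to-one shortest path problem on a directed graph with positive weights, terminating as soon as the target is settled. The memetic GA + VNS heuristic itself is not reproduced here.

```python
# A minimal sketch of the exact baseline mentioned above: Dijkstra's algorithm
# for the one-to-one shortest path problem on a directed graph with positive
# edge weights. The memetic GA + VNS heuristic itself is not reproduced here.
import heapq
from typing import Dict, Hashable, List, Tuple

Graph = Dict[Hashable, List[Tuple[Hashable, float]]]   # node -> [(neighbor, weight)]


def dijkstra(graph: Graph, source: Hashable, target: Hashable) -> float:
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d                     # one-to-one: stop once the target is settled
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                  # target unreachable


if __name__ == "__main__":
    g: Graph = {"A": [("B", 2.0), ("C", 5.0)],
                "B": [("C", 1.0), ("D", 4.0)],
                "C": [("D", 1.0)]}
    print(dijkstra(g, "A", "D"))         # expected 4.0 via A -> B -> C -> D
```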

Keywords: shortest path problem, Dijkstra’s algorithm, integer programming, memetic algorithm

Procedia PDF Downloads 467
1900 Biodistribution of Fluorescence-Labelled Epidermal Growth Factor Protein from Slow Release Nanozolid Depots in Mouse

Authors: Stefan Gruden, Charlott Brunmark, Bo Holmqvist, Erwin D. Brenndorfer, Martin Johansson, Jian Liu, Ying Zhao, Niklas Axen, Moustapha Hassan

Abstract:

Aim: The study was designed to evaluate the ability of the calcium sulfate-based NanoZolid® drug delivery technology to locally release the epidermal growth factor (EGF) protein while maintaining its biological activity. Methods: NanoZolid-formulated EGF protein labelled with a near-infrared dye (EGF-NIR) depots or EGF-NIR dissolved in PBS were injected subcutaneously into mice bearing EGF receptor (EGFR) positive human A549 lung cancer tumors inoculated subcutaneously. The release and biodistribution of the EGF-NIR were investigated in vivo longitudinally up to 96 hours post-administration, utilizing whole-body fluorescence imaging. In order to confirm the in vivo findings, histological analysis of tumor cryosections was performed to investigate EGF-NIR fluorescent signal and EGFR expression level by immunofluorescence labelling. Results: The in vivo fluorescence imaging showed a controlled release profile of the EGF-NIR loaded in the NanoZolid depots compared to free EGF-NIR. Histological analysis of the tumors further demonstrated a prevailing distribution of EGF-NIR in regions with high levels of EGFR expression. Conclusion: Calcium sulfate based depots can be used to formulate EGF while maintaining its biological activity, e.g., receptor binding capability. This may have good clinical potential for local delivery of biomolecules to enhance treatment efficacy and minimize systemic adverse effects.

Keywords: bioresorbable, calcium sulfate, controlled release, NanoZolid

Procedia PDF Downloads 166
1899 The Need for Automation in the Domestic Food Processing Sector and its Impact

Authors: Shantam Gupta

Abstract:

The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs, essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and are hence designated as primary food producers; those who depend on these primary food producers for food form the primary consumers' class (herbivores). Some of the organisms relying on the primary consumers are the secondary food consumers (carnivores). There is a third class of consumers, called tertiary or apex food consumers, that feed on both the primary and secondary food consumers. Humans form an essential part of the apex predators and are generally at the top of the food chain. Still, further examination of the food habits of the modern human, i.e. Homo sapiens, reveals that humans depend on other individuals for preparing their food. The old notion of eating raw food is long gone, and food processing has become deeply entrenched in the life of the modern human. This has led to an increase in dependence on other individuals for 'processing' food before it can actually be consumed, and thereby to a further shift of humans within the consumer classes of the food chain. The effects of this shift are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Also, most individuals depend on others for the preparation of their food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and can have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual. A comparison of industrial food processing and processing at domestic platforms (households and restaurants) is made to give an idea of the present scenario of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption, and the high frequency of meals (more than two a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is proposed, with the mission of reducing the inter-dependency and human effort required for the preparation of food (by automating the food preparation process) and making individuals more self-reliant. The impact of developing this product is also discussed. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).

Keywords: automation, food processing, impact on economy, processing individual

Procedia PDF Downloads 470
1898 Exploring Solutions in Extended Horava-Lifshitz Gravity

Authors: Aziza Altaibayeva, Ertan Güdekli, Ratbay Myrzakulov

Abstract:

In this letter, we explore exact solutions of Horava-Lifshitz gravity. We use an extension of this theory with a first-order dynamical lapse function. The equations of motion have been derived in a fully consistent scenario. We assume that there are spherically symmetric families of exact solutions of this extended theory of gravity. We obtain exact solutions and investigate their singularity structures. In particular, an exact solution with a regular horizon is found.

Keywords: quantum gravity, Horava-Lifshitz gravity, black hole, spherically symmetric space times

Procedia PDF Downloads 581
1897 Activation Parameters of the Low Temperature Creep Controlling Mechanism in Martensitic Steels

Authors: M. Münch, R. Brandt

Abstract:

Martensitic steels with an ultimate tensile strength beyond 2000 MPa are applied in the powertrain of vehicles due to their excellent fatigue strength and high creep resistance. However, the creep controlling mechanism in martensitic steels at ambient temperatures up to 423 K is not evident. The purpose of this study is to review the low temperature creep (LTC) behavior of martensitic steels at temperatures from 363 K to 523 K. Thus, the validity of a logarithmic creep law is reviewed and the stress and temperature dependence of the creep parameters α and β are revealed. Furthermore, creep tests are carried out, which include stepped changes in temperature or stress, respectively. On one hand, the change of the creep rate due to a temperature step provides information on the magnitude of the activation energy of the LTC controlling mechanism and on the other hand, the stress step approach provides information on the magnitude of the activation volume. The magnitude, the temperature dependency, and the stress dependency of both material specific activation parameters may deliver a significant contribution to the disclosure of the nature of the LTC rate controlling mechanism.
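
For reference, the relations implied by this description, written in their common textbook form (the authors' exact formulation may differ): the logarithmic creep law with parameters α and β, and the step-test definitions of the apparent activation energy and activation volume.

```latex
% A hedged sketch of the relations implied above. The logarithmic creep law and
% the standard step-test definitions of the activation parameters are written
% in their common textbook form; the authors' exact formulation may differ.
\begin{align}
  \varepsilon(t) &= \varepsilon_0 + \alpha \,\ln\!\left(1 + \beta t\right)
      && \text{(logarithmic creep law)} \\
  Q &= -k_B \,\frac{\partial \ln \dot{\varepsilon}}{\partial (1/T)}\bigg|_{\sigma}
      && \text{(apparent activation energy, from a temperature step)} \\
  V^{*} &= k_B T \,\frac{\partial \ln \dot{\varepsilon}}{\partial \sigma}\bigg|_{T}
      && \text{(apparent activation volume, from a stress step)}
\end{align}
```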

Keywords: activation parameters, creep mechanisms, high strength steels, low temperature creep

Procedia PDF Downloads 171
1896 A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua

Authors: Shervin Khazaeli, Shahab Haj-zamani

Abstract:

Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and modeling of local failure (i.e. discontinua) as well as modeling the in-contact structures (i.e. continua) are of the great interest among researchers. The aim of this research is to meet the requirements with respect to the modeling of the above-mentioned two different domains simultaneously. To this end, a coupled numerical method is introduced based on Discrete Element Method (DEM) and eXtended-Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may consist of different types of discontinuities. For verification purposes, the new coupled approach is utilized to examine benchmark problems including different contacts between/within continua and discontinua. Results are validated by comparison with those of existing analytical and numerical solutions. This study proves that extended-finite-discrete element method can be used to robustly analyze not only contact problems, but also other types of discontinuities in continua such as (i) crack formations and propagations, (ii) voids and bimaterial interfaces, and (iii) combination of previous cases. In essence, the proposed method can be used vastly in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.

Keywords: contact problems, discrete element method, extended-finite element method, soil-structure interaction

Procedia PDF Downloads 505
1895 Estimation of the Temperatures in an Asynchronous Machine Using Extended Kalman Filter

Authors: Yi Huang, Clemens Guehmann

Abstract:

In order to monitor the thermal behavior of an asynchronous machine with a squirrel cage rotor, a 9th-order extended Kalman filter (EKF) algorithm is implemented to estimate the temperatures of the stator windings, the rotor cage and the stator core. The state-space equations of the EKF are established based on the electrical, mechanical and simplified thermal models of an asynchronous machine. The asynchronous machine with the simplified thermal model in Dymola is compiled as a DymolaBlock, a physical model in MATLAB/Simulink. The coolant air temperature, three-phase voltages and currents are exported from the physical model and are processed by the EKF estimator as inputs. Compared to the temperatures exported from the physical model of the machine, the three sets of temperatures can be estimated quite accurately by the EKF estimator. The online EKF estimator is independent of the machine control algorithm and can work under any speed and load condition as long as the stator current system is nonzero.
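
A generic sketch of the EKF predict/update loop used in this kind of estimator. The machine-specific 9th-order state, the thermal/electrical model f, the measurement model h and their Jacobians are placeholders here; only the filter structure is shown, with a toy one-state cooling example.

```python
# A minimal, generic sketch of the extended Kalman filter loop described above.
# The machine-specific 9th-order state, its thermal/electrical model f(), the
# measurement model h() and their Jacobians F, H are placeholders here; only
# the predict/update structure is shown.
import numpy as np


def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # ---- predict ----
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # ---- update ----
    H = H_jac(x_pred)
    y = z - h(x_pred)                                  # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new


if __name__ == "__main__":
    # toy 1-state example: first-order heating towards the coolant temperature u
    f = lambda x, u: x + 0.01 * (u - x)
    F_jac = lambda x, u: np.array([[0.99]])
    h = lambda x: x                                    # temperature measured directly
    H_jac = lambda x: np.array([[1.0]])
    x, P = np.array([20.0]), np.eye(1)
    x, P = ekf_step(x, P, u=np.array([40.0]), z=np.array([21.0]),
                    f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                    Q=0.01 * np.eye(1), R=0.5 * np.eye(1))
    print(x, P)
```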

Keywords: asynchronous machine, extended Kalman filter, resistance, simulation, temperature estimation, thermal model

Procedia PDF Downloads 285
1894 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are a type of chemical messenger in the brain that transmits signals related to perceived emotions, and the cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
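
A minimal sketch of the feature pipeline described above: MFCCs extracted from an audio file are treated as a 2-D 'image' and fed to a small convolutional network. The layer sizes and the 7-class output are illustrative assumptions and not the published Emo-CNN architecture.

```python
# A minimal sketch of the idea described above: MFCC features extracted from an
# audio file are treated as a 2-D "image" and fed to a small convolutional
# network. The layer sizes and the 7-class output are illustrative assumptions;
# they are not the published Emo-CNN architecture.
import librosa
import numpy as np
import tensorflow as tf


def mfcc_image(wav_path: str, n_mfcc: int = 40, frames: int = 128) -> np.ndarray:
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    mfcc = librosa.util.fix_length(mfcc, size=frames, axis=1)   # pad/trim in time
    return mfcc[..., np.newaxis]                                # (n_mfcc, frames, 1)


def build_model(n_classes: int = 7) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(40, 128, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])


model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, ...) would follow once a labelled corpus such as
# Emo-DB or SAVEE has been converted with mfcc_image().
```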

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 154
1893 Encapsulation of Volatile Citronella Essential oil by Coacervation: Efficiency and Release Kinetic Study

Authors: Rafeqah Raslan, Mastura AbdManaf, Junaidah Jai, Istikamah Subuki, Ana Najwa Mustapa

Abstract:

The volatile citronella essential oil was encapsulated by simple coacervation and by complex coacervation using gum Arabic and gelatin as wall materials, with glutaraldehyde as the crosslinking agent. A citronella standard calibration graph was developed (R² = 0.9523) for the accurate determination of the encapsulation efficiency and for the release study. The release kinetics were analyzed based on Fick's law of diffusion for polymeric systems, and a linear graph of the log of the fraction released versus the log of time was constructed to determine the release rate constant, k, and the diffusion exponent, n. Both coacervation methods in the present study produced encapsulation efficiencies of around 94%. The capsule morphology analysis supported the release kinetic mechanisms of the produced capsules for both coacervation processes.
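
A minimal sketch of the release-kinetics analysis described here, assuming the usual power-law (Korsmeyer-Peppas type) form Mt/M∞ = k·tⁿ: a linear fit of log(fraction released) versus log(time) gives the exponent n as the slope and k from the intercept. The data points below are illustrative placeholders.

```python
# A minimal sketch of the release-kinetics analysis described above: the
# fraction released is fitted to the power law Mt/Minf = k * t^n, so a linear
# fit of log(fraction) versus log(time) yields n (slope) and k (intercept).
# The time points and release fractions below are illustrative placeholders.
import numpy as np

time_h = np.array([0.5, 1, 2, 4, 8, 12])                       # hours
fraction_released = np.array([0.08, 0.12, 0.18, 0.27, 0.41, 0.50])

slope, intercept = np.polyfit(np.log10(time_h), np.log10(fraction_released), 1)
n = slope                      # diffusion (release) exponent
k = 10 ** intercept            # release rate constant

print(f"n = {n:.3f}, k = {k:.3f} h^-n")
# For spherical capsules, n <= 0.43 suggests Fickian diffusion and
# 0.43 < n < 0.85 suggests anomalous (non-Fickian) transport.
```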

Keywords: simple coacervation, complex coacervation, encapsulation efficiency, release kinetic study

Procedia PDF Downloads 316
1892 Stress Concentration Trend for Combined Loading Conditions

Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo

Abstract:

Stress concentration occurs when there is an abrupt change in geometry in a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to determine, as it occurs directly at the point of minimum area, and strain gauges have yet to be developed that can analyze stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in the determination of the maximum stress a part can withstand. These graphs were developed from historical data yielded from experimentation. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software, and the results of this analysis will be validated through further testing. The 3D-modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens using a tensile testing machine. Once the data are validated, the unique stress concentration graph will be submitted for publication so that it can aid engineers in future projects.
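
A small worked example of the relation described above, sigma_max = Kt * sigma_nominal, with purely illustrative numbers; in practice Kt is read from the appropriate stress concentration graph for the geometry and loading at hand.

```python
# A small worked example of the relation described above: the maximum stress is
# the stress concentration factor (read from a chart for the given geometry)
# multiplied by the nominal stress. The numbers below are illustrative only.
force_N = 10_000.0
net_area_mm2 = 50.0                       # minimum cross-section of the part
k_t = 2.5                                 # geometric factor from a Kt chart

sigma_nominal = force_N / net_area_mm2    # MPa (N/mm^2)
sigma_max = k_t * sigma_nominal           # stress at the discontinuity

print(f"nominal stress = {sigma_nominal:.1f} MPa, peak stress = {sigma_max:.1f} MPa")
```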

Keywords: stress concentration, finite element analysis, finite element models, combined loading

Procedia PDF Downloads 444
1891 Mechanical Properties and Microstructures of the Directional Solidified Zn-Al-Cu Alloy

Authors: Mehmet Izzettin Yilmazer, Emin Cadirli

Abstract:

A Zn-7 wt.% Al-2.96 wt.% Cu eutectic alloy was directionally solidified upwards with different temperature gradients (from 6.70 K/mm to 10.67 K/mm) at a constant growth rate (16.4 µm/s), and also at different growth rates (from 8.3 µm/s to 166 µm/s) at a constant temperature gradient (10.67 K/mm), using a Bridgman-type growth apparatus. The values of the eutectic spacing were measured from longitudinal and transverse sections of the samples. The dependency of the microstructure on G and V was determined by linear regression analysis, and the experimental relations were found to be λl = 8.953 V^(-0.49), λt = 5.942 V^(-0.42) and λl = 0.008 G^(-1.23), λt = 0.024 G^(-0.93). The microhardness of the directionally solidified samples was measured using a microhardness test device, and its dependence on the temperature gradient and growth rate was analyzed. The dependency of the microhardness on G and V was likewise determined by linear regression analysis as HVl = 110.66 V^(0.02), HVt = 111.94 V^(0.02) and HVl = 69.66 G^(0.17), HVt = 68.86 G^(0.18). The experimental results show that the microhardness of the directionally solidified Zn-Al-Cu alloy increases with increasing growth rate. The results obtained in this work were compared with previous similar experimental results.
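
A minimal sketch of the regression used to obtain relations of the form λ = a·Vⁿ: a linear fit of log(λ) versus log(V). The spacing values below are placeholders chosen only to illustrate the procedure, not the measured data.

```python
# A minimal sketch of the regression used above: relationships of the form
# lambda = a * V^n are obtained by a linear fit of log(lambda) versus log(V).
# The spacing values below are illustrative placeholders, not the measured data.
import numpy as np

growth_rate = np.array([8.3, 16.4, 41.0, 83.0, 166.0])        # micrometres per second
eutectic_spacing = np.array([3.1, 2.2, 1.4, 1.0, 0.72])       # micrometres (placeholder)

n, log_a = np.polyfit(np.log(growth_rate), np.log(eutectic_spacing), 1)
a = np.exp(log_a)
print(f"lambda ~ {a:.3f} * V^({n:.2f})")   # compare with lambda_l = 8.953 V^(-0.49)
```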

Keywords: directional solidification, eutectic alloys, microstructure, microhardness

Procedia PDF Downloads 451
1890 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather not complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization in the solutions can come later, and hence, the resources deployed towards attaining the solution are higher than what they would have been in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds along, the individuals working on the project face fresh challenges as a team and are better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, and know better of the consequences and causes of possible failure, and thus integrate the adequate tolerances wherever required. Furthermore, as the team graduates in terms of strength, acquires prodigious knowledge, and begins its efficient transfer, the individuals in charge of the project along with the managers focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic, at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on appropriate logic required for tackling a problem. Key-pointers spotted in successfully implemented solutions are noted from the analysis of the responses and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis, and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher, but the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ reduces and as a result, the slope of the graph gradually increases. On an overview, the graph takes a parabolic shape (beginning on the origin), as with each resource investment, ideally, the difference keeps on decreasing, and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities, ideally make sure that the investment is being made on a proportionally high logic for a larger problem, that is, ideally the slope of the graph increases with the plotting of each point.

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 108
1889 Human Posture Estimation Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information together to obtain a set of high-confidence human key points. We used these as the input for the Spatio-Temporal Graph Convolution (ST-GCN). ST-GCN is a deep learning model used for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.

Keywords: multi-view, pose estimation, ST-GCN, joint fusion

Procedia PDF Downloads 70
1888 A Combinatorial Representation for the Invariant Measure of Diffusion Processes on Metric Graphs

Authors: Michele Aleandri, Matteo Colangeli, Davide Gabrielli

Abstract:

We study a generalization to a continuous setting of the classical Markov chain tree theorem. In particular, we consider an irreducible diffusion process on a metric graph. The unique invariant measure has an atomic component on the vertices and an absolutely continuous part on the edges. We show that the corresponding density at x can be represented by a normalized superposition of the weights associated to the metric arborescences oriented toward the point x. A metric arborescence is a metric tree oriented towards its root. The weight of each oriented metric arborescence is obtained as the product of the exponentials of integrals of the form ∫ b/σ², where b is the drift and σ² is the diffusion coefficient, along the oriented edges, times a weight for each node determined by the local orientation of the arborescence around the node, times the inverse of the diffusion coefficient at x. The metric arborescences are obtained by cutting the original metric graph along some edges.
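
As a point of reference for the edge weights described above, in the scalar case (a single edge) the exponential of the integral of the drift over the diffusion coefficient reduces to the classical stationary density of a one-dimensional diffusion; the metric-graph result weights each oriented edge of an arborescence by such a factor and normalizes over all arborescences rooted at x.

```latex
% A hedged illustration of the edge weights referenced above. In the scalar
% case, the exponential of the integral of the drift over the diffusion
% coefficient is exactly the classical stationary density of a one-dimensional
% diffusion dX_t = b(X_t) dt + sigma(X_t) dW_t:
\begin{equation}
  \rho(x) \;\propto\; \frac{1}{\sigma^{2}(x)}
      \exp\!\left( \int^{x} \frac{2\,b(y)}{\sigma^{2}(y)} \, dy \right).
\end{equation}
% The metric-graph result weights each oriented edge of an arborescence by such
% an exponential factor and normalizes over all arborescences rooted at x.
```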

Keywords: diffusion processes, metric graphs, invariant measure, reversibility

Procedia PDF Downloads 172
1887 Efficient Heuristic Algorithm to Speed Up Graphcut in Gpu for Image Stitching

Authors: Tai Nguyen, Minh Bui, Huong Ninh, Tu Nguyen, Hai Tran

Abstract:

The GraphCut algorithm has been widely utilized to solve various types of computer vision problems. Its expensive computational cost has encouraged many researchers to improve the speed of the algorithm. Recent works have proposed schemes that work on parallel computing platforms such as CUDA. However, the problem of low convergence speed prevents the usage of GraphCut for real-time applications. In this paper, we propose a global suppression heuristic to boost the convergence of the algorithm. A parallel implementation of the GraphCut algorithm on CUDA designed for the image stitching problem is introduced. Our method achieves up to a 3× speed-up on a graph of size 80 × 480 compared to the best sequential GraphCut algorithm, while producing satisfactory stitched images suitable for panorama applications. Our source code will soon be available for further research.
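
A minimal sketch of the graph-cut formulation behind seam finding in image stitching: overlap pixels become nodes, edge capacities encode the colour difference between the two images, and the s-t minimum cut gives the seam. The CUDA-parallel solver and the global suppression heuristic proposed in the paper are not reproduced; the grid size and capacities are placeholders.

```python
# A minimal sketch of the graph-cut formulation behind the stitching step
# described above: pixels in the overlap region become nodes, edge capacities
# encode the colour difference between the two images, and the s-t minimum cut
# gives the stitching seam. The CUDA-parallel solver and the global suppression
# heuristic proposed in the paper are not reproduced here.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
h, w = 6, 8
diff = rng.random((h, w))                 # |imageA - imageB| in the overlap (placeholder)

G = nx.DiGraph()
for y in range(h):
    for x in range(w):
        if x + 1 < w:                     # horizontal neighbours
            c = diff[y, x] + diff[y, x + 1]
            G.add_edge((y, x), (y, x + 1), capacity=c)
            G.add_edge((y, x + 1), (y, x), capacity=c)
        if y + 1 < h:                     # vertical neighbours
            c = diff[y, x] + diff[y + 1, x]
            G.add_edge((y, x), (y + 1, x), capacity=c)
            G.add_edge((y + 1, x), (y, x), capacity=c)

# terminals: left column must come from image A, right column from image B
for y in range(h):
    G.add_edge("A", (y, 0), capacity=float("inf"))
    G.add_edge((y, w - 1), "B", capacity=float("inf"))

cut_value, (side_a, side_b) = nx.minimum_cut(G, "A", "B")
print(f"cut cost = {cut_value:.3f}, pixels taken from image A = {len(side_a) - 1}")
```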

Keywords: CUDA, graph cut, image stitching, texture synthesis, maxflow/mincut algorithm

Procedia PDF Downloads 132
1886 China's Middle East Policy and the Competition with the United States

Authors: Shabnam Dadparvar, Laijin Shen

Abstract:

This paper focuses on China's policy in the Middle East and its rivalry with the U.S. The question is: what are the main factors shaping China's Middle East policy and its competition with the U.S.? The hypothesis points to three effective factors: China's energy dependency on the Middle East, the economy, and support for stability in the Middle East. What is important in China's competition with the U.S. regarding its Middle East policy is the substantial difference in the way the two powers treat the countries of the region: China is committed to the Westphalian model, based on non-interference in the internal affairs of countries and respect for the sovereignty of governments, whereas after 9/11 the U.S. has sought a balance between stability and change through intervention in international affairs and, in some cases, has pursued regime change. On the other hand, China, due to its dependency on the region's energy, welcomes America's military presence in the region for providing stability. The authors, using a descriptive analytical method, try to explain the rivalry between China and the United States in the Middle East. China is an 'emerging power' with high economic growth and a demand for a larger energy supply; the problem is that a rising power in the region is often a source of concern for the hegemonic power.

Keywords: China's foreign policy, energy, hegemony, the Middle East

Procedia PDF Downloads 352
1885 Comparison Between Two Techniques (Extended Source to Surface Distance & Field Alignment) Of Craniospinal Irradiation (CSI) In the Eclipse Treatment Planning System

Authors: Naima Jannat, Ariful Islam, Sharafat Hossain

Abstract:

Due to the involvement of a large target volume, Craniospinal Irradiation makes it challenging to achieve a uniform dose, and it requires different isocenters. These isocentric junctions need to shift after every five fractions to overcome the possibility of hot and cold spots. This study aims to evaluate the Planning Target Volume coverage and Organ at Risk sparing between two techniques, and shows that the Field Alignment Technique does not need replanning and resetting. A planning method for Craniospinal Irradiation in the Eclipse treatment planning system was developed for the Field Alignment and Extended Source to Surface Distance techniques, where 36 Gy in 20 fractions at 1.8 Gy per fraction was prescribed. The patient was immobilized in the prone position. In the Field Alignment technique, the plan consists of half-beam-blocked parallel opposed cranium fields and a single posterior cervicospine field sharing the same isocenter, which obviates divergence matching. Further, a single field was created to treat the remaining lumbosacral spine. For matching between the inferior diverging edge of the cervicospine field and the superior diverging edge of the lumbosacral field, the field alignment option was used, which automatically matches the field edge divergence as per the field alignment rule in the Eclipse Treatment Planning System, with the couch set to 270°. In the Extended Source to Surface Distance technique, two parallel opposed fields were created for the cranium, and a single posterior cervicospine field was created where the Source to Surface Distance was 120-140 cm. Dose Volume Histograms were obtained for each contoured organ and for each technique used. In all patients, the maximum dose to the Planning Target Volume was higher for the Extended Source to Surface Distance technique than for the Field Alignment technique. The dose to all surrounding structures was also increased with the use of a single Extended Source to Surface Distance field when compared to the Field Alignment technique. The average mean doses to the eye, brain stem, kidney, oesophagus, heart, liver, lung and ovaries were respectively (58% and 60%), (103% and 98%), (13% and 15%), (10% and 63%), (12% and 16%), (33% and 30%), (14% and 18%), (69% and 61%) for the Field Alignment and Extended Source to Surface Distance techniques. However, the clinical target volume at the spine junction site received a less homogeneous dose with the Field Alignment technique than with the Extended Source to Surface Distance technique. We conclude that, although the single-field Extended Source to Surface Distance technique delivered a more homogeneous dose, its maximum dose is higher than that of the Field Alignment technique. A further advantage of the Field Alignment technique for Craniospinal Irradiation is that it does not need replanning and resetting of the patient after every five fractions, and 95% of the prescribed dose was received by more than 95% of the Planning Target Volume in all the plans, with acceptable hot spots.

Keywords: craniospinalirradiation, cranium, cervicospine, immobilize, lumbosacral spine

Procedia PDF Downloads 116
1884 Green Energy, Fiscal Incentives and Conflicting Signals: Analysing the Challenges Faced in Promoting on Farm Waste to Energy Projects

Authors: Hafez Abdo, Rob Ackrill

Abstract:

Renewable energy (RE) promotion in the UK relies on multiple policy instruments, which are required to overcome the path dependency pressures favouring fossil fuels. These instruments include targeted funding schemes and economy-wide instruments embedded in the tax code. The resulting complexity of incentives raises important questions around the coherence and effectiveness of these instruments for RE generation. This complexity is exacerbated by UK RE policy being nested within EU policy in a multi-level governance (MLG) setting. To gain analytical traction on such complexity, this study will analyse policies promoting the on-farm generation of energy for heat and power, from farm and food waste, via anaerobic digestion. Utilising both primary and secondary data, it seeks to address a particular lacuna in the academic literature. Via a localised, in-depth investigation into the complexity of policy instruments promoting RE, this study will help our theoretical understanding of the challenges that MLG and path dependency pressures present to policymakers of multi-dimensional policies.

Keywords: anaerobic digestion, energy, green, policy, renewable, tax, UK

Procedia PDF Downloads 370
1883 An Iberian Study about Location of Parking Areas for Dangerous Goods

Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio

Abstract:

When lorries transport dangerous goods, there exist legal stipulations in the European Union for ensuring the security of the other road users as well as of the goods being transported. In this respect, lorry drivers cannot park in usual parking areas; they must use parking areas with special conditions, including permanent supervision by security personnel. Moreover, drivers are compelled to satisfy additional regulations about resting and driving times, which constrain the practical possibility of reaching suitable parking areas within these time limits. The European Agreement concerning the International Carriage of Dangerous Goods by Road (ADR) is the basic regulation on the transportation of dangerous goods, imposed under the recommendations of the United Nations Economic Commission for Europe. However, nowadays there are not enough parking areas adapted for dangerous goods, and no complete study has suggested the best locations to build new areas, or to adapt existing ones, so that the necessary areas are provided and lorry drivers can follow all the regulations. The goal of this paper is to show how many additional parking areas should be built in the Iberian Peninsula so that lorry drivers may park in such areas under their restrictions on resting and driving time. To do so, we have modeled the problem via graph theory and have applied a new efficient algorithm which determines an optimal solution for the problem of locating new parking areas to complement those already existing in the ADR for the Iberian Peninsula. The solution can be considered minimal, since the number of additional parking areas returned by the algorithm is minimal. Graph theory is a natural way to model and solve the problem proposed here, because we have considered as nodes the already-existing parking areas, the loading-and-unloading locations and the bifurcations of roads, while each edge between two nodes represents the existence of a road between both nodes (the distance between the nodes being the edge's weight). Except for bifurcations, all the nodes correspond to already-existing parking areas, and hence the problem corresponds to determining the additional nodes in the graph such that there are at most 100 km between two nodes representing parking areas (the maximal distance allowed by the European regulations).
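
A minimal sketch of the graph model described above, with hypothetical node names and distances: nodes are parking areas (or bifurcations), weighted edges are road segments, and additional areas are flagged wherever a single edge would otherwise leave more than 100 km between consecutive parking areas. This greedy per-edge check only illustrates the modelling; it is not the optimal algorithm developed in the paper.

```python
# A minimal sketch of the graph model described above: nodes are existing
# parking areas (or road bifurcations), weighted edges are road segments, and
# new areas are added wherever the distance between consecutive parking areas
# along an edge exceeds the 100 km limit. This greedy per-edge check is an
# illustrative simplification, not the optimal algorithm used in the paper.
import math
import networkx as nx

road_network = nx.Graph()
# (node, node, length in km); the names and distances are hypothetical
road_network.add_weighted_edges_from([
    ("Lisboa", "Badajoz", 230.0),
    ("Badajoz", "Madrid", 400.0),
    ("Madrid", "Zaragoza", 320.0),
    ("Zaragoza", "Barcelona", 310.0),
])

MAX_GAP_KM = 100.0
new_areas = []
for u, v, data in road_network.edges(data=True):
    length = data["weight"]
    if length > MAX_GAP_KM:
        # number of intermediate areas needed so no gap exceeds 100 km
        n_new = math.ceil(length / MAX_GAP_KM) - 1
        new_areas.append((u, v, n_new))

for u, v, n_new in new_areas:
    print(f"{u} - {v}: add {n_new} parking area(s)")
print("total additional areas:", sum(n for *_, n in new_areas))
```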

Keywords: dangerous goods, parking areas, Iberian peninsula, graph-based modeling

Procedia PDF Downloads 580
1882 A Study on Mesh Size Dependency on Bed Expansion Zone in a Three-Phase Fluidized Bed Reactor

Authors: Liliana Patricia Olivo Arias

Abstract:

The present study focuses on the hydrodynamics of a three-phase fluidized bed reactor and on the influence of important aspects such as the volume fractions (hold-up), the velocity magnitudes of the gas, liquid and solid phases (hydrogen, gasoil, and gamma alumina), and the interactions of the phases, through drag models combined with the k-epsilon turbulence model. For this purpose, an Euler-Euler model was employed, which considers the system to be constituted of three phases (gaseous, liquid and solid), each characterized by its physical and thermal properties, with the transport processes developing in the transient regime. The proposed model of the three-phase fluidized bed reactor was solved numerically using the ANSYS Fluent software with different mesh refinements in the bed expansion zone, in order to observe the influence of the mesh on the hydrodynamic parameters and on the convergence criteria. With this model and the numerical simulations obtained for its resolution, it was possible to predict the volume fractions (hold-ups) and the velocity magnitudes for an unsteady system, from the established initial and boundary conditions.

Keywords: three-phase fluidized bed system, CFD simulation, mesh dependency study, hydrodynamic study

Procedia PDF Downloads 166