36214 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. Unilateral laminotomy, however, is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with a reduction in complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. However, no comparative biomechanical analysis has evaluated spinal instability after unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension for the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments of 8 Nm in flexion and 6 Nm in extension. Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully constrained, and a 10 Nm pure moment was applied to the top surface of the L1 vertebral body to drive the lumbar spine through different motions, namely flexion and extension. The experimental results showed that in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. At L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings. No significant differences were found at L4–L5 in either flexion or extension for any group; only 0.02 and 0.04 degrees of variation were observed during flexion and extension, respectively, between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit stability similar to that of unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons in making the final decision.
Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
Procedia PDF Downloads 405
36213 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that interestingly involves their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on the Type-2 Fuzzy HMM (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to suit non-square matrices; it is used here to reduce multi-attribute hand gesture data to feature vectors, as it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. T2FHMMs are therefore able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides achieving better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov model
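The SVD feature-extraction step described above can be sketched in a few lines of NumPy. The image matrix, the number of retained singular values k, and the normalization are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def svd_features(image, k=10):
    """Reduce a 2D hand-image matrix to a k-dimensional feature vector
    built from its k largest singular values (scale-normalized)."""
    # SVD handles non-square matrices, unlike plain eigendecomposition.
    s = np.linalg.svd(image, compute_uv=False)  # singular values, descending
    s = s[:k] / (np.linalg.norm(s[:k]) + 1e-12) # keep top-k, normalize scale
    return np.pad(s, (0, max(0, k - s.size)))   # pad if the image is tiny

# Example on a synthetic 64x48 "image"
rng = np.random.default_rng(0)
img = rng.random((64, 48))
f = svd_features(img, k=10)
print(f.shape)  # (10,)
```

Because the singular values are returned in descending order, the resulting feature vector is ordered by how much geometric structure each component captures.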
Procedia PDF Downloads 466
36212 Initiative Strategies on How to Increase Value Add of the Recycling Business
Authors: Yananda Siraphatthada
Abstract:
The current study is the successor of a previous study on the value added of recycling business management. Its aims are to 1) explore conditions for increasing the value added of the Thai recycling business, and 2) examine the implementation of the three-staged plan (short, medium, and long term) suggested by the former study to increase the value added of the recycling business as an immediate mechanism to accelerate government operation. Quantitative and qualitative methods were utilized in this research. The qualitative research consisted of in-depth interviews and focus group discussions. Responses were obtained from owners of waste separation plants and recycle shops, as well as officers in relevant governmental agencies, selected via quota sampling. Data were analyzed via content analysis. The sample used for the quantitative method consisted of 1,274 licensed recycling operators in eight provinces, selected via stratified random sampling. Data were analyzed via descriptive statistics: frequency, percentage, mean, and standard deviation. The study recommended a three-staged plan of short, medium, and long terms. The plan included the development of logistics, the provision of quality markets/plants, the amendment of recycling rules/regulations, the restructuring of the recycling business, the establishment of a green-purchasing recycling center, support for the campaigns run by the International Green Purchasing Network (IGPN), and conferences/workshops as a public forum to share insights among experts and concerned people.
Keywords: strategies, value added, recycle, business
Procedia PDF Downloads 249
36211 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships
Authors: Elizabeth Latham
Abstract:
Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what could be known about the exact social nature of these relationships and to see if they mimic the interactions recounted in the Bible; if a worldwide majority believes that the Christian Bible is a true account of God’s interactions with mankind, it is reasonable to assume that the interactions between God and these people would be similar to the ones in the Bible. This analysis required the employment of an unusual method of biblical analysis: game theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the Bible to set up “games” using economics-style matrices featuring each player’s motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar “games” created based on interviews with people subscribing to different levels of Christianity, ranging from barely practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the Bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as a social interaction: a “game” to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the “games” between God and man in the Bible.
This similarity was referred to in the study as “biblical faith”, and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the degree of game-similarity did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education, and scriptural knowledge, and those who had biblical faith were not necessarily the ones who knew the Bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture, a surprising insight with massive implications for the practice and preservation of religion.
Keywords: Bible, Christianity, game theory, social psychology
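An economics-style matrix of the kind the study describes can be sketched as a small normal-form game. The action labels and payoff numbers below are purely illustrative assumptions (the study's actual matrices were built from scripture and interviews); the helper finds pure-strategy equilibria where neither party gains by deviating:

```python
import numpy as np

# Hypothetical 2x2 "game": the row player (mortal) chooses Obey/Disobey,
# the column player responds Favor/Withhold. Payoffs are invented here.
mortal = np.array([[3, 1],    # rows: Obey, Disobey
                   [2, 0]])
deity  = np.array([[3, 0],
                   [1, 1]])

def pure_nash(a, b):
    """Return (row, col) cells where neither player gains by deviating."""
    eq = []
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            if a[i, j] >= a[:, j].max() and b[i, j] >= b[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(mortal, deity))  # [(0, 0)] -> (Obey, Favor)
```

Comparing the equilibrium structure of a biblical matrix with that of an interview-derived matrix is one way the "game-similarity" notion above can be made quantitative.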
Procedia PDF Downloads 160
36210 Anaerobic Digestion Batch Study of Taxonomic Variations in Microbial Communities during Adaptation of Consortium to Different Lignocellulosic Substrates Using Targeted Sequencing
Authors: Priyanka Dargode, Suhas Gore, Manju Sharma, Arvind Lali
Abstract:
Anaerobic digestion has been widely used for the production of methane from different biowastes. However, the complexity of the microbial communities involved in the process is poorly understood. The productivity of a biogas production process is closely coupled to its microbial community structure and the syntrophic interactions amongst the community members. The present study aims at understanding the taxonomic variations occurring in a starter inoculum when acclimatised to different lignocellulosic biomass (LBM) feedstocks over the time of digestion. The work underlines the use of high-throughput Next Generation Sequencing (NGS) for validating the changes in taxonomic patterns of microbial communities. Biomethane Potential (BMP) batches were set up with different pretreated and non-pretreated LBM residues using the same microbial consortium, and samples were withdrawn for studying the changes in the microbial community in terms of its structure and predominance with respect to changes in the metabolic profile of the process. DNA of samples withdrawn at different time intervals, with reference to performance changes of the digestion process, was extracted, followed by 16S rRNA amplicon sequencing analysis on the Illumina platform. Biomethane potential and substrate consumption were monitored using gas chromatography (GC) and reduction in COD (Chemical Oxygen Demand), respectively. Taxonomic analysis of QIIME server data revealed that microbial community structure changes with different substrates as well as at different time intervals. It was observed that the biomethane potential of each substrate was relatively similar, but the time required for substrate utilization and its conversion to biomethane differed among substrates. This could be attributed to the nature of the substrate and, consequently, the discrepancy between the dominance of microbial communities with regard to different substrates and different phases of the anaerobic digestion process.
Knowledge of the microbial communities involved would allow a rational, substrate-specific consortium design, which would help to reduce the consortium adaptation period and enhance substrate utilisation, resulting in improved efficacy of the biogas process.
Keywords: amplicon sequencing, biomethane potential, community predominance, taxonomic analysis
Procedia PDF Downloads 538
36209 Analysis of Tilting Cause of a Residential Building in Durres by the Use of Cptu Test
Authors: Neritan Shkodrani
Abstract:
On November 26, 2019, an earthquake hit the central western part of Albania. It was assessed as Mw 6.4, and its epicenter was located offshore northwest of Durrës, about 7 km north of the city. In this paper, the consequences of settlements of very soft soils are discussed for the case of a residential building, referred to as the “K Building”, which suffered significant tilting after the earthquake. The “K Building” is an RC framed building with 12+1 (basement) stories and a floor area of 21,000 m². The construction of the building was completed in 2012. The “K Building”, located in the city of Durrës, suffered severe non-structural damage during the November 26, 2019, Durrës earthquake sequence. During the on-site inspections immediately after the earthquake, the general condition of the building, the presence of observable settlements in the ground, and the crack situation in the structure were determined, and damage inspections were performed. It was significant to note that the “K Building” presented tilting that might be attributed, as was believed at the beginning, partially to the failure of the columns of the ground floor and partially to liquefaction phenomena, but it did not collapse. At first it was not clear whether the foundation had experienced a bearing capacity failure or had failed because of soil liquefaction. Geotechnical soil investigations using the CPTU test were executed, and their data are used to evaluate the bearing capacity, the consolidation settlement of the mat foundation, and soil liquefaction, since these were believed to be the main reasons for the building's tilting. The geotechnical soil investigation consisted of five static cone penetration tests with pore pressure measurement (piezocone tests). They reached penetration depths of 20.0 m to 30.0 m and clearly showed the presence of very soft and organic soils in the soil profile of the site.
Geotechnical CPT-based analyses of bearing capacity, consolidation, and secondary settlement were applied, and results are reported for each test. These results showed very small values of allowable bearing capacity and very high values of consolidation and secondary settlement. Liquefaction analysis based on the data of the CPTU tests and the characteristics of ground shaking of the mentioned earthquake showed the possibility of liquefaction for some layers of the considered soil profile, but the estimated vertical settlements are small and clearly indicate that the main reason for the building's tilting was not related to the consequences of liquefaction; rather, it was an existing settlement caused by the applied bearing pressure of the building. All the CPTU tests were carried out in August 2021, almost two years after the November 26, 2019, Durrës earthquake, and after the building itself had been demolished. After removal of the mat foundation in September 2021, it was possible to carry out CPTU tests even on the footprint of the existing building, which made it possible to observe the effects of the long-term application of foundation bearing pressure on the consolidation of the considered soil profile.
Keywords: bearing capacity, cone penetration test, consolidation settlement, secondary settlement, soil liquefaction
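The demand side of a simplified liquefaction check of the kind used with CPTU profiles can be sketched as the classic cyclic stress ratio (CSR) calculation. The depths, unit weight, water-table depth, and peak acceleration below are illustrative assumptions, not the Durrës site data:

```python
# Minimal sketch of the CSR step of a simplified (Seed-Idriss type)
# liquefaction analysis: CSR = 0.65 (a_max/g) (sigma_v / sigma'_v) rd.

def rd(z):
    """Liao-Whitman stress-reduction coefficient (z in m, valid to ~23 m)."""
    return 1.0 - 0.00765 * z if z <= 9.15 else 1.174 - 0.0267 * z

def csr(z, gamma=18.0, gw=2.0, a_max_g=0.25):
    """CSR at depth z (m). gamma: total unit weight (kN/m3);
    gw: water-table depth (m); a_max_g: peak acceleration as a g-fraction."""
    sigma_v = gamma * z                      # total vertical stress, kPa
    u = 9.81 * max(0.0, z - gw)              # hydrostatic pore pressure, kPa
    sigma_v_eff = sigma_v - u                # effective vertical stress, kPa
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd(z)

for z in (3.0, 6.0, 12.0):
    print(f"z = {z:4.1f} m  CSR = {csr(z):.3f}")
```

In a full CPTU-based analysis, each layer's CSR would then be compared with a cyclic resistance ratio derived from the corrected cone tip resistance to flag potentially liquefiable layers.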
Procedia PDF Downloads 100
36208 Mapping Thermal Properties Using Resistivity, Lithology and Thermal Conductivity Measurements
Authors: Riccardo Pasquali, Keith Harlin, Mark Muller
Abstract:
The ShallowTherm project is focussed on developing and applying a methodology for extrapolating relatively sparsely sampled thermal conductivity measurements across Ireland using mapped Litho-Electrical (LE) units. The primary data used consist of electrical resistivities derived from the Geological Survey Ireland Tellus airborne electromagnetic dataset, GIS-based maps of Irish geology, and rock thermal conductivities derived from both the current Irish Ground Thermal Properties (IGTP) database and a new programme of sampling and laboratory measurement. The workflow has been developed across three case-study areas that sample a range of different calcareous, arenaceous, argillaceous, and volcanic lithologies. Statistical analysis of resistivity data from individual geological formations has been assessed and integrated with detailed lithological descriptions to define distinct LE units. Thermal conductivity measurements from core and hand samples have been acquired for every geological formation within each study area. The variability and consistency of thermal conductivity measurements within each LE unit is examined with the aim of defining a characteristic thermal conductivity (or range of thermal conductivities) for each LE unit. Mapping of LE units, coupled with characteristic thermal conductivities, provides a method of defining thermal conductivity properties at a regional scale and facilitating the design of ground source heat pump closed-loop collectors.
Keywords: thermal conductivity, ground source heat pumps, resistivity, heat exchange, shallow geothermal, Ireland
Procedia PDF Downloads 189
36207 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach
Authors: Nina Ponikvar, Katja Zajc Kejžar
Abstract:
While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches, i.e., either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature, and it is often not clear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e., the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, while the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary.
Consequently, a direct comparison of the estimated impacts is not possible and should not be made, due to the different scope. To illustrate the difference in impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on the value chain capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value chain approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia
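The mechanics behind multiplier estimates like "each euro generates an additional 0.14 euros in indirect effects" come from input-output analysis: output multipliers are column sums of the Leontief inverse. The two-sector coefficient matrix below is a toy illustration, not the Slovenian data used in the paper:

```python
import numpy as np

# Toy two-sector input-output table: A holds technical coefficients
# (intermediate input per euro of output). Column 0 stands in for the
# cultural heritage sector; the numbers are invented for illustration.
A = np.array([[0.10, 0.04],
              [0.20, 0.15]])

L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse (I - A)^-1
output_multipliers = L.sum(axis=0)    # column sums: total output per 1 EUR
indirect = output_multipliers - 1.0   # effect beyond the initial euro
print(indirect)
```

Induced effects, the additional 0.44 euros mentioned above, would be obtained the same way from a "closed" model that adds a household row and column to A; the value chain approach instead traces heritage-linked turnover through other sectors directly.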
Procedia PDF Downloads 81
36206 A Study on the Performance of 2-PC-D Classification Model
Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli
Abstract:
There are many applications of the principal component method for reducing a large set of variables in various fields. Fisher's discriminant function is also a popular tool for classification. In this research, the researcher focuses on studying the performance of the principal component-Fisher's discriminant function in helping to classify rice kernels into their defined classes. The data were collected on the smells, or odour, of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combination model, between principal components and the linear discriminant, performs as a classification model. The principal component method was used to reduce all 32 variables to a smaller and more manageable set of components. Then, the reduced components were used to develop the Fisher's discriminant function. In this research, there are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. Based on the output from the principal component method, the 32 variables were reduced to only 2 components. Based on the output of the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function. Indirectly, this indicates that the classification model developed misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into its defined classes.
Keywords: classification model, discriminant function, principal component analysis, variable reduction
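The 2-PC-D pipeline, PCA down to two components followed by a linear discriminant on those components, can be sketched with NumPy alone. The synthetic 32-variable, 4-class data below stands in for the e-nose readings and is an invented assumption; the discriminant here is the standard pooled-covariance linear discriminant score:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 32-variable e-nose data: 4 classes.
n_per, n_var, n_cls = 30, 32, 4
means = rng.normal(0, 4.0, (n_cls, n_var))
X = np.vstack([rng.normal(means[c], 1.0, (n_per, n_var)) for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)

# Step 1: PCA -- keep the first 2 principal components ("2-PC").
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                        # scores on PC1, PC2

# Step 2: linear discriminant on the 2 PCs (pooled within-class covariance).
mu = np.array([Z[y == c].mean(axis=0) for c in range(n_cls)])
S = sum(np.cov(Z[y == c].T) for c in range(n_cls)) / n_cls
Si = np.linalg.inv(S)
scores = Z @ Si @ mu.T - 0.5 * np.einsum('ij,jk,ik->i', mu, Si, mu)
pred = scores.argmax(axis=1)
print(f"training classification rate: {np.mean(pred == y):.2%}")
```

Running the same pipeline on data whose class structure needs more than two directions, as the abstract's 40.76% rate suggests for the rice data, shows how much information the 32-to-2 reduction can discard.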
Procedia PDF Downloads 337
36205 Boundary Conditions for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It has been observed from past earthquakes that local site conditions can significantly affect strong ground motion characteristics such as the frequency content, amplitude, and duration of seismic waves. The most common method for investigating site response is one-dimensional seismic site response analysis. An infinite horizontal extent of the model and homogeneous soil characteristics are crucial assumptions of this method. One boundary condition that can be used at the sides is tying the sides horizontally for vertical 1D wave propagation. However, 1D analysis cannot account for the 2D nature of wave propagation when the soil profile is not fully horizontal or has heterogeneity within layers. Therefore, 2D seismic site response analysis can be used to overcome all of these limitations for a better understanding of local site conditions. Different types of boundary conditions can be applied in 2D site response models, such as the tied boundary condition, massive columns, and the free-field boundary condition. The tied boundary condition has been used in 1D analysis, where it is useful for 1D wave propagation. Employing two massive columns at the sides is another approach for capturing the 2D nature of wave propagation. The free-field boundary condition simulates the free-field motion that would exist far from the domain of interest; its goal is to minimize unwanted reflections from the sides. This research focuses on the comparison between these methods with examples and discusses the details and limitations of each of these boundary conditions.
Keywords: boundary condition, free-field, massive columns, OpenSees, site response analysis, wave propagation
Procedia PDF Downloads 191
36204 An Automatic Bayesian Classification System for File Format Selection
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimate of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format to the expert for their institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
Keywords: data mining, digital libraries, digital preservation, file format
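The naive Bayes step over a format vocabulary can be sketched with the standard library alone. The three training descriptions and the query are invented examples, not the paper's aggregated Web-sourced knowledge base:

```python
from collections import Counter
import math

# Toy sketch: classify an unstructured format description with naive
# Bayes over word counts, using Laplace (add-one) smoothing.
train = [
    ("pdf",  "portable document page layout print archival"),
    ("tiff", "raster image lossless tag bitmap archival scan"),
    ("csv",  "plain text tabular comma delimited data rows"),
]

counts = {fmt: Counter(text.split()) for fmt, text in train}
vocab = {w for c in counts.values() for w in c}

def classify(description):
    """Pick the format maximizing log P(format) + sum log P(word|format)."""
    best, best_lp = None, -math.inf
    for fmt, c in counts.items():
        total = sum(c.values())
        lp = math.log(1 / len(counts))            # uniform prior
        for w in description.split():
            lp += math.log((c[w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = fmt, lp
    return best

print(classify("lossless raster image for archival scan"))  # tiff
```

In the paper's setting, the per-format log-probabilities themselves (not just the argmax) would be shown to the preservation expert as the recommendation's supporting evidence.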
Procedia PDF Downloads 501
36203 Decoding WallStreetBets: The Impact of Daily Disagreements on Trading Volumes
Authors: F. Ghandehari, H. Lu, L. El-Jahel, D. Jayasuriya
Abstract:
Disagreement among investors is a fundamental aspect of financial markets, significantly influencing market dynamics. Measuring this disagreement has traditionally posed challenges, often relying on proxies like analyst forecast dispersion, which are limited by biases and infrequent updates. Recent movements on social media indicate that retail investors actively seek financial advice online and can influence the stock market. The evolution of the investing landscape, particularly the rise of social media as a hub for financial advice, provides an alternative avenue for real-time measurement of investor sentiment and disagreement. Platforms like Reddit offer rich, community-driven discussions that reflect genuine investor opinions. This research explores how social media empowers retail investors and the potential of leveraging textual analysis of social media content to capture daily fluctuations in investor disagreement. The study investigates the relationship between daily investor disagreement and trading volume, focusing on the role of social media platforms in shaping market dynamics, specifically using data from WallStreetBets (WSB) on Reddit. The paper uses WSB data from 2020 to 2023 and analyses 4,896 firms with enough social media activity in WSB to define stock-day level disagreement measures. Consistent with traditional theories that disagreement induces trading volume, the results show significant evidence supporting this claim across different disagreement measures derived from WSB discussions.
Keywords: disagreement, retail investor, social finance, social media
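One common way to turn classified message counts into a stock-day disagreement measure (in the spirit of Antweiler-Frank style sentiment measures; the abstract does not specify which measures it uses, so this is an illustrative assumption) is to score how evenly bullish and bearish posts are split:

```python
# Disagreement from classified post counts: 0 means full consensus,
# 1 means a perfectly even bull/bear split. Counts below are made up.
def disagreement(n_bull, n_bear):
    """1 - |bull - bear| / (bull + bear), or None with no opinionated posts."""
    total = n_bull + n_bear
    if total == 0:
        return None
    return 1.0 - abs(n_bull - n_bear) / total

print(disagreement(40, 40))  # 1.0  (even split: maximal disagreement)
print(disagreement(70, 10))  # 0.25 (strong consensus)
```

A measure of this shape, computed per stock-day from classified WSB posts, could then be regressed against that day's trading volume.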
Procedia PDF Downloads 44
36202 Tuning Cubic Equations of State for Supercritical Water Applications
Authors: Shyh Ming Chern
Abstract:
Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature, and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoS's around the critical area has been examined against the P-v-T data of water. Both display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure, and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can improve its performance above the critical point dramatically. Adopting a retuned acentric factor of 0.5491, instead of its genuine value of 0.344, for water in the PR EoS and a new F of 0.8854, instead of its original value of 0.6898, for water in the PT EoS reduces the discrepancies to about one third or less.
Keywords: equation of state, EoS, supercritical water, SCW
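The acentric factor enters the PR EoS only through the alpha(T) function, so the retuning described above can be sketched directly from the standard PR parameter definitions. The example temperature of 700 K is an arbitrary supercritical choice; the retuned value 0.5491 is the abstract's, the rest are the textbook PR constants:

```python
import math

# Peng-Robinson a(T) and b for water, showing how the abstract's retuned
# acentric factor (0.5491 vs the genuine 0.344) changes a(T) above Tc.
R, Tc, Pc = 8.314, 647.1, 22.064e6   # J/(mol K), K, Pa

def pr_params(T, w):
    kappa = 0.37464 + 1.54226 * w - 0.26992 * w**2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha   # J m^3 / mol^2
    b = 0.07780 * R * Tc / Pc                 # m^3 / mol
    return a, b

for w in (0.344, 0.5491):            # genuine vs retuned acentric factor
    a, b = pr_params(700.0, w)       # an arbitrary supercritical T
    print(f"w = {w:6.4f}  a = {a:.4f} J*m3/mol2  b = {b:.3e} m3/mol")
```

Because sqrt(T/Tc) > 1 above the critical point, a larger acentric factor (larger kappa) pulls alpha, and hence the attraction parameter a, down in the supercritical region, which is the lever the retuning exploits; b is unaffected.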
Procedia PDF Downloads 544
36201 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry
Authors: Dominik Halstrup, Marlene Schriever
Abstract:
Industry, as one of the main energy consumers, is of critical importance in transforming the energy system toward renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of electric energy storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey amongst 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (Dedicated) User, Impeded User, Forced User, and (Dedicated) Non-User. Both descriptive and bivariate analyses are deployed to identify the level of acceptance in the different organizations. After a factor analysis was conducted, variables were grouped to form independent acceptance factors. Of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS and can therefore be considered ‘Dedicated Users’. The remaining 17 organizations have a positive attitude but have not yet implemented ESS. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. These are examples of ‘Forced Users’, where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered ‘(Dedicated) Non-Users’.
The results indicate that the majority of the respondents have not yet thought through ESS in their own organization. For the majority of the sample, one can therefore not speak of critical distance but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the current state of acceptance of ESS in the manufacturing industry, as well as present barriers and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channelled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favour existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission and distribution are becoming increasingly intertwined. New technologies and business models entering the market, such as ESS or cooperative arrangements, increase the number of stakeholders, and organizations need to find their place within this array of stakeholders. Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry
Procedia PDF Downloads 227
36200 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study
Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang
Abstract:
Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g. IP, is associated with greater brain activity as measured by a higher blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI) in the sensorimotor areas of the brain. In this study, resting-state fMRI was applied to investigate whether an increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. Twenty-six healthy adults (11M/15F, age = 23.3±1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over 2 days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data were decomposed using group-level spatial independent component analysis (ICA), yielding 9 independent components (ICs) matched to the precuneus network, two primary visual networks (I and II), two sensorimotor networks (I and II), the right and left frontoparietal networks, the occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from both within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc × nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ). 
Here pᵢⱼ is the extremum of the cross-correlation between ICs i and j, and nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected to the voxel space, with the weight for each voxel set to the z-score representing the above within-network functional connectivity. The wRSFC map thus incorporates the global characteristics of brain networks, measured by the between-network connectivity, and the spatial information contained in the IC maps, measured by the within-network connectivity. Pearson correlation analysis revealed that a greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC) and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice, and as such wRSFC could be applied as a biomarker of the beneficial effects of desirable difficulty on motor sequence learning. Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks
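The wRSFC construction described above can be sketched in code. This is an illustrative reading of the abstract, not the authors' implementation: the signed extremum of each pairwise cross-correlation fills the between-network matrix C, and one simple way to project C to voxel space is to weight each voxel's IC z-scores by the connection strength of the corresponding component (the exact projection used in the study may differ).

```python
import numpy as np

def wrsfc(ic_timecourses, ic_maps):
    """Sketch of a weighted resting-state functional connectivity map.

    ic_timecourses : (n_c, n_t) array of IC time courses
    ic_maps        : (n_c, n_vox) array of IC spatial z-score maps
    Returns a (n_vox,) wRSFC map.
    """
    n_c = ic_timecourses.shape[0]
    # Between-network connectivity: signed extremum (largest |r|) of the
    # normalized cross-correlation between every pair of IC time courses.
    C = np.zeros((n_c, n_c))
    for i in range(n_c):
        for j in range(n_c):
            a = ic_timecourses[i] - ic_timecourses[i].mean()
            b = ic_timecourses[j] - ic_timecourses[j].mean()
            xc = np.correlate(a, b, mode="full") / (len(a) * a.std() * b.std())
            C[i, j] = xc[np.argmax(np.abs(xc))]
    # Project the component-wise matrix to voxel space: each voxel gets the
    # z-weighted sum of its components' total (off-diagonal) connection
    # strengths. This projection is one plausible reading of the abstract.
    row_strength = np.abs(C).sum(axis=1) - 1.0  # drop the self-correlation
    return ic_maps.T @ row_strength             # shape (n_vox,)
```

Usage is straightforward: pass the ICA time courses and spatial maps, and threshold or correlate the returned voxel map against behavioural measures.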
Procedia PDF Downloads 208
36199 System Detecting Border Gateway Protocol Anomalies Using Local and Remote Data
Authors: Alicja Starczewska, Aleksander Nawrat, Krzysztof Daniec, Jarosław Homa, Kacper Hołda
Abstract:
Border Gateway Protocol (BGP) is the main routing protocol that enables routing between all autonomous systems, the basic administrative units of the internet. Because BGP itself is poorly protected, it is important to use additional BGP security systems. Many solutions to this problem have been proposed over the years, but none of them has been implemented on a global scale. This article describes a system capable of building an image of the real-time BGP network topology in order to detect BGP anomalies. Our proposal performs a detailed analysis of BGP messages arriving at local network interfaces, supplemented by information collected by remote collectors in different locations. Keywords: BGP, BGP hijacking, cybersecurity, detection
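One concrete anomaly that topology-tracking systems of this kind commonly flag is a sudden change of the originating AS for a prefix, a classic hijack signature. The following is a minimal illustrative sketch under an assumed (prefix, AS-path) message format; it is not the paper's detector, which fuses local interface captures with remote collector feeds.

```python
def detect_origin_changes(updates):
    """Flag prefixes whose originating AS changes across a stream of
    BGP UPDATE messages. Each update is a (prefix, as_path) tuple,
    where as_path is a list of AS numbers and as_path[-1] is the
    origin AS (the AS that announced the prefix).
    Returns a list of (prefix, old_origin, new_origin) alerts.
    """
    origin = {}   # prefix -> last observed origin AS
    alerts = []
    for prefix, as_path in updates:
        new_origin = as_path[-1]
        if prefix in origin and origin[prefix] != new_origin:
            alerts.append((prefix, origin[prefix], new_origin))
        origin[prefix] = new_origin
    return alerts
```

A real system would additionally validate origins against RPKI or IRR data before alerting, since legitimate multi-origin announcements exist.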
Procedia PDF Downloads 82
36198 A Lean Manufacturing Profile of Practices in the Metallurgical Industry: A Methodology for Multivariate Analysis
Authors: M. Jonathan D. Morales, R. Ramón Silva
Abstract:
The purpose of this project is to analyse and determine the profile of actual lean manufacturing practices in the Metropolitan Area of Bucaramanga. Through the analysis of qualitative and quantitative variables, it was possible to establish how these manufacturers develop production practices that ensure their competitiveness and productivity in the market. A random sample of metallurgic and wrought-iron companies was drawn, after which a quantitative analysis was used to formulate a qualitative methodology for measuring the level of lean manufacturing procedures in the industry. A qualitative evaluation was also carried out through a multivariate analysis using the Numerical Taxonomy System (NTSYS) program, which allowed the determination of lean manufacturing profiles. The results made it possible to observe how the companies in the sector perform with respect to lean manufacturing practices, as well as to identify the level of management that these companies exercise with respect to this topic. In addition, it was possible to ascertain that there is no single dominant profile in the sector when it comes to lean manufacturing. It was established that companies in the metallurgic and wrought-iron industry show low levels of lean manufacturing implementation. Each one carries out diverse actions that are insufficient to consolidate a sectoral strategy for developing a competitive advantage that would enable them to tie together a production strategy. Keywords: production line management, metallurgic industry, lean manufacturing, productivity
Procedia PDF Downloads 464
36197 Doing Bad for a Greater Good: Moral Disengagement in Social and Commercial Entrepreneurial Contexts
Authors: Thorsten Auer, Sumaya Islam, Sabrina Plaß, Colin Wooldridge
Abstract:
Whether individuals are more likely to forgo some ethical values when it is for a “great” social mission remains an open question. Research interest in the mechanism of moral disengagement has risen sharply in the organizational context over the last decades. Moral disengagement provides an explanatory approach to why individuals decide against their moral intent; it describes the tendency to make unethical decisions due to a lack of self-regulation given various actions and their consequences. In our study, we examine the differences in individual decision-making between a commercial and a social entrepreneurial context. Thereby, we investigate whether individuals in a social entrepreneurial context, characterized by pro-social goals and a purpose beyond profit maximization, tend to make more or fewer “unethical” decisions in trade-off situations than those in a profit-focused commercial entrepreneurial context. While a general priming effect may explain a tendency for individuals to make fewer unethical decisions in a social context, it remains unclear how individuals decide when facing a trade-off in that specific context. The trade-off in our study is characterized by the option to decide (un)ethically to advance the business purpose (a social purpose in the social context, profit maximization in the commercial context). To investigate which characteristics of the context, and specifically of a trade-off, lead individuals to disregard and override their ethical values for a “greater good”, we design a conjoint analysis. This approach allows us to vary the attributes and scenarios and to test which attributes of a trade-off increase the probability of making an unethical choice. We add survey data to examine the individual propensity to morally disengage as a factor influencing the preference for certain attributes. We are currently finalising the design of the conjoint analysis and plan to conduct the study by December 2022. 
We contribute to a better understanding of the role of moral disengagement in individual decision-making in a (social) entrepreneurial trade-off. Keywords: moral disengagement, social entrepreneurship, unethical decision, conjoint analysis
Procedia PDF Downloads 89
36196 Analysis of Speaking Skills in Turkish Language Acquisition as a Foreign Language
Authors: Lokman Gozcu, Sule Deniz Gozcu
Abstract:
This study aims to analyse speaking skills in the acquisition of Turkish as a foreign language. One of the most important goals for an individual learning a foreign language is to succeed in oral communication (speaking) and to interact in an understandable way. Speaking skill requires much more time and effort than the other language skills. It is therefore necessary to analyse these oral communication skills, which are important in acquiring Turkish as a foreign language, and to draw up a road map according to the results. The aim of this study is to determine the speaking competence and attitudes of individuals who learn Turkish as a foreign language with respect to elements of speaking skill such as grammar, stress, intonation, body language, speed, ordering, accuracy, fluency and pronunciation, and to offer conclusions and suggestions based on these determinations. A mixed method was chosen for data collection and analysis. A Likert scale (for competence and attitude) was administered to 190 individuals, and 22 randomly selected participants were interviewed face-to-face (for speaking skills) using a semi-structured interview form. In addition, an observation form for the 22 interviewed participants was completed by the researcher during the interviews, and after all the voice recordings had been collected, they were analysed with the speaking skills evaluation scale. The results of the research revealed various aspects of the speaking skills of individuals who learn Turkish as a foreign language. According to the results, the most inadequate aspects of the participants' ability to speak Turkish include vocabulary, using humorous elements while speaking, incorporating items such as idioms and proverbs, and fluency, respectively. 
In addition, the participants were found not to feel comfortable while speaking Turkish, to feel ridiculous, and to be nervous while speaking in formal settings. Conclusions and suggestions are offered for the situations identified by the analyses. Keywords: learning Turkish as a foreign language, proficiency criteria, phonetic (modalities), speaking skills
Procedia PDF Downloads 248
36195 Mapping of Adrenal Gland Diseases Research in Middle East Countries: A Scientometric Analysis, 2007-2013
Authors: Zahra Emami, Mohammad Ebrahim Khamseh, Nahid Hashemi Madani, Iman Kermani
Abstract:
The aim of the study was to map scientific research on adrenal gland diseases in the Middle East countries through the Web of Science database using scientometric analysis. Data were analyzed with Excel software, and HistCite was used for mapping the scientific texts. In this study, from a total of 268 retrieved records, 1125 authors from 328 institutions published their texts in 138 journals. Among 17 Middle East countries, Turkey ranked first with 164 documents (61.19%), Israel ranked second with 47 documents (15.53%), and Iran came in third place with 26 documents. Most of the publications (185 documents, 69.2%) were articles. Among the universities of the Middle East, Istanbul University had the highest science production rate (9.7%). The Journal of Clinical Endocrinology & Metabolism had the highest TGCS (243 citations). In the scientific mapping, 7 clusters were formed based on TLCS (Total Local Citation Score) and TGCS (Total Global Citation Score). Considering the study results, the establishment of scientific connections and collaboration with other countries, together with the use of publications on adrenal gland diseases from high-ranking universities, can help develop this field and promote medical practice in this regard. Moreover, investigation of the clusters formed in relation to congenital hyperplasia and puberty-related disorders can indicate research priorities for investigators. Keywords: mapping, scientific research, adrenal gland diseases, scientometric
Procedia PDF Downloads 276
36194 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem in recommendation systems, arising when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to reach an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST exploits their complementary strengths. Empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates an increase of over 16% in user clicks at one point. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts. Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference
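The central idea, an accurate but slow posterior update while data is scarce, switching to a cheap streaming update once the observation count grows, can be sketched as a dispatcher. The Gaussian rank-one updates below are simple stand-ins, not the paper's logistic-bandit moment projections, and the switch threshold is a hypothetical parameter.

```python
import numpy as np

class SwitchingPosterior:
    """Sketch of FAB-COST's switching strategy: an 'EP-like' regime for
    small n, an 'ADF-like' regime for large n. Both regimes here share
    the same Kalman-style conjugate update; in FAB-COST they differ in
    accuracy and computational cost."""

    def __init__(self, dim, switch_at=1000):
        self.mu = np.zeros(dim)     # posterior mean
        self.cov = np.eye(dim)      # posterior covariance
        self.n = 0                  # observations seen so far
        self.switch_at = switch_at  # hypothetical switch point

    def mode(self):
        return "EP" if self.n < self.switch_at else "ADF"

    def update(self, x, y, noise=1.0):
        # Rank-one Gaussian conjugate update for observation y ~ x·w + ε.
        s = float(x @ self.cov @ x) + noise
        gain = self.cov @ x / s
        self.mu = self.mu + gain * (y - x @ self.mu)
        self.cov = self.cov - np.outer(gain, x @ self.cov)
        self.n += 1

    def sample(self, rng):
        # Thompson sampling: draw a parameter vector from the posterior.
        return rng.multivariate_normal(self.mu, self.cov)
```

In a bandit loop, `sample` would drive arm selection and `update` would absorb each observed click/no-click, with `mode` deciding which (real) update routine to run.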
Procedia PDF Downloads 111
36193 Heating Demand Reduction in Single Family Houses Community through Home Energy Management: Putting Users in Charge
Authors: Omar Shafqat, Jaime Arias, Cristian Bogdan, Björn Palm
Abstract:
Heating constitutes a major part of the overall energy consumption in Sweden. In 2013, heating and hot water accounted for about 55% of the total energy use in the housing sector. Historically, end users have not been able to make a significant impact on their consumption on account of traditional control systems that do not facilitate interaction with and control of the heating systems. In recent years, however, internet-connected home energy management systems have become increasingly available, allowing users to visualize indoor temperatures as well as control the heating system. Adoption of these systems is nonetheless still in its nascent stages. This paper presents the outcome of a study carried out in a community of single-family houses in Stockholm. Heating in the area is provided through district heating, and the neighbourhood is connected through a local micro thermal grid, which is owned and operated by the local community. Heating in the houses is accomplished through a hydronic system equipped with radiators. The installed system allows households to control the indoor temperature through a mobile application as well as through a physical thermostat. It was also possible to program the system to, for instance, lower the temperatures at night and when the users were away. Users could also monitor indoor temperatures through the application, and different zones with individual programming could be created within the house. Historical heating data (in the form of billing data) were available for several previous years and were used for the quantitative analysis of the study, after the necessary normalization for weather variations. The experiment involved 30 households out of a community of 178 houses. The area was selected due to its uniform construction profile. 
It was observed that, despite similar design and construction period, there was a large variation in heating energy consumption in the area, which can to a large part be attributed to user behaviour. The paper also presents a qualitative analysis based on survey questions as well as a focus group carried out with the participants. Overall, considerable energy savings were accomplished during the trial; however, there was considerable variation between the participating households. The paper additionally presents recommendations for improving the impact of home energy management systems for heating, in terms of improving user engagement and hence the energy impact. Keywords: energy efficiency in buildings, energy behavior, heating control system, home energy management system
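The abstract does not specify how billing data were normalized for weather variations; a common approach for Swedish heating data, assumed here purely for illustration, is heating-degree-day correction, which scales a period's consumption by the ratio of a normal year's degree days to the actual period's.

```python
def degree_days(daily_mean_temps, base=17.0):
    """Heating degree days: sum of (base - T) over days colder than the
    base temperature. A base of 17 degrees C is commonly used in Sweden;
    treat both the base and this method as assumptions, not the paper's."""
    return sum(max(base - t, 0.0) for t in daily_mean_temps)

def normalize_heating(consumption_kwh, actual_hdd, normal_hdd):
    """Scale a period's heating consumption to a 'normal' climate year,
    so that consumption across years with different weather is comparable."""
    return consumption_kwh * normal_hdd / actual_hdd
```

With this, a mild winter's low bill and a harsh winter's high bill are brought onto the same climatic footing before comparing households.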
Procedia PDF Downloads 175
36192 Effect of Poultry Manure and Nitrogen, Phosphorus, and Potassium (15:15:15) Soil Amendment on Growth and Yield of Carrot (Daucus carota)
Authors: Benjamin Osae Agyei, Hypolite Bayor
Abstract:
The present experiment was carried out during the 2012 cropping season at the Farming for the Future Experimental Field of the University for Development Studies, Nyankpala Campus, in the Northern Region of Ghana. The objective of the experiment was to determine carrot growth and yield responses to poultry manure and N.P.K (15:15:15). Six treatments (control (no amendment), 20 t/ha poultry manure (PM), 40 t/ha PM, 70 t/ha PM, 35 t/ha PM + 0.11 t/ha N.P.K, and 0.23 t/ha N.P.K), with three replications each, were laid out in a Randomized Complete Block Design (RCBD). Data were collected on plant height, number of leaves per plant, canopy spread, root diameter, root weight, and root length. Microsoft Excel and the Genstat Statistical Package (9th edition) were used for the data analysis. Treatment means were compared using the Least Significant Difference at 10%. Generally, the results showed no significant differences (P>0.1) among the treatments with respect to number of leaves per plant, root diameter, root weight, and root length. However, significant differences (P<0.1) occurred in plant height and canopy spread: plant height under 40 t/ha PM at the fourth week after planting, and canopy spread under 70 t/ha PM and 20 t/ha PM at eight and ten weeks after planting, respectively. The study recommends that any of the amended treatments can be applied at its recommended rate for carrot production, since there were no significant differences among the treatments. Keywords: poultry manure, N.P.K., soil amendment, growth, yield, carrot
Procedia PDF Downloads 476
36191 Morphological Analysis of Manipuri Language: Wahei-Neinarol
Authors: Y. Bablu Singh, B. S. Purkayashtha, Chungkham Yashawanta Singh
Abstract:
Morphological analysis forms the basic foundation of NLP applications, including syntax parsing, machine translation (MT), information retrieval (IR) and automatic indexing, in all languages. It is a field of linguistics that can provide valuable information for computer-based linguistic tasks such as lemmatization and the study of the internal structure of words. Computational morphology is the application of morphological rules in the field of computational linguistics; it is an emerging area in AI that studies the structure of words, which are formed by combining smaller units of linguistic information, called morphemes: the building blocks of words. Morphological analysis provides information about the semantic and syntactic role of a word in a sentence. The analyzer processes Manipuri word forms and produces the grammatical information associated with the words. The morphological analyzer for Manipuri has been tested on 3500 Manipuri words in Shakti Standard Format (SSF) using Meitei Mayek as the source script, obtaining an accuracy of 80% on a manual check. Keywords: morphological analysis, machine translation, computational morphology, information retrieval, SSF
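One elementary step such an analyzer performs can be illustrated with greedy suffix stripping against a rule table. The rules below are hypothetical English placeholders for illustration only; the actual Manipuri analyzer uses a far richer rule set over Meitei Mayek forms.

```python
def analyze(word, suffix_rules):
    """Toy morphological analysis by longest-suffix stripping.

    suffix_rules maps a suffix string to its grammatical tag. Returns a
    list of (stem, suffix, tag) analyses, or a bare ROOT reading if no
    rule applies. Real analyzers also validate stems against a lexicon
    and handle stem alternations, which this sketch does not."""
    analyses = []
    # Try longer suffixes first so 'ing' wins over a trailing 'g', etc.
    for suffix, tag in sorted(suffix_rules.items(), key=lambda kv: -len(kv[0])):
        if word.endswith(suffix) and len(word) > len(suffix):
            analyses.append((word[: -len(suffix)], suffix, tag))
    return analyses or [(word, "", "ROOT")]
```

In a full pipeline, each analysis would then be emitted as a feature structure in the SSF representation mentioned above.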
Procedia PDF Downloads 328
36190 Critical Behaviour and Field Dependence of Magnetic Entropy Change in K-Doped Manganites Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15)
Authors: H. Ben Khlifa, W. Cheikhrouhou-Koubaa, A. Cheikhrouhou
Abstract:
The orthorhombic Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15) manganites are prepared using solid-state reaction at high temperatures. The critical exponents (β, γ, δ) are investigated through various techniques, such as the modified Arrott plot, the Kouvel-Fisher method, and critical isotherm analysis, based on magnetic measurements recorded around the Curie temperature. The critical exponents derived from the magnetization data using the Kouvel-Fisher method are found to be β = 0.32(4) and γ = 1.29(2) at TC ~ 123 K for x = 0.10, and β = 0.31(1) and γ = 1.25(2) at TC ~ 133 K for x = 0.15. The critical exponent values obtained for both samples are comparable to the values predicted by the 3D Ising model and have also been verified by the scaling equation of state. Such results demonstrate the existence of ferromagnetic short-range order in our materials. The magnetic entropy changes of polycrystalline samples with a second-order phase transition are investigated. A large magnetic entropy change, deduced from isothermal magnetization curves, is observed in our samples, with a peak centered on the respective Curie temperatures (TC). The field dependence of the magnetic entropy change is analyzed and shows a power-law dependence ΔSmax ≈ a(μ₀H)ⁿ at the transition temperature. The values of n obey the Curie-Weiss law above the transition temperature. It is shown that, for the investigated materials, the magnetic entropy change follows a master-curve behavior: the rescaled magnetic entropy change curves for different applied fields collapse onto a single curve for both samples. Keywords: manganites, critical exponents, magnetization, magnetocaloric, master curve
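The exponent n in the power law ΔSmax ≈ a(μ₀H)ⁿ is routinely extracted by a linear fit in log-log space, since log ΔS = log a + n log(μ₀H). A minimal sketch of that fit follows, on synthetic data rather than the paper's measurements:

```python
import numpy as np

def fit_power_law(H, dS):
    """Fit dS = a * H**n by linear regression of log(dS) on log(H).
    H and dS are 1-D arrays of field values and entropy-change maxima.
    Returns the prefactor a and exponent n."""
    slope, intercept = np.polyfit(np.log(H), np.log(dS), 1)
    return np.exp(intercept), slope
```

For a second-order ferromagnet at TC, mean-field theory gives n = 2/3, so a fitted n near that value (as modified by the critical exponents via n = 1 + (β − 1)/(β + γ)) is the usual consistency check.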
Procedia PDF Downloads 168
36189 A Fundamental Study for Real-Time Safety Evaluation System of Landing Pier Using FBG Sensor
Authors: Heungsu Lee, Youngseok Kim, Jonghwa Yi, Chul Park
Abstract:
A landing pier is subject to safety assessment through visual inspection and design data, but it is difficult to check for damage in real time. In this study, real-time damage detection and safety evaluation methods were studied. Structural analysis of an arbitrary landing pier structure showed that the inflection points of deflection and moment occurred at 10%, 50%, and 90% of the pile length. The critical value of the Fiber Bragg Grating (FBG) sensor was set according to the safety factor, and an FBG sensor application method for real-time safety evaluation was derived. Keywords: FBG sensor, harbor structure, maintenance, safety evaluation system
Procedia PDF Downloads 223
36188 A Collaborative, Arts-Informed Action Research Investigation of Child-Led Assessment
Authors: Dragana Gnjatovic
Abstract:
Assessment is a burning topic in education policy and practice due to measurement-driven neoliberal agendas of quality and the standardisation of assessment practice through high-stakes standardised testing systems, which are now influencing early childhood education. This paper presents a collaborative, arts-informed action research project that places children at the centre of their learning, with assessment as an integral part of play-based learning processes. It aims to challenge traditional approaches to assessment, which are often teacher-led and decontextualised from the processes of learning, by exploring approaches in which children's voices are central and their creative arts expressions are used to assess learning and development. The theoretical framework draws on Vygotsky's sociocultural theory and Freire's critical pedagogy, which indicate the importance of a socially constructed reality in which knowledge is the result of collaboration between children and adults and children are perceived as competent agents of their own learning. An interpretive-constructivist and critical-transformative paradigm underpins collaborative action research in a setting for three- to five-year-olds, where creative methods such as storytelling, play, drama and drawing are used to assess children's learning. As data collection and analysis are still in progress, this paper presents the methodology and some data vignettes, with the aim of stimulating discussion about innovation in assessment and the contribution of collaborative enquiry to the field of Early Childhood Education and Care. Keywords: assessment for learning, creative methodologies, collaborative action research, early childhood education and care
Procedia PDF Downloads 140
36187 A Discrete Element Method Centrifuge Model of Monopile under Cyclic Lateral Loads
Authors: Nuo Duan, Yi Pik Cheng
Abstract:
This paper presents data from a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are also subjected to significant loads from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM); they are not comprehensive, and most FEM results are sensitive to input parameters. Micro-scale behaviour can change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent lateral pile displacements induced by the cyclic lateral loads were found to depend on the characteristics of the applied cyclic load, such as the loading magnitudes and directions. Keywords: cyclic loading, DEM, numerical modelling, sands
Procedia PDF Downloads 324
36186 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents
Authors: Subir Gupta, Subhas Ganguly
Abstract:
In this paper, we demonstrate a new area of application of image processing to metallurgical images, opening more opportunities for structure-property-correlation-based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection, and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs has been developed experimentally, encompassing variations in ferrite and pearlite volume fractions, with images taken at different magnifications (500X, 1000X, 1500X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in volume fraction was achieved using four plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs was arbitrarily divided into two parts to develop training and testing sets. Statistical recognition features for the ferrite and pearlite constituents were developed by learning from the training set of micrographs, and the obtained features for microstructure pattern recognition were applied to the test set. The analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and volume fractions of the constituents in the structure, with an accuracy of about ±5%. Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure
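A typical first step in phase segmentation of greyscale micrographs is a global threshold separating the darker phase from the lighter matrix; Otsu's method is the standard choice. The sketch below is illustrative of that step only, not the paper's pipeline, and is implemented directly in NumPy.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's global threshold: pick the grey level that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of class 0 (dark side)
    w1 = 1.0 - w0                     # probability of class 1 (bright side)
    m = np.cumsum(p * centers)        # cumulative mean grey level
    mT = m[-1]                        # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mT * w0 - m) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

def dark_phase_fraction(img):
    """Area fraction of the darker phase after Otsu segmentation,
    a first estimate of, e.g., the pearlite volume fraction."""
    return float((img < otsu_threshold(img)).mean())
```

In a full pipeline, the binary mask would then feed grain boundary detection and per-region feature extraction for constituent recognition.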
Procedia PDF Downloads 204
36185 Error Analysis of English Inflection among Thai University Students
Authors: Suwaree Yordchim, Toby J. Gibbs
Abstract:
The linguistic competence of Thai university students majoring in Business English was examined with respect to knowledge of English inflection and various other linguistic elements. Error analysis was applied to the test results. Error rates in inflection, tense and other linguistic elements were shown to be significantly high for all noun, verb and adjective inflections. The findings suggest that students do not attain linguistic competence in their use of English inflection because of interlanguage interference. Implications for curriculum reform and the treatment of errors in the classroom are discussed. Keywords: interlanguage, error analysis, inflection, second language acquisition, Thai students
Procedia PDF Downloads 471