Search results for: intuitionistic fuzzy entropy measure
3303 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is a key factor affecting the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature calculate formation pressure from different parameters: some use only drilling parameters to estimate pore pressure, while others predict it from log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict formation pressure, and those who have used only one method, or at most two. The objective of this research is to predict pore pressure from both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity, and delta sonic time. Real field data are used to predict the formation pressure with five AI methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM), and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the separate normal- and abnormal-pressure trends that other models require. Moreover, comparing the AI tools with each other shows that SVM has the advantage in pore pressure prediction owing to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)
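As an illustration of the SVM approach that performed best above, a minimal sketch using scikit-learn's SVR on synthetic stand-ins for the seven inputs; the feature ranges, the toy target relation, and the hyperparameters are assumptions, not the study's field data.

```python
# Hedged sketch of SVM pore-pressure regression; all data here is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Columns: WOB, RPM, ROP, mud weight, bulk density, porosity, delta sonic time
X = rng.uniform(low=[5, 60, 10, 8.5, 2.0, 0.05, 60],
                high=[40, 180, 120, 16.0, 2.8, 0.35, 140],
                size=(n, 7))
y = 0.45 * X[:, 3] * 1000 + 50 * X[:, 5] + rng.normal(0, 25, n)  # toy pressure target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
aape = np.mean(np.abs((pred - y_te) / y_te)) * 100   # average absolute percentage error
r = np.corrcoef(pred, y_te)[0, 1]                    # correlation coefficient
print(f"R = {r:.3f}, AAPE = {aape:.2f}%")
```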
Procedia PDF Downloads 149
3302 Correlation between Diabetic Cataract, HBA1C and Gurakhu, a Clinical Study in Chhattisgarh State
Authors: A. Bhattacharya, Sanjay Gupta, S. H. Bodakhe
Abstract:
HbA1c is a form of haemoglobin that is used to measure the average plasma glucose concentration over prolonged periods of time. It is formed in a non-enzymatic glycation pathway by haemoglobin's exposure to plasma glucose. In diabetes mellitus, higher amounts of glycated haemoglobin, indicating poorer control of blood glucose levels, have been associated with cardiovascular disease, nephropathy, and retinopathy. Guraku's basic components are nicotine and jaggery; since jaggery is made from sugarcane, it has a diabetogenic potential that is exacerbated in the presence of nicotine. This work was done with the aim of finding a correlation between diabetic cataract, HbA1c, and Guraku. Subjects were enrolled according to the inclusion and exclusion criteria; a total of 75 subjects were included. The study found that people consuming Guraku had a high level of HbA1c and are thus more prone to the development of diabetic cataract. Male subjects outnumbered female subjects, and most of the subjects belonged to the lower socioeconomic class and were not highly educated. It can be concluded that this type of study could be useful in identifying subjects suffering from diabetic cataract whose condition is worsened by the use of nicotine products like Guraku, and in guiding preventive measures against this type of diabetic complication.
Keywords: diabetic cataract, HbA1c, Guraku, diabetogenic potential
Procedia PDF Downloads 400
3301 The Impact of Voluntary Disclosure Level on the Cost of Equity Capital in Tunisian's Listed Firms
Authors: Nouha Ben Salah, Mohamed Ali Omri
Abstract:
This paper examines the association between disclosure level and the cost of equity capital in Tunisian listed firms. This relation is tested using two models. The first tests the relation directly by regressing firm-specific estimates of the cost of equity capital on market beta, firm size, and a measure of disclosure level. The second tests it indirectly by introducing information asymmetry as a mediator variable, following the approach suggested by Baron and Kenny (1986) to demonstrate the role of a mediator variable in general. Based on a sample of 21 non-financial Tunisian listed firms over the period 2000 to 2004, the results show that greater disclosure is associated with a lower cost of equity capital. However, the results for the indirect relationship indicate a significant positive association between the level of voluntary disclosure and information asymmetry, and a significant negative association between information asymmetry and the cost of equity capital, contrary to our predictions. This result may be due to biases in the measure of information asymmetry.
Keywords: cost of equity capital, voluntary disclosure, information asymmetry, Tunisian listed non-financial firms
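Where the abstract cites Baron and Kenny (1986), the mediation logic can be sketched as three regressions, assuming the statsmodels library; the variables and data below are synthetic stand-ins, not the study's sample.

```python
# Illustrative Baron-and-Kenny mediation steps (disclosure -> information
# asymmetry -> cost of equity); all data here is invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 105  # e.g., 21 firms x 5 years
disclosure = rng.normal(size=n)
asymmetry = 0.5 * disclosure + rng.normal(size=n)       # toy mediator
cost_of_equity = -0.4 * asymmetry + rng.normal(size=n)  # toy outcome

# Step 1: X -> Y (total effect)
m1 = sm.OLS(cost_of_equity, sm.add_constant(disclosure)).fit()
# Step 2: X -> M
m2 = sm.OLS(asymmetry, sm.add_constant(disclosure)).fit()
# Step 3: X + M -> Y (direct effect should shrink if M mediates)
m3 = sm.OLS(cost_of_equity,
            sm.add_constant(np.column_stack([disclosure, asymmetry]))).fit()
print(m1.params, m2.params, m3.params)
```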
Procedia PDF Downloads 517
3300 Controversies and Contradiction in (Ir)reversibility and the Equilibrium of Reactive Systems
Authors: Joao Teotonio Manzi
Abstract:
Reversibility, irreversibility, equilibrium, and steady state, which play a central role in the thermodynamic analysis of processes arising in the context of reactive systems, are discussed in this article. Such concepts have generated substantial doubts, even among the most experienced researchers and engineers, because conclusive or definitive statements cannot be extracted from the literature. Concepts such as the time-reversibility of irreversible processes seem paradoxical, requiring further analysis. Equilibrium and reversibility, which appear to be of the same nature, have also been re-examined in the light of maximum entropy. The goal of this paper is to revisit and explore these concepts on the basis of classical thermodynamics in order to understand them better, given their impact on technological advances, and, as a result, to generate an optimal procedure for design, monitoring, and engineering optimization. Furthermore, an effective graphical procedure for dimensioning a plug flow reactor is provided. Thus, to meet the needs of chemical engineering from a simple conceptual analysis but with significant practical effects, a macroscopic approach is taken so as to integrate the different parts of this paper.
Keywords: reversibility, equilibrium, steady-state, thermodynamics, reactive system
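As a companion to the graphical reactor-dimensioning procedure mentioned above, a minimal numeric sketch of the standard PFR design equation, V = F_A0 ∫ dX/(-r_A), the area under a Levenspiel plot, is shown below; the kinetics and numbers are invented, not taken from the paper.

```python
# Hedged sketch: trapezoid-rule area under 1/(-r_A) vs. conversion X for a
# first-order reaction; all values are illustrative assumptions.
import numpy as np

F_A0 = 2.0                 # mol/s, inlet molar flow of A (assumed)
k, C_A0 = 0.3, 1.5         # rate constant (1/s) and inlet conc. (mol/L), assumed

X = np.linspace(0.0, 0.8, 200)
rate = k * C_A0 * (1.0 - X)            # -r_A in mol/(L*s)
inv = 1.0 / rate
area = np.sum(0.5 * (inv[1:] + inv[:-1]) * np.diff(X))  # Levenspiel-plot area
V = F_A0 * area                         # litres
print(f"required PFR volume: {V:.1f} L")
```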
Procedia PDF Downloads 106
3299 Pupil Size: A Measure of Identification Memory in Target Present Lineups
Authors: Camilla Elphick, Graham Hole, Samuel Hutton, Graham Pike
Abstract:
Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes, such as cognitive load. To see whether identifying a target requires a different cognitive load to rejecting distractors, the effect of viewing a target (compared with viewing distractors) on pupil size was investigated using a sequential video lineup procedure with two lineup sessions. Forty-one participants were recruited through the university. Pupil sizes were recorded when viewing pre-target and post-target distractors and compared to pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target than when viewing distractors. In the first session, pupil size changes differed significantly between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load reduced following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was lower in this session, and there was no significant difference between Hits, Misses, and False Alarms. Furthermore, while the frequency of Hits increased, so did that of False Alarms. These two findings suggest that the benefits of including a second session remain uncertain, as the second session neither provided greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target-present lineup procedure. However, it is still not known whether cognitive load adequately explains this, or whether cognitive engagement might describe the effect more appropriately. If cognitive load and cognitive engagement can be teased apart with further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures.
Keywords: cognitive load, eyewitness identification, face recognition, pupillometry
Procedia PDF Downloads 404
3298 Mitigation of Indoor Human Exposure to Traffic-Related Fine Particulate Matter (PM₂.₅)
Authors: Ruchi Sharma, Rajasekhar Balasubramanian
Abstract:
Motor vehicles emit a number of air pollutants, among which fine particulate matter (PM₂.₅) is of major concern in cities with high population density due to its negative impacts on air quality and human health. Typically, people spend more than 80% of their time indoors. Consequently, human exposure to traffic-related PM₂.₅ in indoor environments has received considerable attention. Most of the public residential buildings in tropical countries are designed for natural ventilation, where indoor air quality tends to be strongly affected by the migration of air pollutants of outdoor origin. However, most of the previously reported traffic-related PM₂.₅ exposure assessment studies relied on ambient PM₂.₅ concentrations and thus, the health impact of traffic-related PM₂.₅ on occupants in naturally ventilated buildings remains largely unknown. Therefore, a systematic field study was conducted to assess indoor human exposure to traffic-related PM₂.₅ with and without mitigation measures in a typical naturally ventilated residential apartment situated near a road carrying a large volume of traffic. Three PM₂.₅ exposure scenarios were simulated in this study, i.e., Case 1: keeping all windows open with a ceiling fan on as per the usual practice, Case 2: keeping all windows fully closed as a mitigation measure, and Case 3: keeping all windows fully closed with the operation of a portable indoor air cleaner as an additional mitigation measure. The indoor to outdoor (I/O) ratios for PM₂.₅ mass concentrations were assessed and the effectiveness of using the indoor air cleaner was quantified. Additionally, potential human health risk based on the bioavailable fraction of toxic trace elements was also estimated for the three cases in order to identify a suitable mitigation measure for reducing PM₂.₅ exposure indoors. Traffic-related PM₂.₅ levels indoors exceeded the air quality guidelines (12 µg/m³) in Case 1, i.e., under natural ventilation conditions due to advective flow of outdoor air into the indoor environment. However, while using the indoor air cleaner, a significant reduction (p < 0.05) in the PM₂.₅ exposure levels was noticed indoors. Specifically, the effectiveness of the air cleaner in terms of reducing indoor PM₂.₅ exposure was estimated to be about 74%. Moreover, potential human health risk assessment also indicated a substantial reduction in potential health risk while using the air cleaner. This is the first study of its kind that evaluated the indoor human exposure to traffic-related PM₂.₅ and identified a suitable exposure mitigation measure that can be implemented in densely populated cities to realize health benefits.
Keywords: fine particulate matter, indoor air cleaner, potential human health risk, vehicular emissions
Procedia PDF Downloads 126
3297 Influence of Parenting Styles on Adolescents' Self-Esteem
Authors: Olubukola Ajayi
Abstract:
This study was designed to assess the influence of parenting styles on adolescents' self-esteem. The study population comprised adolescents selected from two secondary schools (Bishop Philip Academy and T.L. Oyeshina Model School) in Ibadan, Oyo State. A total of 300 students aged 13-19 took part. The Rosenberg Self-Esteem Scale (RSE), developed by Rosenberg (1965), was used to measure adolescents' self-esteem, while Robinson's (2001) Parenting Styles and Dimensions Questionnaire was used to measure how adolescents perceive their parents on four subscales (authoritative, authoritarian, uninvolved, permissive). Three hypotheses were tested using regression, one-way ANOVA, and an independent t-test. Results showed that parenting styles do not significantly influence self-esteem, and parents' work status did not have a significant influence on self-esteem. The results also revealed no significant difference between male and female self-esteem. The findings are discussed in line with the relevant empirical literature, followed by conclusions and recommendations.
Keywords: parenting style, adolescents, self-esteem, authoritative, authoritarian, uninvolved, permissiveness
Procedia PDF Downloads 197
3296 Measuring Firms’ Patent Management: Conceptualization, Validation, and Interpretation
Authors: Mehari Teshome, Lara Agostini, Anna Nosella
Abstract:
The current knowledge-based economy extends intellectual property rights (IPRs) legal research themes into more strategic and organizational perspectives. Among the various types of IPRs, patents are the strongest and best-known form of legal protection that influences commercial success and market value. Indeed, from our pilot survey, we understood that firms are less likely to manage their patents actively as a tool for achieving competitive advantage; rather, they invest resources and effort in patent applications. In this regard, the literature also confirms that insights into how firms manage their patents from a holistic, strategic perspective, and how the portfolio value of patents can be optimized, are scarce. Although patent management is an important business tool and a few scales exist to measure some of its dimensions, to the best of our knowledge no systematic attempt has been made to develop a valid and comprehensive measure of it. Considering this theoretical and practical point of view, the aim of this article is twofold: to develop a framework for patent management encompassing all relevant dimensions with their respective constructs and measurement items, and to validate the measurement using survey data from practitioners. Methodology: We used a six-step methodological approach (i.e., specify the domain of the construct, item generation, scale purification, internal consistency assessment, scale validation, and replication). Accordingly, we carried out a systematic review of 182 articles on patent management from ISI Web of Science. For each article, we mapped relevant constructs, their definitions, and associated features, as well as the items used to measure these constructs, when provided. This theoretical analysis was complemented by interviews with experts in patent management to get more practical feedback on how patent management is carried out in firms. Afterwards, we carried out a questionnaire survey to purify our scales and validate them statistically. Findings: The analysis allowed us to design a framework for patent management, identifying its core dimensions (i.e., generation, portfolio management, exploitation and enforcement, intelligence) and support dimensions (i.e., strategy and organization). Moreover, we identified the relevant activities for each dimension, as well as the most suitable items to measure them. For example, the core dimension generation includes constructs such as state-of-the-art analysis, freedom-to-operate analysis, patent watching, securing freedom-to-operate, patent potential, and patent geographical scope. Originality and Contribution: This study represents a first step towards the development of sound scales to measure patent management with an overarching approach, thus laying the basis for a recognized landmark within the research area of patent management. Practical Implications: The new scale can be used to assess the level of sophistication of a company's patent management and compare it with other firms in the industry to evaluate their ability to manage the different activities involved in patent management. In addition, the framework resulting from this analysis can be used as a guide that supports managers in improving patent management in firms.
Keywords: patent, management, scale, development, intellectual property rights (IPRs)
Procedia PDF Downloads 147
3295 A Ratio-Weighted Decision Tree Algorithm for Imbalance Dataset Classification
Authors: Doyin Afolabi, Phillip Adewole, Oladipupo Sennaike
Abstract:
Most well-known classifiers, including the decision tree algorithm, can make predictions on balanced datasets efficiently. However, the decision tree algorithm tends to be biased towards imbalanced datasets because of the skewness of their distribution. To overcome this problem, this study proposes a weighted decision tree algorithm that aims to remove the bias toward the majority class and prevent the reduction of majority observations in imbalanced dataset classification. The proposed weighted decision tree algorithm was tested on three imbalanced datasets: a cancer dataset, the German credit dataset, and a banknote dataset. The specificity, sensitivity, and accuracy metrics were used to evaluate the performance of the proposed algorithm on the datasets. The evaluation results show that, for some of the weights of our proposed decision tree, the specificity, sensitivity, and accuracy metrics gave better results than the ID3 decision tree and the decision tree induced with minority entropy on all three datasets.
Keywords: data mining, decision tree, classification, imbalanced dataset
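The paper's exact ratio-based weighting is not reproduced here; the sketch below uses scikit-learn's class_weight option as an analogous way to bias a decision tree away from the majority class, reporting the same three metrics on a synthetic imbalanced set.

```python
# Hedged sketch: class-weighted decision tree on an imbalanced dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, "balanced"):  # unweighted baseline vs. weighted tree
    tree = DecisionTreeClassifier(class_weight=weights, random_state=0).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(weights, f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f}")
```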
Procedia PDF Downloads 137
3294 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria
Authors: Tomola Obamuyi
Abstract:
The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. The innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customers' satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, a period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values through the impulse response, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector, while the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done for the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque transactions, NEFT transactions, and POS transactions was positive and significant in the periods of analysis. The paper also confirmed with the entropy and range-of-values methods that, in the long run, both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer, and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace, encourage more usage/patronage of these services, and in turn improve the performance of the banking sector and the economy of the country.
Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression
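The entropy method of MCDM mentioned above can be sketched in a few lines: criteria whose values diverge more across alternatives receive higher weights. The decision matrix below is illustrative, not the study's data.

```python
# Hedged sketch of the entropy weighting step in MCDM: rows are alternatives
# (innovations), columns are criteria (e.g., ROA, ROE); values are invented.
import numpy as np

decision_matrix = np.array([
    [0.021, 0.18],   # cheques (illustrative)
    [0.018, 0.15],   # NEFT (illustrative)
    [0.016, 0.12],   # MMO (illustrative)
])

P = decision_matrix / decision_matrix.sum(axis=0)   # normalise each criterion
k = 1.0 / np.log(P.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)          # Shannon entropy per criterion
weights = (1 - entropy) / (1 - entropy).sum()       # higher divergence -> higher weight
print(weights)
```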
Procedia PDF Downloads 120
3293 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework
Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark
Abstract:
Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult, and screening questionnaires may aid in their early detection. Our team's prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED) in our weight management clinic (WMC). This aim was achieved. We then aimed to improve treatment plan initiation for patients ≥12 years of age who screen positive for BED within our WMC from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children's hospital. A3, an improvement framework, was used, and a multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation for those who screened positive (goal 70%). The process measure was the ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on the current literature and treatment plan initiation based upon ADO-BED responses. PDSA cycle 2 involved increasing documentation of treatment plans and retraining providers on the process. Pre-defined treatment plans were: 1) repeat screen in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in the WMC. The process measure showed sustained performance above goal (mean 98%). The outcome measure showed special cause improvement from a mean of 33% to 100% (n=31). Of the treatment plans provided, 45% were Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance and, ahead of our 12-month timeline, achieved our project aim. Our efforts may serve as a model for other multidisciplinary WMCs. Next steps may include expanding the project scope to other WM programs.
Keywords: obesity, pediatrics, clinic, eating disorder
Procedia PDF Downloads 63
3292 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries, and coastal ecosystems are some of the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil, and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses), considering the concept of ecosystem-based management, to support decision-making under the national/state/municipal coastal management policy. This approach accommodates several kinds of interference and can represent a significant advance in several scientific respects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes. The edge set includes the familiar intralayer edges and interlayer edges, which connect state nodes across layers. The methodology, applied to an existing case, uses flow cytometry and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows considering the influence of different factors and helps assess their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
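A minimal sketch of the multilayer structure defined above, with state nodes encoded as (physical node, layer) pairs in networkx; the entities and edge weights are invented for illustration.

```python
# Hedged sketch: physical nodes, layers, state nodes, intra- and interlayer edges.
import networkx as nx

G = nx.Graph()

# state nodes: the manifestation of a physical node in a specific layer
G.add_node(("plankton", "ecological"))
G.add_node(("fishers", "social"))
G.add_node(("fishers", "economic"))
G.add_node(("tourism_firm", "economic"))

# intralayer edge (an economic interaction within one layer)
G.add_edge(("fishers", "economic"), ("tourism_firm", "economic"), weight=0.4)
# interlayer edges (processes coupling layers, and the same entity in two layers)
G.add_edge(("plankton", "ecological"), ("fishers", "social"), weight=0.7)
G.add_edge(("fishers", "social"), ("fishers", "economic"), weight=1.0)

print(G.number_of_nodes(), "state nodes,", G.number_of_edges(), "edges")
```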
Procedia PDF Downloads 113
3291 Social Media Use and Social Connectedness
Authors: Jessica Torres, James W. Sturges
Abstract:
This correlational study explored the potential relationship between social media use and social connectedness. College students (n = 190) were surveyed using the revised Social Connectedness Scale (SCS-R) and were asked about the number of hours they used social media platforms such as Instagram, TikTok, Twitter, Snapchat, and Facebook. We also developed and administered a 14-item Social Media Use Scale (SMUS) to measure potentially maladaptive social media use, such as use that likely interfered with other activities. The SMUS was found to have good inter-item consistency (Cronbach's alpha = .92) and was significantly correlated with hours of use, r(182) = .622, p < .001. As expected, we found that SCS-R scores were inversely related to total hours of social media use, r(182) = -.188 (p < .005), suggesting that substantial time allocated to online interactions is negatively associated with social connectedness in general. Interestingly, however, higher social connectedness scores were associated specifically with Snapchat use, r(28) = .210, p = .004. This may have to do with the specific nature of the Snapchat experience and perhaps its original use for one-to-one communication. The use of the other social media platforms (TikTok, Instagram, Twitter) was not related to better social connectedness scores. Although we failed to find that scores on our measure of problem use (the SMUS) were correlated with social connectedness, we are hopeful that the SMUS will be of use in identifying patterns of maladaptive social media use that may have an impact on other important outcome measures of adaptive functioning and well-being.
Keywords: adaptive functioning, college students, social connectedness, social media use
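The two statistics reported for the SMUS, Cronbach's alpha and a Pearson correlation with hours of use, can be computed as below; the responses are random placeholders, not the survey data, so the printed values will not match the paper's.

```python
# Hedged sketch: Cronbach's alpha for a k-item scale plus a Pearson r.
import numpy as np

rng = np.random.default_rng(2)
items = rng.integers(1, 6, size=(190, 14)).astype(float)  # 190 respondents, 14 items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)  # Cronbach's alpha

hours = rng.uniform(0, 8, size=190)                        # hypothetical hours of use
r = np.corrcoef(items.sum(axis=1), hours)[0, 1]
print(f"alpha={alpha:.2f}, r={r:.2f}")
```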
Procedia PDF Downloads 95
3290 A Network-Theoretical Perspective on Music Analysis
Authors: Alberto Alcalá-Alvarez, Pablo Padilla-Longoria
Abstract:
The present paper describes a framework for constructing mathematical networks that encode relevant musical information from a music score for structural analysis. These graphs capture statistical information about music elements such as notes, chords, rhythms, and intervals, and the relations among them, and so become helpful in visualizing and understanding important stylistic features of a music fragment. In order to build such networks, musical data is parsed out of a digital symbolic music file. This data undergoes different analytical procedures from graph theory, such as measuring the centrality of nodes, community detection, and entropy calculation. The resulting networks reflect important structural characteristics of the fragment in question: predominant elements, connectivity between them, and the complexity of the information contained in it. Music pieces in different styles are analyzed, and the results are contrasted with the outcome of traditional analysis in order to show the consistency and potential utility of this method for music analysis.
Keywords: computational musicology, mathematical music modelling, music analysis, style classification
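A toy illustration of the graph measures named above (centrality, community detection, entropy) on an invented note-transition network, assuming the networkx library; the transitions do not come from any real score.

```python
# Hedged sketch: centrality, communities, and Shannon entropy of the
# transition (edge-weight) distribution on an invented note graph.
import math
import networkx as nx
from networkx.algorithms import community

transitions = [("C4", "E4", 3), ("E4", "G4", 5), ("G4", "C4", 2),
               ("G4", "A4", 1), ("A4", "F4", 4), ("F4", "C4", 2)]
G = nx.DiGraph()
G.add_weighted_edges_from(transitions)

centrality = nx.degree_centrality(G)
communities = community.greedy_modularity_communities(G.to_undirected())

weights = [w for _, _, w in G.edges(data="weight")]
total = sum(weights)
entropy = -sum((w / total) * math.log2(w / total) for w in weights)
print(centrality, list(communities), f"entropy={entropy:.2f} bits")
```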
Procedia PDF Downloads 102
3289 Moving towards a General Definition of Public Happiness: A Grounded Theory Approach to the Recent Academic Research on Well-Being
Authors: Cristina Sanchez-Sanchez
Abstract:
Although there seems to be growing interest in the study of citizens' happiness as an alternative to GDP for measuring a country's progress, happiness as a public concern is still an ambiguous concept, hard to define. Moreover, different notions are used indiscriminately to talk about the same thing. This investigation aims to determine the conceptions of happiness, well-being, and quality of life that originate from the indexes that different governments and public institutions around the world have created to study them. Through the scoping review method, this study identifies the recent academic research in this field (a total of 267 documents between 2006 and 2016) from some of the most popular social science databases around the world (Web of Science, Scopus, JSTOR, Sage, EBSCO, IBSS, and Google Scholar) and in Spain (ISOC and Dialnet). These 267 documents referenced 53 different indexes and research projects. The grounded theory method has been applied to a sample of 13 indexes in order to identify the main categories they use to determine these three concepts. The results show that these are multi-dimensional concepts and that similar indicators are used indistinctly to measure happiness, well-being, and quality of life.
Keywords: common good, grounded theory, happiness economics, happiness index, quality of life, scoping review, well-being
Procedia PDF Downloads 279
3288 Wear Measuring and Wear Modelling Based on Archard, ASTM, and Neural Network Models
Authors: A. Shebani, C. Pislaru
Abstract:
Wear of materials is an everyday experience and has been observed and studied for a long time. The prediction of wear is a fundamental problem in industry, mainly in connection with the planning of maintenance interventions and their economics. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. Two specimens were used: a steel pin with a tip, positioned perpendicular to a disc made of aluminium. The pin wear and disc wear were measured using the Talysurf profilometer, a digital microscope, and the Alicona instrument; the Talysurf was used to measure the pin/disc wear scar depth, and the Alicona was used to measure the volume loss for the pin and disc. The Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were then used for pin/disc wear modelling, with the simulations implemented in MATLAB. This paper focuses on how the Alicona can be considered a powerful tool for wear measurement and how the neural network is an effective algorithm for wear estimation.
Keywords: wear modelling, Archard model, ASTM model, neural network model, pin-on-disc test, Talysurf, digital microscope, Alicona
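The Archard model referenced above predicts wear volume as V = K·F·s/H (wear coefficient, normal load, sliding distance, hardness); a minimal sketch with illustrative pin-on-disc values, not the paper's measurements, follows.

```python
# Hedged sketch of Archard's wear law; all input values are invented.
def archard_volume(K, load_N, sliding_distance_m, hardness_Pa):
    """Return wear volume in cubic metres: V = K * F * s / H."""
    return K * load_N * sliding_distance_m / hardness_Pa

V = archard_volume(K=1e-4, load_N=20.0, sliding_distance_m=500.0, hardness_Pa=1.0e9)
print(f"predicted volume loss: {V * 1e9:.2f} mm^3")  # 1 m^3 = 1e9 mm^3
```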
Procedia PDF Downloads 456
3287 Implementation and Comparative Analysis of PET and CT Image Fusion Algorithms
Authors: S. Guruprasad, M. Z. Kurian, H. N. Suma
Abstract:
Medical imaging modalities are becoming life-saving components of clinical practice. These modalities are essential to doctors for proper diagnosis, treatment planning, and follow-up. Some modalities provide anatomical information, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-rays, and some provide only functional information, such as Positron Emission Tomography (PET). Therefore, a single-modality image does not give complete information. This paper presents the fusion of the structural information in CT and the functional information present in PET images. The fused image is essential in detecting the stage and location of abnormalities and is particularly needed in oncology for improved diagnosis and treatment. We have implemented and compared image fusion techniques such as pyramid, wavelet, and principal component fusion methods, along with a hybrid method of DWT and PCA. The performance of the algorithms is evaluated quantitatively and qualitatively. The system is implemented and tested using MATLAB software. Based on the MSE, PSNR, and entropy analysis, the PCA and DWT-PCA methods showed the best results over all experiments.
Keywords: image fusion, pyramid, wavelets, principal component analysis
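A sketch of the quantitative evaluation step named above: MSE, PSNR, and entropy computed on synthetic 8-bit images standing in for the reference and fused slices; the images and noise level are invented.

```python
# Hedged sketch of the three fusion-quality metrics on synthetic images.
import numpy as np

rng = np.random.default_rng(3)
reference = rng.integers(0, 256, size=(128, 128)).astype(float)
fused = np.clip(reference + rng.normal(0, 5, size=(128, 128)), 0, 255)

mse = np.mean((reference - fused) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)

hist, _ = np.histogram(fused.astype(np.uint8), bins=256, range=(0, 256))
p = hist / hist.sum()
p = p[p > 0]                               # drop empty bins before the log
entropy = -(p * np.log2(p)).sum()
print(f"MSE={mse:.1f}, PSNR={psnr:.1f} dB, entropy={entropy:.2f} bits")
```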
Procedia PDF Downloads 284
3286 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors reflecting the individual, firm, organizational, industry, or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure them are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, a vast number of measurement constructs assessing growth are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures (revenue, employment, and asset growth); secondly, to explore the prospects of financial indicators, set as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth, and thereby to contribute to understanding the implications for research results and growth recommendations caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact, and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
Procedia PDF Downloads 252
3285 Clustering of Association Rules of ISIS & Al-Qaeda Based on Similarity Measures
Authors: Tamanna Goyal, Divya Bansal, Sanjeev Sofat
Abstract:
In the face of world-threatening terrorist attacks, early detection, distinction, and prediction are effective diagnostic techniques, and many data mining and statistical approaches exist to ensure functionally accurate and precise analysis of terrorism data. The computational extraction of derived patterns is a non-trivial task which comprises specific domain discovery by means of sophisticated algorithm design and analysis. This paper proposes an approach for similarity extraction that obtains the useful attributes from the available datasets of terrorist attacks, applies a feature selection technique based on statistical impurity measures, and then applies clustering techniques on the basis of similarity measures. On the basis of the degree of participation of attributes in the rules, the associative dependencies between the attacks are analyzed. Consequently, to compute the similarity among the discovered rules, we applied a weighted similarity measure. Finally, the rules are grouped using hierarchical clustering. We applied the approach to an open source dataset to determine the usability and efficiency of our technique, and a literature search was also carried out to support the efficiency and accuracy of our results.
Keywords: association rules, clustering, similarity measure, statistical approaches
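One plausible reading of the pipeline above: a Jaccard-style similarity over the attributes participating in each rule, converted to distances and fed to hierarchical clustering. The rules are invented, and the paper's exact weighted measure may differ from this unweighted stand-in.

```python
# Hedged sketch: rule similarity matrix then average-linkage clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rules = [{"weapon", "region", "group"},
         {"weapon", "region", "casualties"},
         {"target", "casualties"},
         {"target", "group"}]          # hypothetical attribute sets per rule

n = len(rules)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        inter = len(rules[i] & rules[j])
        union = len(rules[i] | rules[j])
        dist[i, j] = 1.0 - inter / union       # 1 - Jaccard similarity

condensed = dist[np.triu_indices(n, k=1)]      # condensed distance vector
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```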
Procedia PDF Downloads 320
3284 Increase of Daily Production Rate of Methane through Pasteurization of Cow Dung
Authors: Khalid Elbadawi Elshafea, Mahmoud Hassan Onsa
Abstract:
This paper presents the results of experiments measuring the impact of pasteurizing cow dung on an important parameter of anaerobic digestion (retention time) and its effect on the daily production rate of biogas. Local materials were used, and two experiments were carried out in two bio-digesters (1 and 2) of 18.0 L, with a mixture volume of 16.0 L and a dry-matter mass of 4.0 kg of cow dung in the mixture. The mixture fed into digester 2 was pasteurized, and both digesters were kept at room temperature. Digester 1 produced 268.5 L of methane over 49 days, a daily methane production rate of 1.37 L/kg/day, while digester 2 produced 302.7 L of methane over 26 days, a daily rate of 2.91 L/kg/day. This study concluded that pasteurizing cow dung speeds up hydrolysis in the anaerobic process, because heating to a given temperature for a given time accelerates the chemical reactions (proteins to amino acids, carbohydrates to sugars, and fats to long-chain fatty acids). This reduces the retention time and therefore raises the daily methane production rate to 212% of its unpasteurized value.
Keywords: methane, cow dung, daily production, pasteurization, increase
Procedia PDF Downloads 309
3283 Synthesis and Characterization of Thiourea-Formaldehyde Coated Fe3O4 (TUF@Fe3O4) and Its Application for Adsorption of Methylene Blue
Authors: Saad M. Alshehri, Tansir Ahamad
Abstract:
Thiourea-formaldehyde pre-polymer (TUF) was prepared by the reaction of thiourea and formaldehyde in a basic medium and used as a coating material for magnetite Fe3O4. The synthesized polymer-coated microspheres (TUF@Fe3O4) were characterized using FTIR, TGA, SEM, and TEM. Their BET surface area was up to 1680 m² g⁻¹. The adsorption capacity of the product was evaluated through its adsorption of methylene blue (MB) in water under different pH values and temperatures. We found that the adsorption process was well described by both the Langmuir and Freundlich isotherm models. The kinetics of MB adsorption onto TUF@Fe3O4 were characterized in order to provide a clearer interpretation of the adsorption rate and uptake mechanism; the overall kinetic data were acceptably explained by a pseudo-second-order rate model. The evaluated ΔG° and ΔH° indicate the spontaneous and exothermic nature of the process, and the adsorption takes place with a decrease in entropy (ΔS° is negative). The monolayer capacity for MB was up to 450 mg g⁻¹, one of the highest among similar polymeric products, owing to the large BET surface area.
Keywords: TGA, FTIR, magnetite, thiourea-formaldehyde resin, methylene blue, adsorption
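The Langmuir fit reported above can be sketched with a standard nonlinear least-squares call; the equilibrium data points below are synthetic, not the paper's measurements.

```python
# Hedged sketch: fit the Langmuir isotherm q = q_max * K * C / (1 + K * C).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C_e = np.array([5, 10, 25, 50, 100, 200], dtype=float)      # mg/L (invented)
q_e = np.array([90, 160, 280, 360, 410, 435], dtype=float)  # mg/g (invented)

(q_max, K), _ = curve_fit(langmuir, C_e, q_e, p0=[450.0, 0.01])
print(f"q_max={q_max:.0f} mg/g, K={K:.3f} L/mg")
```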
Procedia PDF Downloads 350
3282 Ischemic Stroke Detection in Computed Tomography Examinations
Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina
Abstract:
Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used due to its wide availability and rapid diagnosis. Detection depends on the size and severity of lesions and the time elapsed between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed on two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with window widths between 80 and 100 Hounsfield units. We used different image processing techniques such as morphological filters, the discrete wavelet transform, and fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region, and these results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters do improve the ischemic areas for subjective evaluation. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tended to be smaller than those obtained by the algorithm. These results show the importance of computer-aided diagnosis software in assisting neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
Keywords: ischemic stroke, image processing, CT scans, fuzzy C-means
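A compact fuzzy C-means sketch on one-dimensional intensity values, the clustering step used above to separate tissue classes; the cluster count, fuzzifier, and intensity data are assumptions, not the study's images.

```python
# Hedged sketch of fuzzy C-means: alternate membership and centre updates.
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzily weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # closer centre -> higher membership
        u /= u.sum(axis=0)
    return centers, u

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 3, 300) for mu in (20, 35, 60)])
centers, memberships = fcm(intensities)
print(np.sort(centers))
```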
Procedia PDF Downloads 366
3281 Thermodynamics of Stable Micro Black Holes Production by Modeling from the LHC
Authors: Aref Yazdani, Ali Tofighi
Abstract:
We study a simulation model for the production of stable micro black holes based on an investigation of the thermodynamics of the LHC experiment. We show how this production can be achieved through a thermodynamic process of stabilization; indeed, this process can be driven by a very small amount of powerful fuel. By applying the second law of black hole thermodynamics at the scale of quantum gravity and a perturbation expansion of the given entropy function, a time-dependent potential function is obtained, which is illustrated with exact numerical values in higher dimensions. Identifying the conditions for the stability of micro black holes is another purpose of this study. Stability is achieved through an injection method that puts an exact amount of energy into the final phase of the production, equivalent to injecting the same energy into the center of collision at the LHC in order to stabilize the produced particles. Injection of energy into the center of collision at the LHC is a new approach that is worth attempting for the first time.
Keywords: micro black holes, LHC experiment, black hole thermodynamics, extra dimensions model
Procedia PDF Downloads 144
3280 An iTunes U App for Development of Metacognition Skills Delivered in the Enrichment Program Offered to Gifted Students at the Secondary Level
Authors: Maha Awad M. Almuttairi
Abstract:
This research aimed to measure the impact of using a mobile learning (iTunes U) app on the development of metacognition skills delivered in the enrichment program offered to gifted students at the secondary level in Jeddah. The author used an experimental design with a targeted group of students to evaluate achievement. The research sample consisted of 38 gifted female students. Student performance in the enrichment program was evaluated with a metacognition skills scale, together with a satisfaction scale assessing the technique used and the form of the final product after completion of the program. The statistical treatment included the paired-samples t-test, Cronbach's alpha, and the eta-squared formula. The results showed a significant difference at α ≤ 0.05 in students' metacognition skills in favor of using iTunes U. In light of the findings, a number of recommendations and suggestions were presented, most importantly to make use of mobile learning applications in providing enrichment programs for gifted students in the Kingdom of Saudi Arabia, and to conduct further research on mobile learning and the teaching of gifted students.
Keywords: enrichment program, gifted students, metacognition skills, mobile learning
Procedia PDF Downloads 118
3279 Relationships between Financial, Cultural, Emotional, and General Wellbeing: A Structural Equation Modeling Study
Authors: Michael Alsop, Hannah Heitz, Prathiba Natesan Batley, Marion Hambrick, Jason Immekus
Abstract:
The impacts of cultural engagement on individuals' health and well-being have been well documented. The purposes of this study were to create an instrument to measure wellbeing constructs, including cultural wellbeing, and to explore the relationships between cultural wellbeing and other wellbeing constructs (e.g., emotional, social, physical, spiritual). A sample of 358 participants attending concerts performed by a civic orchestra in the southeastern United States completed a questionnaire designed to measure eight wellbeing constructs. Split-half exploratory and confirmatory factor analyses resulted in the retention of four wellbeing constructs: general, emotional, financial, and cultural. Structural equation modeling showed statistically significant relationships between cultural wellbeing and the other wellbeing constructs. In addition to the indirect effect of financial wellbeing on emotional and general wellbeing through cultural wellbeing, there were also direct statistically significant relationships (i.e., partial mediation). This highlights the importance of removing financial barriers to cultural engagement and the relationship of cultural wellbeing to emotional and general wellbeing. Additionally, the retained cultural wellbeing items focused primarily on community features, indicating the value of community-based cultural engagement opportunities.
Keywords: cultural wellbeing, cultural engagement, factor analysis, structural equation modeling
Procedia PDF Downloads 82
3278 Towards Establishing a Universal Theory of Project Management
Authors: Divine Kwaku Ahadzie
Abstract:
Project management (PM) as a concept has evolved from the early 20th century into a recognized academic and professional discipline, and indications are that it has come to stay in the 21st century as a worldwide paradigm shift for managing successful construction projects. However, notwithstanding the strong inroads that PM has made in legitimizing its academic and professional status in construction management practice, the underlying philosophies are still based on cases and conventional practices. An important theoretical issue yet to be addressed is the lack of a universal theory that offers philosophical legitimacy for the PM concept as a uniquely specialized management concept. Here, it is hypothesized that the law of entropy, the theory of uncertainty, and the theory of risk management offer plausible explanations for addressing the lacuna of what constitutes PM theory. The theoretical bases of these plausible underlying theories are argued, and attempts are made to establish the functional relationships that exist between these theories and the PM concept. The paper then draws on data related to the success and/or failure of a number of construction projects to validate the theory.
Keywords: concepts, construction, project management, universal theory
Procedia PDF Downloads 328
3277 Framework to Quantify Customer Experience
Authors: Anant Sharma, Ashwin Rajan
Abstract:
Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as KPI in the rest of the paper), this approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is not reflected in metrics like customer satisfaction or Net Promoter Score but shows up across other measurements like recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll up metrics to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources which contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to get linkages between different data points like geographies, business groups, products, and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom-type reports. We have created a framework that allows us to measure customer experience using the above logic.
Keywords: analytics, customer experience, BI, business operations, KPIs, metrics
Procedia PDF Downloads 75
3276 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator
Authors: Kaushikk Iyer, Richard M Hall, David Keeling
Abstract:
Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, the existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better and more advanced control methods that can be developed and tested rigorously but safely before deploying them into the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar link powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs the HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This combination of fuzzy and PID control is novel for the wear testing application in spinal simulators and demonstrated superior performance against PID when tested over a spectrum of frequencies. The results obtained are successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers enables a wide range of possibilities to test advanced control algorithms that can potentially handle even the profiles of patients performing various activities of daily living.
Keywords: fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator
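A toy sketch of the supervisory idea described above: a PID loop whose gains are rescaled by a crude fuzzy-style rule on the tracking error. The plant model, gains, and rule breakpoints are all invented and far simpler than the simulator's dynamics.

```python
# Hedged sketch: fuzzy-style gain scheduling wrapped around a PID loop.
def fuzzy_gain_scale(error):
    """Larger errors -> more aggressive gains (a toy two-rule scheduler)."""
    e = abs(error)
    if e > 1.0:
        return 1.5          # "error is large" -> boost gains
    return 0.5 + e          # "error is small" -> interpolate gently

def simulate(setpoint=1.0, steps=200, dt=0.01):
    kp, ki, kd = 8.0, 2.0, 0.1
    pos, vel, integral, prev_err = 0.0, 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - pos
        s = fuzzy_gain_scale(err)
        integral += err * dt
        u = s * (kp * err + ki * integral + kd * (err - prev_err) / dt)
        vel += (u - 2.0 * vel) * dt   # toy first-order actuator dynamics
        pos += vel * dt
        prev_err = err
    return pos

print(f"final position: {simulate():.3f}")
```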
Procedia PDF Downloads 171
3275 Measuring Delay Using Software Defined Networks: Limitations, Challenges, and Suggestions for Openflow
Authors: Ahmed Alutaibi, Ganti Sudhakar
Abstract:
Providing better Quality of Service (QoS) to end users has been a challenging problem for researchers and service providers. Building applications on best-effort network protocols hindered the adoption of guaranteed service parameters and, ultimately, Quality of Service. The introduction of Software Defined Networking (SDN) opened the door to a new paradigm shift towards more controlled, programmable, configurable behavior. OpenFlow has been and still is the main implementation of the SDN vision. To facilitate better QoS for applications, the network must calculate and measure certain parameters, one of which is the delay between the two ends of the connection. Using the power of SDN and knowledge of application and network behavior, SDN networks can adjust to different conditions and specifications. In this paper, we use the capabilities of SDN to implement multiple algorithms that measure delay end-to-end, not only inside the SDN network. The results of applying the algorithms in an emulated environment show that we can obtain measurements close to the emulated delay. The results also show that, depending on the algorithm, the load on the network and controller can differ. In addition, the transport-layer handshake algorithm performs best among the tested algorithms. From the results and implementation, we show the limitations of OpenFlow and develop suggestions to address them.
Keywords: software defined networking, quality of service, delay measurement, openflow, mininet
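The transport-layer handshake algorithm reported above as the best performer can be illustrated even outside OpenFlow by timing a TCP three-way handshake; the host and port below are placeholders, and the paper's controller-side implementation would differ.

```python
# Hedged sketch: estimate path delay from TCP connection-setup time.
import socket
import time

def handshake_delay(host, port=80, timeout=2.0):
    start = time.perf_counter()
    # connect() returns once the SYN / SYN-ACK exchange completes and the
    # final ACK is sent, so the elapsed time approximates one round trip.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

print(f"handshake RTT ~ {handshake_delay('example.com'):.1f} ms")
```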
Procedia PDF Downloads 165
3274 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches with a simulation study based on pertinent data and evaluate the simulation outcomes. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce better performance than GA and SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures for transporting multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality. The study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while keeping quality above expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
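A sketch of simulated annealing with a repair mechanism, one of the benchmarking approaches named above, on a toy farm-to-route assignment with a quality floor; all data, the repair rule, and the cooling schedule are invented.

```python
# Hedged sketch: SA over route assignments; infeasible picks (quality below a
# threshold) are repaired to the cheapest feasible route before evaluation.
import math
import random

random.seed(0)
n_products, n_routes = 10, 4
cost = [[random.uniform(10, 50) for _ in range(n_routes)] for _ in range(n_products)]
quality = [[random.uniform(0.6, 1.0) for _ in range(n_routes)] for _ in range(n_products)]
Q_MIN = 0.7  # hypothetical acceptable-quality floor

def repair(sol):
    """Swap any quality-violating route for the cheapest feasible alternative."""
    for i, r in enumerate(sol):
        if quality[i][r] < Q_MIN:
            feasible = [j for j in range(n_routes) if quality[i][j] >= Q_MIN]
            sol[i] = min(feasible or range(n_routes), key=lambda j: cost[i][j])
    return sol

def total_cost(sol):
    return sum(cost[i][r] for i, r in enumerate(sol))

sol = repair([random.randrange(n_routes) for _ in range(n_products)])
best, T = total_cost(sol), 50.0
while T > 0.1:
    cand = repair([r if random.random() > 0.2 else random.randrange(n_routes)
                   for r in sol])
    delta = total_cost(cand) - total_cost(sol)
    if delta < 0 or random.random() < math.exp(-delta / T):  # Metropolis rule
        sol, best = cand, min(best, total_cost(cand))
    T *= 0.95  # geometric cooling
print(f"best total cost: {best:.1f}")
```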
Procedia PDF Downloads 223