Search results for: reliability and validity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2572

1762 Urdu Text Extraction Method from Images

Authors: Samabia Tehsin, Sumaira Kausar

Abstract:

Due to the vast increase in multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images and videos. Text embedded in images can serve as a strong retrieval cue, which is why text extraction is an area of research receiving increasing attention. English text extraction has been the focus of many researchers, but much less work has been done on other languages such as Urdu. This paper focuses on Urdu text extraction from video frames and presents a text detection feature set able to deal with most of the problems connected with the text extraction process. To test the validity of the method, it is evaluated on an Urdu news dataset, where it gives promising results.

Keywords: caption text, content-based image retrieval, document analysis, text extraction

Procedia PDF Downloads 491
1761 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. In decisions related to the safety of industrial infrastructure, accidental risk values have become relevant points of discussion. The challenge, however, is the reliability of the models used to obtain the risk data, since such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms face. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. The social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure because of the large impact a failure would have on the economy, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise of scoring risks and setting tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
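
As an illustration of the regression step mentioned above, the following is a minimal single-layer GMDH sketch: pairwise quadratic polynomials are fitted on a training split and ranked on an external validation split. The feature matrix and risk scores are hypothetical placeholders, not the authors' near-miss data or trained model.

```python
# Minimal single-layer GMDH sketch (assumed illustration, not the authors' model).
import itertools
import numpy as np

def fit_quadratic_pair(x1, x2, y):
    """Least-squares fit of y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_pair(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # hypothetical near-miss risk factors
y = 0.5 * X[:, 0]**2 + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)  # toy risk score

train, valid = slice(0, 150), slice(150, 200)  # external validation split used by GMDH
best = None
for i, j in itertools.combinations(range(X.shape[1]), 2):
    coef = fit_quadratic_pair(X[train, i], X[train, j], y[train])
    err = np.mean((predict_pair(coef, X[valid, i], X[valid, j]) - y[valid])**2)
    if best is None or err < best[0]:
        best = (err, (i, j), coef)

print("best input pair:", best[1], "validation MSE:", round(best[0], 4))
```

In a full GMDH network, the surviving polynomial outputs would feed the next layer and the procedure would repeat until the external error stops improving.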

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 281
1760 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment

Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla

Abstract:

Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, which are the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) values in static conditions. Conversely, the wide use of motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, is limited due to technical complexities in preparing the setup and high costs. Thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR-based measures has not yet been studied within virtual environments. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects and to compare whether using immersive or non-immersive VR for visualization affects the performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which environment (i.e. immersive or non-immersive) was used first. Subjects were asked to perform head rotation (right and left), flexion, extension and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head reaching movements, i.e. head movements toward 8 targets placed along a circular perimeter at 45° intervals, visualized one by one in random order. Finally, the head repositioning task required moving the head toward the same 8 targets used for reaching and then returning to the start point. Thus, each participant performed 46 tasks during assessment. Main measures were: ROM of rotation, flexion, extension and lateral flexion, and complete kinematics of the cervical spine (i.e. number of completed targets, time of execution (seconds), spatial length (cm), angle distance (°), jerk). Thirty-five healthy participants (i.e. 14 males and 21 females, mean age 28.4±6.47) were recruited for the cervical spine assessment with immersive and non-immersive VR environments. Comparison analysis demonstrated that head right rotation (p=0.027), extension (p=0.047), flexion (p<0.001), time (p=0.001), spatial length (p=0.004), jerk target (p=0.032), trajectory repositioning (p=0.003), and jerk target repositioning (p=0.007) were significantly better in immersive than non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, trajectory repositioning (p<0.05), and handedness (p<0.05), whereas in non-immersive VR performance was influenced by height, jerk target (p=0.002), head extension, jerk target repositioning (p=0.002), and by age, head flex/ext, trajectory repositioning, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
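
For readers unfamiliar with the kinematic measures listed above, the following is a minimal sketch, under assumed sampling conditions, of how spatial path length, an RMS jerk metric and a ROM-style angle could be computed from sampled head positions; the trajectory, sample rate and yaw definition are hypothetical, not the authors' motion-tracking pipeline.

```python
import numpy as np

fs = 60.0                                   # hypothetical tracking sample rate (Hz)
t = np.arange(0, 2, 1 / fs)
# hypothetical 3D head trajectory during one reaching movement (metres)
pos = np.column_stack([0.10 * np.sin(2 * np.pi * 0.5 * t),
                       0.05 * np.cos(2 * np.pi * 0.5 * t),
                       np.zeros_like(t)])

# spatial length of the trajectory (cm)
path_cm = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)) * 100

# jerk: third time-derivative of position, summarised here as its RMS magnitude
vel = np.gradient(pos, 1 / fs, axis=0)
acc = np.gradient(vel, 1 / fs, axis=0)
jerk = np.gradient(acc, 1 / fs, axis=0)
rms_jerk = np.sqrt(np.mean(np.sum(jerk**2, axis=1)))

# ROM-style angle covered relative to the start, from a hypothetical yaw time series
yaw_deg = np.degrees(np.arctan2(pos[:, 1], pos[:, 0] + 0.3))
rom_deg = yaw_deg.max() - yaw_deg.min()

print(f"path length: {path_cm:.1f} cm, RMS jerk: {rms_jerk:.2f} m/s^3, ROM: {rom_deg:.1f} deg")
```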

Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity

Procedia PDF Downloads 145
1759 Element-Independent Implementation for Method of Lagrange Multipliers

Authors: Gil-Eon Jeong, Sung-Kie Youn, K. C. Park

Abstract:

The treatment of non-matching interfaces is an important computational issue. To handle this problem, the method of Lagrange multipliers, in both its classical and localized versions, is the most popular technique. It imposes the interface compatibility conditions by introducing Lagrange multipliers. However, the numerical system becomes unstable and inefficient because of the Lagrange multipliers. An interface element-independent formulation that does not include the Lagrange multipliers can be obtained by modifying the independent variables mathematically. Through this modification, a more efficient and stable system can be achieved with accuracy equivalent to the conventional method. A numerical example is conducted to verify the validity of the presented method.
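
For reference, the classical Lagrange-multiplier interface coupling can be written in the standard textbook form below; this is a generic statement of the saddle-point structure the paper avoids, not the authors' specific formulation.

```latex
% Classical method of Lagrange multipliers for interface compatibility:
% stationarity of the constrained potential yields a saddle-point system.
\begin{equation}
\Pi(u,\lambda) = \tfrac{1}{2}\,u^{\mathsf T} K u - u^{\mathsf T} f
               + \lambda^{\mathsf T} B u ,
\qquad
\begin{bmatrix} K & B^{\mathsf T} \\ B & 0 \end{bmatrix}
\begin{bmatrix} u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f \\ 0 \end{bmatrix},
\end{equation}
% where B u = 0 enforces compatibility across the non-matching interface. The zero
% diagonal block makes the system indefinite; eliminating the multipliers (or, as in
% the paper, modifying the independent variables) restores a definite system.
```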

Keywords: element-independent formulation, interface coupling, methods of Lagrange multipliers, non-matching interface

Procedia PDF Downloads 393
1758 Applying Sociometer Theory to Different Age Groups and Group Differences regarding State Self-Esteem Sensitivity

Authors: Yun Yu Stephanie Law

Abstract:

Sociometer theory is well tested among young adults in Western populations; however, limited research exists for other age groups, such as adolescents and middle-aged adults, in Asian populations. Thus, one of the main purposes of this study is to verify the validity of sociometer theory in different age groups among Asians. Specifically, we hypothesized that an increase in one’s perceived social rejection is associated with a decrease in his/her state self-esteem across all age groups in an Asian population, and we expected this association to be found among all age groups, including adolescents, young adults and middle-aged adults, in our first study. In this way, we can verify the validity of sociometer theory across different age groups as well as its significance in an Asian population. Furthermore, participants who received rejection regarding a ‘mate role’ also received negative feedback on their current or future capacity to be a good mate. Results suggested that participants’ state self-esteem sensitivity to mating-capacity rejection was higher than to friend-capacity rejection, i.e. there was a greater drop in state self-esteem when receiving mating-capacity feedback than friend-capacity feedback. These results, however, apply only to young adults. Thus, the main purpose of study two was to test state self-esteem sensitivity to social rejection in different domains across the three age groups. We hypothesized that group differences would be found across the three age groups regarding state self-esteem sensitivity. Research question 1: whether the association between perceived social rejection and decreased state self-esteem holds among different age groups in an Asian population. Research question 2: whether there are significant group differences across the three age groups regarding state self-esteem sensitivity. Methods: 300 subjects were divided into three age groups (adolescents, young adults and middle-aged adults), with 100 subjects in each group. Two questionnaires were used to test this fundamental concept. Subjects were first asked to rate themselves on a questionnaire measuring their current state self-esteem in order to obtain baseline measurements for later comparison. To avoid demand characteristics, other unrelated tasks such as word matching were also given after the first test. Results: a positive correlation was found between scores on questionnaire 1 and questionnaire 2 among all age groups. Conclusion: state self-esteem decreased in response to both imagined social rejection (study 1) and experienced social rejection (study 2). Moreover, the decrease in state self-esteem varied when receiving social rejection in different domains. Implications: a better understanding of self-esteem development across age groups might offer insights for education systems and policies regarding teaching approaches and learning methods for different age groups.

Keywords: state self-esteem, social rejection, stage theory, self-feelings

Procedia PDF Downloads 215
1757 A Study on Analysis of Magnetic Field in Induction Generator for Small Francis Turbine Generator

Authors: Young-Kwan Choi, Han-Sang Jeong, Yeon-Ho Ok, Jae-Ho Choi

Abstract:

The purpose of this study is to verify the validity of the design by evaluating the output of the induction generator through finite element analysis before the designed generator is manufactured. The characteristics of the induction generator in its operating domain can be understood through analysis of the magnetic field according to the load (rotational speed). Characteristics of the induction generator such as induced voltage, current, torque, magnetic flux density (magnetic flux saturation), and losses can be predicted from the magnetic field analysis.

Keywords: electromagnetic analysis, induction generator, small hydro power generator, small francis turbine generator

Procedia PDF Downloads 1455
1756 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators

Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean

Abstract:

In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception. Even a three-parameter model like the variogram can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
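
The following is a minimal sketch of a rank-based variogram fit using fixed Wilcoxon scores (Jaeckel dispersion); it does not reproduce the Hogg-type adaptive score selection, and the empirical semivariogram values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def exp_variogram(h, nugget, sill, range_):
    """Three-parameter exponential variogram model."""
    return nugget + sill * (1.0 - np.exp(-h / range_))

def wilcoxon_dispersion(params, h, gamma_hat):
    """Jaeckel rank dispersion with Wilcoxon scores a(i) = sqrt(12)*(i/(n+1) - 0.5)."""
    e = gamma_hat - exp_variogram(h, *params)
    n = len(e)
    scores = np.sqrt(12.0) * (rankdata(e) / (n + 1.0) - 0.5)
    return np.sum(scores * e)

# hypothetical empirical semivariogram (lag distances and estimates, with one outlier)
h = np.linspace(1, 20, 15)
gamma_hat = exp_variogram(h, 0.5, 2.0, 6.0) + np.random.default_rng(1).normal(0, 0.05, 15)
gamma_hat[7] += 3.0   # a single outlier that would distort weighted least squares

fit = minimize(wilcoxon_dispersion, x0=[0.1, 1.0, 5.0], args=(h, gamma_hat),
               method="Nelder-Mead")
print("nugget, sill, range:", np.round(fit.x, 3))
```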

Keywords: asymptotic relative efficiency, non-linear rank-based, rank estimates, variogram

Procedia PDF Downloads 410
1755 Empirical Study on Causes of Project Delays

Authors: Khan Farhan Rafat, Riaz Ahmed

Abstract:

Renowned offshore organizations are drifting towards collaborative effort to win and implement international projects for business gains. However, even without financial constraints, with the availability of skilled professionals, and despite improved project management practices through state-of-the-art tools and techniques, project delays have become the norm these days. This situation calls for exploring the factor(s) affecting the link between project management performance and project success. In the context of the well-known 3Ms of project management (manpower, machinery, and materials), machinery and materials depend upon manpower. Because the body of knowledge establishes the influence of national culture on people, its impact on the link between project management performance and project success needs to be investigated in detail to arrive at the possible cause(s) of project delays. This research was therefore undertaken to fill that gap. The unit of analysis for the proposed research was individuals who had worked on skyscraper construction projects. In relevant studies, project management is best described using construction examples, which is why the project-oriented city of Dubai was chosen to investigate the causes of project delays. A structured questionnaire survey was disseminated online with the courtesy of the Project Management Institute local chapter to carry out the cross-sectional study. The Construction Industry Institute, Austin, in the United States of America, along with 23 high-rise builders in Dubai, were also contacted by email, requesting their contribution to the study and providing them with the online link to the survey questionnaire. The reliability of the instrument was warranted using a Cronbach’s alpha coefficient of 0.70. The appropriateness of sampling adequacy and homogeneity of variance was ensured by keeping the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity in the range ≥ 0.60 and < 0.05, respectively. Factor analysis was used to verify construct validity; during exploratory factor analysis, all items were loaded using a threshold of 0.4. Four hundred and seventeen respondents, including members of top management, project managers, and project staff, contributed to the study. The link between project management performance and project success was significant at the 0.01 level (2-tailed) and the 0.05 level (2-tailed) for Pearson’s correlation. Before initiating the moderator analysis, tests for linearity, multicollinearity, outliers, leverage points, influential cases, homoscedasticity and normality were carried out, as these are prerequisites for moderation analysis. The moderator analysis, using a macro named PROCESS, was performed to test the hypothesis that national culture has an influence on the said link. The empirical findings, when compared with Hofstede's results, showed high power distance to be the cause of construction project delays in Dubai. The research outcome calls for project sponsors and top management to reshape their project management strategy and allow for low power distance between management and project personnel for the timely completion of projects.
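
The reliability check mentioned above can be illustrated with a minimal Cronbach's alpha computation; the item matrix below is a synthetic placeholder, not the survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire items (Likert scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(417, 1))                        # 417 respondents, as in the study
items = latent + 0.8 * rng.normal(size=(417, 6))          # 6 hypothetical Likert items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # should exceed the 0.70 threshold
```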

Keywords: causes of construction project delays, construction industry, construction management, power distance

Procedia PDF Downloads 201
1754 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features

Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova

Abstract:

Emotion recognition is a challenging problem and remains open from the perspectives of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, from voice data only, and from the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.
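
A minimal sketch of the comparison described, using scikit-learn SVMs on voice-only, facial-only and combined feature sets; the feature arrays are synthetic placeholders rather than the recorded video data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
labels = rng.integers(0, 6, n)                              # six hypothetical emotion classes
voice = rng.normal(size=(n, 20)) + labels[:, None] * 0.3    # placeholder voice features
face = rng.normal(size=(n, 40)) + labels[:, None] * 0.2     # placeholder facial features

def score(X, y):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=5).mean()

print("voice only :", round(score(voice, labels), 3))
print("face only  :", round(score(face, labels), 3))
print("combined   :", round(score(np.hstack([voice, face]), labels), 3))
```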

Keywords: emotion recognition, facial recognition, signal processing, machine learning

Procedia PDF Downloads 302
1753 Efficient Storage in Cloud Computing by Using Index Replica

Authors: Bharat Singh Deora, Sushma Satpute

Abstract:

Cloud computing is based on resource sharing, and storage, like other resources, can be shared. Collective storage resources from different locations can be used, with a central index table maintained for the storage details. Combining storage from different places can form a suitable data store that is operated from one location and is very economical. Proper storage of data should improve data reliability, availability and bandwidth utilization. Content can also be moved from one storage location to another as needed.
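
A minimal sketch of such a central index table with replicated location entries; the object names and locations are hypothetical, and this is not a specific provider's API.

```python
# Central index table mapping an object id to the storage locations holding a replica.
from collections import defaultdict

index: dict[str, list[str]] = defaultdict(list)

def put(obj_id: str, locations: list[str]) -> None:
    """Register the locations (replicas) where an object is stored."""
    index[obj_id].extend(locations)

def get_location(obj_id: str) -> str:
    """Return any available replica location for reading."""
    return index[obj_id][0]

def migrate(obj_id: str, src: str, dst: str) -> None:
    """Move content between storage locations and update the index."""
    locs = index[obj_id]
    locs[locs.index(src)] = dst

put("report.pdf", ["dc-east/vol1", "dc-west/vol3"])
migrate("report.pdf", "dc-west/vol3", "dc-south/vol2")
print(index["report.pdf"])
```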

Keywords: cloud computing, cloud storage, IaaS, PaaS, SaaS

Procedia PDF Downloads 319
1752 Older Consumers’ Willingness to Trust Social Media Advertising: An Australian Case

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become the hotbed for advertising activities, due mainly to their increasing consumer/user base and, secondly, owing to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world’s population (4.8 billion) now uses social media (60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike other traditional media outlets, social media advertising is highly interactive and digital channel-specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. The purpose of this exploratory paper is to investigate the extent to which social media users trust social media advertising. Understanding this relationship will fundamentally assist marketers in better understanding social media interactions and their implications for society. Using a web-based quantitative survey instrument, participants were recruited via a reputable online panel survey site. Respondents represented social media users from all states and territories within Australia, and completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents). An adapted ADTRUST scale, using 20 items on a 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within different traditional media, such as broadcast and print media, and more recently the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named reliability, usefulness and affect, and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded (Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7). Further statistical analysis (independent samples t-test) determined the difference in factor scores between the factors when age (Gen Z/Millennials vs. Gen X/Boomers) was utilised as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7.
The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads, when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one’s behavioural intent to act based on the information conveyed in advertising. These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioural responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.
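
The age-group comparison of factor scores can be illustrated with a minimal independent-samples t-test sketch; the score vectors below are synthetic stand-ins generated around the reported group means, not the survey data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# synthetic stand-ins for the "Willingness to Rely On" factor scores (7-point scale)
gen_z_millennials = rng.normal(loc=4.53, scale=1.2, size=156)   # ~60.5% of 258 respondents
gen_x_boomers = rng.normal(loc=3.03, scale=1.2, size=102)       # ~39.5% of 258 respondents

t_stat, p_value = ttest_ind(gen_z_millennials, gen_x_boomers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 indicates a significant age difference
```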

Keywords: social media advertising, trust, older consumers, online

Procedia PDF Downloads 69
1751 A Cognitive Schema of Architectural Designing Activity

Authors: Abdelmalek Arrouf

Abstract:

This article sets up a cognitive schema of the architectural designing activity. It begins by outlining, theoretically, an a priori model of its general cognitive mechanisms. The obtained theoretical framework represents the designing activity as a complex system composed of three interrelated subsystems of cognitive actions: a subsystem of meaning production, one of morphology production and finally a subsystem of navigation between the two formers. A protocol analysis that uses statistical and informational tools is then used to measure the validity of the built schema. The model thus achieved shows that the designer begins by conceiving abstract meanings, which he then translates into shapes. That’s why we call it a semio-morphic model of the designing activity.

Keywords: designing actions, model of the design process, morphosis, protocol analysis, semiosis

Procedia PDF Downloads 154
1750 CDM-Based Controller Design for High-Frequency Induction Heating System with LLC Tank

Authors: M. Helaimi, R. Taleb, D. Benyoucef, B. Belmadani

Abstract:

This paper presents the design of a polynomial controller based on the coefficient diagram method (CDM). This controller is used to control the output power of a high-frequency resonant inverter with an LLC tank. One of the most important problems associated with the proposed inverter is achieving ZVS operation during the induction heating process. To overcome this problem, an asymmetrical voltage cancellation (AVC) control technique is proposed. A phase-locked loop (PLL) is used to track the natural frequency of the system. The small-signal model of the system with the proposed control is obtained using the extended describing function method (EDM). The validity of the proposed control is verified by simulation results.
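
For context, the coefficient diagram method evaluates a closed-loop characteristic polynomial through its stability indices and equivalent time constant; the sketch below computes these standard CDM quantities for a hypothetical polynomial, not the paper's controller.

```python
import numpy as np

def cdm_indices(coeffs_low_to_high):
    """Stability indices gamma_i = a_i^2 / (a_{i+1} * a_{i-1}) and tau = a_1 / a_0."""
    a = np.asarray(coeffs_low_to_high, dtype=float)
    gammas = a[1:-1]**2 / (a[2:] * a[:-2])
    tau = a[1] / a[0]
    return gammas, tau

# hypothetical 4th-order characteristic polynomial a0 + a1*s + a2*s^2 + a3*s^3 + a4*s^4
a = [1.0, 2.5, 2.5, 1.25, 0.3125]
gammas, tau = cdm_indices(a)
print("gamma_i:", np.round(gammas, 3), "tau:", tau)
# Manabe's standard form targets gamma_1 = 2.5 and gamma_i = 2 for i >= 2,
# which this hypothetical polynomial satisfies.
```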

Keywords: induction heating, AVC control, CDM, PLL, resonant inverter

Procedia PDF Downloads 649
1749 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia

Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas

Abstract:

The Colombian electricity sector has been transforming through the insertion of new energy sources for generating electricity, one of them being solar energy, which is being promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely within this last approach that the project is located; we are interested in answering concerns about the reliability of the electrical system when climatic phenomena such as El Niño occur, and in defining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources from measured data must first be solved, since these data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, over the study range (2005-2019), have significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance datasets to obtain time series that will allow energy complementarity analyses to be developed in a subsequent project. The filling of the databases will be done through numerical and statistical methods, which are basic techniques for undergraduate students in technical areas who are starting out as researchers.
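
A minimal sketch of the kind of numerical/statistical gap-filling described: linear interpolation for short gaps and an hour-of-day climatology for longer ones. The series, gap thresholds and rules are hypothetical, not IDEAM's records or the project's final method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=24 * 60, freq="h")     # hypothetical hourly record
ghi = pd.Series(np.clip(np.sin((idx.hour - 6) / 12 * np.pi), 0, None) * 800
                + rng.normal(0, 30, len(idx)), index=idx).clip(lower=0)
ghi.iloc[100:104] = np.nan        # short gap
ghi.iloc[500:560] = np.nan        # long gap

short = ghi.interpolate(limit=6)                       # linear interpolation for gaps <= 6 h
hourly_mean = ghi.groupby(ghi.index.hour).transform("mean")
filled = short.fillna(hourly_mean)                     # hour-of-day climatology for long gaps

print("missing before:", int(ghi.isna().sum()), "after:", int(filled.isna().sum()))
```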

Keywords: time series, global solar irradiance, imputed data, energy complementarity

Procedia PDF Downloads 53
1748 Development of an Instrument Assessing Participants’ Motivation on Assigning Monetary Value to Quality of Life

Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras

Abstract:

Placing a monetary value on a quality-adjusted life-year (QALY) is of utmost importance in economic evaluation. Identifying the population’s preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence of the motives behind the value assigned to a QALY by the general public is limited. Developing an instrument that captures the population’s motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users’ motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants’ current health status, and b) the willingness-to-pay (WTP) approach, within the contingent valuation (CV) method framework, to elicit the monetary value. Going beyond the open-ended approach adopted to assess solely protest bidders’ motives, a variety of follow-up item-specific statements were designed (deductive approach), aiming to evaluate the motives of both protest bidders and participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November) in 2018. Individuals with different educational, occupational and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen for inclusion in the survey tool. Using the CATI (computer-assisted telephone interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on the pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants’ motivation. Validation of the instrument will take place in a future study. For a holistic WTP-per-QALY instrument, participants’ motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight into the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.
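
The test-retest analysis can be illustrated with a minimal ICC(2,1) computation (two-way random effects, absolute agreement); the two-occasion score matrix below is synthetic, not the pilot data.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): rows = subjects, columns = test occasions (two-way random, absolute agreement)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(0)
true_score = rng.normal(4, 1, size=97)                   # 97 respondents, as in the pilot study
ratings = np.column_stack([true_score + rng.normal(0, 0.4, 97) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```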

Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay

Procedia PDF Downloads 123
1747 Research on Coordination Strategies for Coordinating Supply Chain Based on Auction Mechanisms

Authors: Changtong Wang, Lingyun Wei

Abstract:

The combination of auctions and supply chains is of great significance in improving the supply chain management system and enhancing the efficiency of economic and social operations. To address the gap in research on supply chain strategies under the auction mechanism, a model is developed for a 1-N auction in a complete-information environment, and it is concluded that the two-part contract auction model for retailers in this setting can achieve supply chain coordination. The model is validated by applying it to the scenario of a flower auction in the fresh-cut flower industry, with numerical examples further demonstrating the validity of the conclusions.

Keywords: auction mechanism, supply chain coordination strategy, fresh cut flowers industry, supply chain management

Procedia PDF Downloads 112
1746 Evaluated Nuclear Data Based Photon Induced Nuclear Reaction Model of GEANT4

Authors: Jae Won Shin

Abstract:

We develop an evaluated-nuclear-data-based photonuclear reaction model for GEANT4 for a more accurate simulation of photon-induced neutron production. The evaluated photonuclear data libraries from ENDF/B-VII.1 are taken as input. Incident photon energies up to 140 MeV, the threshold energy for pion production, are considered. To check the validity of the data-based model, we calculate photoneutron production cross-sections and yields and compare them with experimental data. The results obtained from the developed model are in good agreement with the experimental data for (γ,xn) reactions.

Keywords: ENDF/B-VII.1, GEANT4, photoneutron, photonuclear reaction

Procedia PDF Downloads 264
1745 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis

Authors: Saleem Z. Ramadan

Abstract:

In this paper, a model is proposed to determine the life distribution parameters of the useful-life region of PV systems, utilizing a combination of non-parametric and linear regression analysis of the failure data of these systems. Results showed that this method is dependable for analyzing failure-time data for such reliable systems when data are scarce.
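
A minimal sketch of the general idea of combining a non-parametric step with linear regression for Weibull parameters: Bernard's median ranks feed a linearized least-squares fit. The failure times are hypothetical, and the paper's exact procedure (including masking) is not reproduced.

```python
import numpy as np

# hypothetical complete failure-time sample from the useful-life region (years)
t = np.sort(np.array([3.1, 4.7, 5.2, 6.0, 6.8, 7.5, 8.9, 10.2, 11.6, 13.0]))
n = len(t)

# non-parametric step: Bernard's median-rank estimate of F(t_i)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# linear-regression step: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
x, y = np.log(t), np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} years")
```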

Keywords: masking, bathtub model, reliability, non-parametric analysis, useful life

Procedia PDF Downloads 546
1744 Multi-Scale Modelling of Thermal Wrinkling of Thin Membranes

Authors: Salim Belouettar, Kodjo Attipou

Abstract:

The thermal wrinkling behavior of thin membranes is investigated. A double-scale Fourier series is used to deduce the macroscopic membrane wrinkling equations. The obtained equations account for both global and local wrinkling modes. Numerical examples are conducted to assess the validity of the developed approach. Compared to the full finite element model, the present model needs only a few degrees of freedom to recover accurately the bifurcation curves and wrinkling paths. Different parameters, such as the membrane’s aspect ratio, the wave number, and pre-stressed membranes, are discussed from a numerical point of view, and the properties of the wrinkles (critical load, wavelength, size and location) are presented.

Keywords: wrinkling, thermal stresses, Fourier series, thin membranes

Procedia PDF Downloads 370
1743 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation

Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov

Abstract:

The problem of ensuring the integrity of VVER-type reactor equipment is now highly relevant in connection with the safety justification of NPP units and the extension of their service life to 60 years and more. First of all, this concerns older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements providing the reliability of such estimation, and also the evaluation of VVER equipment lifetime, is the monitoring of the equipment's radiation loading parameters. In this connection, there is a problem of justifying the normative parameters used for estimating pressure vessel metal embrittlement, namely the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacements per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, the neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, DPA is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on VVER equipment should be kept under control, and to give reasonable estimates of such parameters over the whole volume of the equipment. The second task was to give a conservative estimate of each parameter, including its uncertainty. Results of recent investigations allow the conservatism of calculational predictions to be tested, and, as shown in the paper, combining ex-vessel measured data with calculated data allows the assessment of unpredicted uncertainties that result from the specific features of the individual equipment of each VVER reactor. Some results of these calculational-experimental investigations are presented in this paper.
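
In general form, the two exposure parameters compared above can be written as spectrum integrals, as below; this is a textbook-level statement, not the paper's specific displacement cross-section data.

```latex
% Fast-neutron fluence above 0.5 MeV and displacements per atom for irradiation time t:
\begin{align}
F(E > 0.5\,\mathrm{MeV}) &= \int_{0}^{t}\!\!\int_{0.5\,\mathrm{MeV}}^{\infty}
      \varphi(E,t')\,\mathrm{d}E\,\mathrm{d}t', \\
\mathrm{DPA} &= \int_{0}^{t}\!\!\int_{0}^{\infty}
      \sigma_{\mathrm{dpa}}(E)\,\varphi(E,t')\,\mathrm{d}E\,\mathrm{d}t',
\end{align}
% where \varphi(E,t') is the neutron flux spectrum at the location of interest and
% \sigma_{\mathrm{dpa}}(E) is the displacement cross-section. Because \sigma_{\mathrm{dpa}}
% also weights the part of the spectrum that the 0.5 MeV threshold discards, the two
% parameters can exhibit different spatial gradients, which is why both are monitored.
```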

Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations

Procedia PDF Downloads 144
1742 Institutional Segmentation and Country Clustering: Implications for Multinational Enterprises Over Standardized Management

Authors: Jung-Hoon Han, Jooyoung Kwak

Abstract:

Distances between cultures and institutions are once again gaining academic attention since the classical debate on the validity of globalization. Despite incessant efforts to define international segments with various concepts, no significant attempts have been made considering the institutional dimensions. Resource-based theory and institutional theory provide useful insights for assessing the market environment and understanding when and how MNEs lose or gain advantages. This study consists of two parts: identifying institutional clusters and predicting the effect of MNEs’ origin on the applicability of competitive advantages. MNEs in one country cluster are expected to use similar management systems.

Keywords: institutional theory, resource-based theory, institutional environment, cultural dimensions, cluster analysis, standardized management

Procedia PDF Downloads 474
1741 Impact Analysis Based on Change Requirement Traceability in Object Oriented Software Systems

Authors: Sunil Tumkur Dakshinamurthy, Mamootil Zachariah Kurian

Abstract:

Change requirement traceability in object-oriented software systems is one of the challenging areas of research. The trace links between different artifacts need to be automated or semi-automated throughout the software development life cycle (SDLC). The aim of this paper is to discuss and implement aspects of dynamically linking artifacts such as requirements, high-level design, code and test cases through the Extensible Markup Language (XML) or by dynamically generating object-oriented (OO) metrics. Non-functional requirement (NFR) aspects such as stability, completeness, clarity, validity, feasibility and precision are also discussed. We discuss this as a fifth taxonomy, which is a system vulnerability concern.

Keywords: artifacts, NFRs, OO metrics, SDLC, XML

Procedia PDF Downloads 322
1740 Temperature Profile Modelling in Flexible Pavement Design

Authors: Csaba Tóth, Éva Lakatos, László Pethő, Seoyoung Cho

Abstract:

The temperature effect on asphalt pavement structure is a crucial factor at the design stage. In this paper, the temperature along the asphalt depth is estimated by applying the German guidelines. The aim is to consider temperature profiles for different seasons in numerical modelling. The model is built with elastic and isotropic solid elements, with 19 subdivisions of the asphalt layers to reflect the temperature variation. A comparison with the simple three-layer pavement system (asphalt, base, and subgrade layers) follows, to show the difference in results when temperature variation along the depth is not considered. Finally, the fatigue life calculation was checked to prove the validity of the methodology of considering temperature in the numerical modelling.

Keywords: temperature profile, flexible pavement modeling, finite element method, temperature modeling

Procedia PDF Downloads 250
1739 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
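
A minimal sketch of the two directions described above, feature selection feeding an ensemble and a stacking meta-classifier, using scikit-learn on synthetic stand-ins for brain-signal features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                  # synthetic stand-in for brain-signal features
y = (X[:, :5].sum(axis=1) > 0).astype(int)      # only a few features are informative

base = [("svm", make_pipeline(SelectKBest(f_classif, k=10), SVC(probability=True))),
        ("rf", make_pipeline(SelectKBest(f_classif, k=10), RandomForestClassifier()))]

voting = VotingClassifier(base, voting="soft")                              # feature selection + ensemble
stacking = StackingClassifier(base, final_estimator=LogisticRegression())  # meta-classifier on top

for name, clf in [("voting", voting), ("stacking", stacking)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```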

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 61
1738 Enhancing Robustness in Federated Learning through Decentralized Oracle Consensus and Adaptive Evaluation

Authors: Peiming Li

Abstract:

This paper presents an innovative blockchain-based approach to enhance the reliability and efficiency of federated learning systems. By integrating a decentralized oracle consensus mechanism into the federated learning framework, we address key challenges of data and model integrity. Our approach utilizes a network of redundant oracles, functioning as independent validators within an epoch-based training system in the federated learning model. In federated learning, data is decentralized, residing on various participants' devices. This scenario often leads to concerns about data integrity and model quality. Our solution employs blockchain technology to establish a transparent and tamper-proof environment, ensuring secure data sharing and aggregation. The decentralized oracles, a concept borrowed from blockchain systems, act as unbiased validators. They assess the contributions of each participant using a hidden Markov model (HMM), which is crucial for evaluating the consistency of participant inputs and safeguarding against model poisoning and malicious activities. A distinct feature of our methodology is its epoch-based training. An epoch here refers to a specific training phase in which data is updated and assessed for quality and relevance. The redundant oracles work in concert to validate data updates during these epochs, enhancing the system's resilience to security threats and data corruption. The effectiveness of this system was tested using the MNIST dataset, a standard machine learning benchmark. Results demonstrate that our blockchain-oriented federated learning approach significantly boosts system resilience, addressing the common challenges of federated environments. This paper aims to make these advanced concepts accessible, even to those with a limited background in blockchain or federated learning. We provide a foundational understanding of how blockchain technology can revolutionize data integrity in decentralized systems and explain the role of oracles in maintaining model accuracy and reliability.
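
A minimal sketch of the aggregation idea, in which validators score client updates each round and the averaging down-weights inconsistent contributions, written in plain NumPy; the distance-to-median scoring stands in for the blockchain oracle consensus and HMM evaluation and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # weights of a toy linear model the clients try to learn

def client_update(w, poisoned=False):
    """One local gradient-descent step on synthetic local data."""
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, 50)
    grad = 2 * X.T @ (X @ w - y) / len(y)
    update = w - 0.1 * grad
    return update + 5.0 if poisoned else update   # a poisoned client pushes the model away

w = np.zeros(2)
for round_ in range(20):
    updates = [client_update(w, poisoned=(k == 3)) for k in range(5)]  # client 3 is malicious
    # simplified oracle consensus: score each update by its closeness to the median update
    median = np.median(updates, axis=0)
    scores = np.array([1.0 / (1e-6 + np.linalg.norm(u - median)) for u in updates])
    weights = scores / scores.sum()
    w = sum(wt * u for wt, u in zip(weights, updates))   # weighted federated averaging

print("aggregated model:", np.round(w, 2), "target:", true_w)
```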

Keywords: federated learning system, blockchain, decentralized oracles, hidden Markov model

Procedia PDF Downloads 44
1737 Research on Axial End Flux Leakage and Detent Force of Transverse Flux PM Linear Machine

Authors: W. R. Li, J. K. Xia, R. Q. Peng, Z. Y. Guo, L. Jiang

Abstract:

Based on the 3D magnetic circuit of the transverse flux PM linear machine, the flux distribution law is presented, and an analytical expression for the axial end flux leakage is derived using a numerical method. The Maxwell stress tensor is used to solve for the detent force of the mover. A 3D finite element model of the transverse flux PM machine is built to analyze the flux distribution and detent force. Experimental results from a prototype verified the validity of the theoretical derivation of the axial end flux leakage and detent force; this research provides a valuable reference for other types of linear machine.
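
For reference, the detent force evaluation via the Maxwell stress tensor takes the standard surface-integral form below; this is the general expression, not the machine-specific derivation.

```latex
% Force on the mover from the Maxwell stress tensor, evaluated over a closed
% surface S in air surrounding the mover (n is the outward unit normal):
\begin{equation}
\mathbf{F} \;=\; \oint_{S} \left[ \frac{1}{\mu_0}\,(\mathbf{B}\cdot\mathbf{n})\,\mathbf{B}
      \;-\; \frac{1}{2\mu_0}\,B^{2}\,\mathbf{n} \right] \mathrm{d}S .
\end{equation}
```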

Keywords: axial end flux leakage, detent force, flux distribution, transverse flux PM linear machine

Procedia PDF Downloads 428
1736 Exciting Voltage Control for Efficiency Maximization for 2-D Omni-Directional Wireless Power Transfer Systems

Authors: Masato Sasaki, Masayoshi Yamamoto

Abstract:

The majority of wireless power transfer (WPT) systems transfer power in a directional manner. This paper describes a discrete exciting voltage control technique for WPT via magnetic resonant coupling with two orthogonal transmitter coils (a 2D omni-directional WPT system), which can maximize the power transfer efficiency in response to changes in coupling status. The theory allows the efficiency equations of the system to be determined for all values of the mutual inductances. Calculated results are included to confirm the advantage over a one-directional WPT system and the validity of the theory and equations.
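
For a single magnetically coupled link, the maximum achievable transfer efficiency is commonly expressed as below in terms of the coupling coefficient and coil quality factors; this is a general reference expression, not the paper's two-coil exciting-voltage result.

```latex
% Maximum link efficiency of a magnetic resonant coupling stage with coupling
% coefficient k and coil quality factors Q_1, Q_2:
\begin{equation}
\eta_{\max} \;=\; \frac{k^{2} Q_{1} Q_{2}}
      {\bigl(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\,\bigr)^{2}} .
\end{equation}
% In a 2D omni-directional system, the effective coupling to the receiver depends on its
% orientation relative to the two orthogonal transmitter coils, which is why the exciting
% voltages must be adjusted as the coupling status changes.
```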

Keywords: wireless power transfer, omni-directional, orthogonal, efficiency

Procedia PDF Downloads 301
1735 Smart Contracts: Bridging the Divide Between Code and Law

Authors: Abeeb Abiodun Bakare

Abstract:

The advent of blockchain technology has birthed a revolutionary innovation: smart contracts. These self-executing contracts, encoded within the immutable ledger of a blockchain, hold the potential to transform the landscape of traditional contractual agreements. This research paper embarks on a comprehensive exploration of the legal implications surrounding smart contracts, delving into their enforceability and their profound impact on traditional contract law. The first section of this paper delves into the foundational principles of smart contracts, elucidating their underlying mechanisms and technological intricacies. By harnessing the power of blockchain technology, smart contracts automate the execution of contractual terms, eliminating the need for intermediaries and enhancing efficiency in commercial transactions. However, this technological marvel raises fundamental questions regarding legal enforceability and compliance with traditional legal frameworks. Moving beyond the realm of technology, the paper proceeds to analyze the legal validity of smart contracts within the context of traditional contract law. Drawing upon established legal principles, such as offer, acceptance, and consideration, we examine the extent to which smart contracts satisfy the requirements for forming a legally binding agreement. Furthermore, we explore the challenges posed by jurisdictional issues as smart contracts transcend physical boundaries and operate within a decentralized network. Central to this analysis is the examination of the role of arbitration and dispute resolution mechanisms in the context of smart contracts. While smart contracts offer unparalleled efficiency and transparency in executing contractual terms, disputes inevitably arise, necessitating mechanisms for resolution. We investigate the feasibility of integrating arbitration clauses within smart contracts, exploring the potential for decentralized arbitration platforms to streamline dispute resolution processes. Moreover, this paper explores the implications of smart contracts for traditional legal intermediaries, such as lawyers and judges. As smart contracts automate the execution of contractual terms, the role of legal professionals in contract drafting and interpretation may undergo significant transformation. We assess the implications of this paradigm shift for legal practice and the broader legal profession. In conclusion, this research paper provides a comprehensive analysis of the legal implications surrounding smart contracts, illuminating the intricate interplay between code and law. While smart contracts offer unprecedented efficiency and transparency in commercial transactions, their legal validity remains subject to scrutiny within traditional legal frameworks. By navigating the complex landscape of smart contract law, we aim to provide insights into the transformative potential of this groundbreaking technology.

Keywords: smart-contracts, law, blockchain, legal, technology

Procedia PDF Downloads 26
1734 Transmit Power Optimization for Cooperative Beamforming in Reverse-Link MIMO Ad-Hoc Networks

Authors: Younghyun Jeon, Seungjoo Maeng

Abstract:

In ad-hoc networks, the great interest in MIMO schemes has led to their combination, which is utilized in the applicable network. We frame the problem as a reverse-link MIMO ad-hoc network (RMAN) and propose a methodology to maximize the data rate with respect to power consumption using a node-cooperative beamforming technique. Based on the mathematical optimization formulation, we design an algorithm that constructs the optimal orthogonal weight vector according to channel feedback and controls the transmission power according to the QoS-pricing value level. Simulation results show the validity of the proposed optimization result and algorithm, with the sum-rate of each link converging to a fixed point.
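
A minimal sketch of one ingredient: a unit-norm maximum-ratio weight built from channel feedback, plus a simple pricing-based power rule. This is a simplification with hypothetical parameters, not the paper's optimization algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = 4
h = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)  # fed-back channel vector

# maximum-ratio-transmission weight: unit-norm vector aligned with the channel
w = h.conj() / np.linalg.norm(h)

# simple pricing-based power control (hypothetical): increase power only while the
# marginal rate gain exceeds the price per unit power
price, noise = 0.5, 1.0
gain = abs(h @ w) ** 2
p = max(0.0, 1 / (price * np.log(2)) - noise / gain)   # closed form of the first-order condition
rate = np.log2(1 + gain * p / noise)
print(f"transmit power = {p:.2f}, achievable rate = {rate:.2f} bit/s/Hz")
```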

Keywords: ad-hoc network, MIMO, cooperative beamforming, transmit power

Procedia PDF Downloads 379
1733 Pre- and Post-Analyses of Disruptive Quay Crane Scheduling Problem

Authors: K. -H. Yang

Abstract:

Quay crane operations have been well studied in the past. A certain number of scheduling algorithms exist for quay crane operations, but they do not consider nuisance factors that might disrupt those operations. For example, a faulty grapple may leave a crane unable to load or unload containers, or a sudden strong breeze may stop operations temporarily. Although these disruptive conditions occur randomly, they influence the efficiency of quay crane operations, yet disruption is neither considered in the operational procedures nor evaluated in advance for its impact. This study applies simulation and optimization approaches to develop pre-analysis and post-analysis structures for the quay crane scheduling problem in order to deal with disruptive scenarios in quay crane operation. Numerical experiments demonstrate the validity of the developed approaches.

Keywords: disruptive quay crane scheduling, pre-analysis, post-analysis, disruption

Procedia PDF Downloads 208