Search results for: average information ratio

15591 A Preliminary Analysis of Sustainable Development in the Belgrade Metropolitan Area

Authors: Slavka Zeković, Miodrag Vujošević, Tamara Maričić

Abstract:

The paper provides a comprehensive analysis of sustainable development in the Belgrade Metropolitan Area - BMA (NUTS 2 level), preliminarily evaluating three chosen components: 1) economic growth and developmental changes; 2) competitiveness; and 3) territorial concentration and industrial specialization. First, we identified the main results of developmental changes and economic growth by applying shift-share analysis at the metropolitan level. Second, the empirical evaluation of competitiveness in the BMA is based on an analysis of the absolute and relative values of eight indicators using the Spider method. The paper shows that consideration of the national share, industrial mix, and metropolitan/regional share in the total shift-share of the BMA, as well as the economic/functional specialization of the BMA, indicates a very strong process of deindustrialization. The allocative component of BMA economic growth has a positive value, reflecting above-average sector productivity compared to the national average. Third, the important positive role of the metropolitan/regional component in the decomposition of BMA economic growth is highlighted as one of the key results. Finally, a comparative analysis of industrial territorial concentration in the BMA relative to Serbia is based on the location quotient (LQ), or Balassa index, as a valid measure. The results indicate absolute and relative decreases in industrial territorial concentration, as well as inefficient utilization of territorial capital in the BMA. The results are important for increasing regional competitiveness and improving territorial distribution in this area, as well as for improving sustainable metropolitan and sectoral policies, planning, and governance at this level.
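
As a hedged illustration of the concentration measure named above, the sketch below computes a location quotient; the employment figures are hypothetical placeholders, not data from the study.

```python
# Location quotient (Balassa index): the share of an industry in regional
# employment divided by its share in national employment. LQ > 1 indicates
# regional specialization; a falling LQ over time signals deconcentration.

def location_quotient(region_industry, region_total, nation_industry, nation_total):
    """LQ = (e_ri / e_r) / (E_ni / E_n)."""
    regional_share = region_industry / region_total
    national_share = nation_industry / nation_total
    return regional_share / national_share

# Hypothetical example: industrial employment in a metropolitan area vs. the country.
lq = location_quotient(region_industry=42_000, region_total=610_000,
                       nation_industry=310_000, nation_total=2_900_000)
print(f"LQ = {lq:.2f}")  # < 1 would indicate below-average industrial concentration
```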

Keywords: Belgrade Metropolitan Area (BMA), comprehensive analysis / evaluation, economic growth, competitiveness, sustainable development

Procedia PDF Downloads 440
15590 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, when it was found that food heats almost instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone is a sedimentary rock and a dielectric material used intensively in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between the heat transfer and chemical interfaces. Complementarily, a controller was implemented to optimize the overall heating efficiency and control the stability of the numerical model. This was done by continuously matching the cavity impedance and predicting the required energy for the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the material flow and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone. The numerical model considered this effect and took advantage of this interaction. The model proved highly unstable when solving non-linear temperature distributions: limestone has a dielectric loss response that increases with temperature and has low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating tube rotation for mixing enhancement proved beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, supporting the conclusion that the tube has an optimal fill of material. Electric field peak detachment was observed for the 100% fill ratio case, justifying its lower efficiency compared to 80%. Microwave technology is thus demonstrated to be an important ally for the decarbonization of the cement industry.
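
As a hedged sketch of the kind of kinetics sub-model that is verified separately from the coupled model, the following integrates a first-order Arrhenius rate for limestone calcination; the pre-exponential factor and activation energy are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

# First-order calcination kinetics (CaCO3 -> CaO + CO2):
# dX/dt = k(T) * (1 - X), with Arrhenius rate k(T) = A * exp(-Ea / (R * T)).
R = 8.314          # J/(mol K)
A = 1.0e7          # 1/s, illustrative pre-exponential factor (assumed)
Ea = 190e3         # J/mol, illustrative activation energy (assumed)

def arrhenius(T):
    return A * np.exp(-Ea / (R * T))

# Explicit Euler integration of conversion X at a constant bed temperature.
T, dt, X = 1150.0, 0.1, 0.0    # K, s, initial conversion
for _ in range(36000):          # one hour of processing
    X += dt * arrhenius(T) * (1.0 - X)
print(f"conversion after 1 h at {T:.0f} K: {X:.3f}")
```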

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 167
15589 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

Widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology, the concept of embedding some sort of data or special pattern (a watermark) in the multimedia content; this information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person, or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio, and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the illicit copying and distribution of videos; it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges beyond those of image watermarking, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between moving and motionless regions, and they are particularly vulnerable to attacks such as frame swapping, statistical analysis, rotation, noise, median, and crop attacks. In this paper, an effective, robust, and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT), and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms to embed watermarks on different layers of a hybrid system. For this purpose, the video frames are partitioned into layers (RGB), and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. Thus, the same key is required for watermark embedding and extraction, and the key must be shared between the owner and the verifier via some secure channel. The paper demonstrates the performance of the scheme using qualitative metrics, namely Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and correlation values, and also applies several attacks to prove robustness. Experimental results demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
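
A minimal sketch of the DWT-SVD embedding step described above, applied to a single frame layer; the Haar wavelet and the scaling factor are illustrative assumptions, and the FrFT encryption and key-embedding stages are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def embed_dwt_svd(host, watermark, alpha=0.05):
    """Embed a watermark into the LL sub-band singular values of one frame layer."""
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Uw, Sw, Vtw = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sw              # additive embedding in singular values
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

host = np.random.randint(0, 256, (256, 256))        # stand-in for one RGB layer
mark = np.random.randint(0, 2, (128, 128)) * 255     # binary watermark
marked = embed_dwt_svd(host, mark)
```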

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 218
15588 A Digital Health Approach: Using Electronic Health Records to Evaluate the Cost Benefit of Early Diagnosis of Alpha-1 Antitrypsin Deficiency in the UK

Authors: Sneha Shankar, Orlando Buendia, Will Evans

Abstract:

Alpha-1 antitrypsin deficiency (AATD) is a rare, genetic, and multisystemic condition. Underdiagnosis is common, leading to chronic pulmonary and hepatic complications, increased resource utilization, and additional costs to the healthcare system. Currently, there is limited evidence on the direct medical costs of AATD diagnosis in the UK. This study explores the economic impact of AATD patients during the 3 years before diagnosis and identifies the major cost drivers using primary and secondary care electronic health record (EHR) data. The 3-year pre-diagnosis period was chosen based on the ability of our tool to identify patients earlier. The AATD algorithm was created using published disease criteria and applied to the EHRs of 148 known AATD patients found in a primary care database of 936,148 patients (413,674 Biobank and 501,188 in a single primary care locality). Among the 148 patients, 9 were flagged earlier by the tool, with an average of 3 (range 1-6) years per patient potentially saved. We analysed the primary care journeys of 101 of the 148 AATD patients, and the Hospital Episode Statistics (HES) data of 20 patients, all of whom had at least 3 years of clinical history in their records before diagnosis. The codes related to laboratory tests, clinical visits, referrals, hospitalization days, day cases, and inpatient admissions attributable to AATD were examined over this 3-year pre-diagnosis period. The average cost per patient was calculated, and the direct medical costs were modelled based on a mean prevalence of 100 AATD patients in a population of 500,000. A deterministic sensitivity analysis (DSA) of 20% was performed to determine the major cost drivers. Cost data were obtained from the NHS National Tariff 2020/21, the National Schedule of NHS Costs 2018/19, PSSRU 2018/19, and private care tariffs. The total direct medical cost of one hundred AATD patients over the three years before diagnosis in primary and secondary care in the UK was £3,556,489, with an average direct cost per patient of £35,565. The vast majority of this total direct cost (95%) was associated with inpatient admissions (£3,378,229). The DSA determined that the costs associated with tier-2 laboratory tests and inpatient admissions were the greatest contributors to direct costs in primary and secondary care, respectively. This retrospective study shows the role of EHRs in calculating direct medical costs and the potential benefit of new technologies for the early identification of patients with AATD to reduce the economic burden on primary and secondary care in the UK.
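
A hedged sketch of a one-way deterministic sensitivity analysis of the kind reported above: each cost component is varied by ±20% and the resulting swing in the total is ranked. Only the inpatient figure comes from the abstract; the remaining component values are placeholders.

```python
# One-way deterministic sensitivity analysis (DSA): vary each cost driver
# by +/-20% while holding the others fixed, and rank drivers by the swing
# they induce in the total direct cost.
base_costs = {                        # per-cohort costs, GBP
    "inpatient_admissions": 3_378_229,  # from the abstract
    "laboratory_tests": 90_000,         # placeholder (assumed)
    "clinical_visits": 60_000,          # placeholder (assumed)
    "referrals": 28_260,                # placeholder (assumed)
}
total = sum(base_costs.values())

swings = {}
for item, cost in base_costs.items():
    high = total - cost + cost * 1.2
    low = total - cost + cost * 0.8
    swings[item] = high - low          # swing = 40% of the component

for item, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{item:>22}: swing = £{swing:,.0f}")
```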

Keywords: alpha-1 antitrypsin deficiency, costs, digital health, early diagnosis

Procedia PDF Downloads 160
15587 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the developed power is lower than that of other types of wind turbines, such as the Darrieus. To improve this performance, several studies have been carried out, such as optimizing blade shape, using passive controls, and minimizing sources of power loss, such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a User Defined Function that accounts for the resisting torque. This User Defined Function simulates the action of the wind on the rotor: it receives the moment coefficient as an input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and on the resisting torque, which is modeled as a linear function. Linking the implemented User Defined Function with the CFD solver allows the real operation of a Savonius turbine exposed to wind to be simulated. The turbine takes some time to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
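
A hedged sketch of the rotor dynamics logic such a User Defined Function might implement: the rotational velocity is advanced from the aerodynamic moment minus a resisting torque that is linear in the rotational velocity. The inertia, wind speed, torque coefficients, and toy moment curve are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Rotor dynamics: I * dw/dt = M_aero(w) - M_res(w), with M_res = a + b*w.
I = 0.8                                  # kg m^2, rotor inertia (assumed)
rho, U, Rr, Ar = 1.225, 7.0, 0.5, 0.5    # air density, wind speed, radius, swept area (assumed)
a, b = 0.05, 0.02                        # linear resisting-torque coefficients (assumed)

def aero_moment(w):
    """Toy moment curve: Cm falls linearly with tip speed ratio (illustrative)."""
    tsr = w * Rr / U
    Cm = max(0.35 - 0.25 * tsr, 0.0)
    return 0.5 * rho * Ar * Rr * U**2 * Cm

w, dt = 0.0, 1e-3
for _ in range(200000):                  # march in time to the stationary regime
    w += dt / I * (aero_moment(w) - (a + b * w))
tsr = w * Rr / U
Cp = aero_moment(w) * w / (0.5 * rho * Ar * U**3)
print(f"steady tip speed ratio = {tsr:.2f}, power coefficient = {Cp:.3f}")
```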

Keywords: resistive torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performance

Procedia PDF Downloads 150
15586 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is key to ensuring optimal laser-target interaction. One of the main difficulties in controlling the temporal shape is accurately controlling the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high-power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the high-contrast pulse are controlled independently; they propagate separately in the near field and combine in the far field to form the required pulse shape. The beam area ratio of the two parts is optimized so that the beam fluence and intensity of the foot part are increased, which greatly simplifies control of the pulse. Meanwhile, the near-field distribution of the two parts is carefully designed to ensure that their F-numbers are the same, another important parameter for laser-target interaction. The integrated calculation results show that, for a pulse with a contrast of up to 500, the deviation of the foot part can be reduced from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, making it almost the same as that of the peak part. The research results are expected to bring a breakthrough in the power balance of high-power laser facilities.
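
A hedged back-of-envelope restatement of why shrinking the foot beam's area eases control: confining the foot energy to 1/20 of the near-field area raises its fluence twenty-fold and lowers the effective contrast its own path must handle.

```python
# Effective contrast seen by the foot sub-beam after spatial multiplexing.
contrast = 500        # peak-to-foot intensity ratio of the required pulse
area_ratio = 1 / 20   # fraction of the near-field area assigned to the foot

foot_fluence_gain = 1 / area_ratio            # 20x higher fluence on the foot path
effective_contrast = contrast * area_ratio    # 500 -> 25 within the foot sub-beam
print(f"fluence gain: {foot_fluence_gain:.0f}x, "
      f"effective contrast on foot path: {effective_contrast:.0f}")
```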

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 144
15585 Integrating Dependent Material Planning Cycle into Building Information Management: A Building Information Management-Based Material Management Automation Framework

Authors: Faris Elghaish, Sepehr Abrishami, Mark Gaterell, Richard Wise

Abstract:

Collaboration and integration between all building information management (BIM) processes and tasks are necessary to ensure that all project objectives can be delivered. A literature review was used to explore state-of-the-art BIM technologies for managing construction materials, as well as the challenges faced by the construction process when using traditional methods. This paper aims to articulate a framework that integrates traditional material planning methods, such as ABC analysis (the Pareto principle) for analysing and categorising project materials, together with independent material planning methods, such as Economic Order Quantity (EOQ) and Fixed Order Point (FOP), into BIM 4D and 5D capabilities, in order to articulate a dependent material planning cycle within BIM that relies on the constructability method. Moreover, we build a model connecting the material planning outputs with the BIM 4D and 5D data, to ensure that all project information is accurately presented through integrated and complementary BIM reporting formats. Furthermore, the paper presents a method for integrating the risk management output with the material management process, to ensure that all critical materials are monitored and managed across all project stages. The paper proposes browsers to be embedded in any 4D BIM platform in order to predict the EOQ and the FOP and alert the user during the construction stage. This enables the planner to check the status of materials on site and to receive an alert when a new order needs to be placed. All project information is thereby managed in a single context, avoiding missing information at the early design stage. Subsequently, the planner is able to build a more reliable 4D schedule by allocating the categorised materials with the required EOQ, and to check the optimum locations for inventory and temporary construction facilities.
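
A hedged sketch of the two planning quantities the proposed browsers would surface in a 4D BIM platform; the demand, cost, and lead-time figures are hypothetical.

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def fop(daily_demand, lead_time_days, safety_stock=0.0):
    """Fixed Order Point: reorder when stock falls to lead-time demand plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical material: cement bags for one construction phase.
D, S, H = 12_000, 150.0, 2.5    # units/year, cost per order, holding cost/unit/year
print(f"EOQ = {eoq(D, S, H):.0f} units")
print(f"FOP = {fop(daily_demand=D / 250, lead_time_days=7, safety_stock=120):.0f} units")
```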

Keywords: building information management, BIM, economic order quantity, EOQ, fixed order point, FOP, BIM 4D, BIM 5D

Procedia PDF Downloads 166
15584 Algorithmic Fault Location in Complex Gas Networks

Authors: Soban Najam, S. M. Jahanzeb, Ahmed Sohail, Faraz Idris Khan

Abstract:

With the recent increase in reliance on gas as a primary source of energy across the world, a great deal of research has been conducted on gas distribution networks. As the complexity and size of these networks grow, so does the leakage of gas in the distribution network. One of the most crucial factors in the production and distribution of gas is UFG, or Unaccounted-for Gas. The presence of UFG signifies a difference between the amount of gas distributed and the amount of gas billed. Our approach is to use information acquired from several specified points in the network to calculate the loss occurring in the network using the developed algorithm. The algorithm can also identify leakage at any point in the pipeline, so faults can be detected and rectified with minimal time, effort, and resources.
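
A hedged sketch of a segment-wise mass-balance check of the kind such an algorithm might perform: with meters at specified nodes, a segment whose inflow exceeds its outflow plus billed offtake by more than a tolerance is flagged. The topology and readings are invented for illustration.

```python
# Segment-wise mass balance: flag the pipe segment where measured inflow
# minus (measured outflow + billed offtake) exceeds a tolerance -> leak/UFG.
segments = [
    # (name, inflow_m3h, outflow_m3h, billed_offtake_m3h) -- hypothetical readings
    ("A-B", 1200.0, 950.0, 248.0),
    ("B-C",  950.0, 610.0, 290.0),
    ("C-D",  610.0, 400.0, 208.0),
]
TOLERANCE = 5.0  # m3/h, metering uncertainty allowance (assumed)

for name, q_in, q_out, billed in segments:
    ufg = q_in - (q_out + billed)
    status = "LEAK SUSPECTED" if ufg > TOLERANCE else "ok"
    print(f"segment {name}: UFG = {ufg:6.1f} m3/h  {status}")
```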

Keywords: FLA, fault location analysis, GDN, gas distribution network, GIS, geographic information system, NMS, network management system, OMS, outage management system, SSGC, Sui Southern Gas Company, UFG, unaccounted-for gas

Procedia PDF Downloads 614
15583 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning

Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah

Abstract:

Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart's structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in the use of echocardiograms for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower-dimensional representations and the K-means algorithm to cluster them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize the decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-source data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
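
A minimal sketch of the core pipeline described above, assuming a small convolutional autoencoder on whole frames rather than the paper's discriminative patches and inverse average layer; the training loop is omitted.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Convolutional autoencoder: cluster latent codes of echo frames into
# view-related groups with K-means.
class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = ConvAE()
frames = torch.rand(64, 1, 64, 64)            # stand-in echo frames
recon, z = model(frames)
loss = nn.functional.mse_loss(recon, frames)  # reconstruction objective (train on this)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(
    z.flatten(1).detach().numpy())            # view-related groups
```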

Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning

Procedia PDF Downloads 18
15582 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices

Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen

Abstract:

Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, which are implanted percutaneously. The device, in a crimped state, is fed through the arteries to the aortic root, where the frame is opened through either self-expansion or balloon expansion, revealing the prosthetic valve within. The frequency with which TAVI is used to treat aortic stenosis is rapidly increasing, and in time TAVI is likely to become the favoured treatment over Surgical Valve Replacement (SVR). Mortality after TAVI has been associated with severe Paravalvular Aortic Regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards past the valve. PAR is common in patients and has been reported to some degree in as many as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients, increasing post-procedural mortality from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section, as the aortic root is often considered approximately circular in shape. In reality, however, the aortic root is often non-circular: the ascending aorta, aortic sinotubular junction, aortic annulus, and left ventricular outflow tract have average ellipticity ratios of 1.07, 1.09, 1.29, and 1.49, respectively. An elliptical aortic root does not severely affect SVR, as the leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of circular Balloon-Expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR. Self-Expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots; however, their valve leaflets were not designed for elliptical function, and the incidence of PAR is greater in SE devices than in BE devices (19.8% vs. 12.2%, respectively). If a patient's aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that, in order to increase the population who can undergo TAVI and reduce the risk associated with TAVI, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. The radial stiffness of the TAVI devices in multiple directions, the frame bending stiffness, and the resistance to balloon-induced expansion are all computationally simulated. Finally, a simulation has been developed that demonstrates the expansion of TAVI devices into a non-circular, patient-specific aortic root model, in order to assess the resulting deployment dynamics, PAR, and the stresses induced in the aortic root.

Keywords: TAVI, TAVR, FEA, PAR, FEM

Procedia PDF Downloads 436
15581 Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information

Authors: Wei-Jong Yang, Wei-Hau Du, Pau-Choo Chang, Jar-Ferr Yang, Pi-Hsia Hung

Abstract:

The demand for smart visual thing recognition in various devices has increased rapidly in recent years for daily smart production, living, and learning systems. This paper proposes a visual thing recognition system which combines binary scale-invariant feature transform (SIFT), a bag-of-words model (BoW), and a support vector machine (SVM) using color information. Since traditional SIFT features and SVM classifiers use only grayscale information, color remains an important additional feature for visual thing recognition. With color-based SIFT features and an SVM, unreliable matching pairs can be discarded, increasing the robustness of matching tasks. The experimental results show that the proposed object recognition system with the color-assisted SIFT SVM classifier achieves a higher recognition rate than the traditional grayscale SIFT and SVM classification in various situations.
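
A hedged sketch of the SIFT/BoW/SVM pipeline described above, shown for a single channel; the paper's color extension would run the feature stage per RGB channel, and the images here are random placeholders.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Bag-of-words pipeline: SIFT descriptors -> K-means vocabulary ->
# per-image word histograms -> SVM classifier.
sift = cv2.SIFT_create()

def descriptors(img_gray):
    _, des = sift.detectAndCompute(img_gray, None)
    return des if des is not None else np.empty((0, 128), np.float32)

def bow_histogram(des, vocab):
    words = vocab.predict(des.astype(np.float32)) if len(des) else []
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Placeholder training images and labels; real data would come from the dataset.
train_imgs = [np.random.randint(0, 256, (128, 128), np.uint8) for _ in range(20)]
labels = np.random.randint(0, 2, 20)

all_des = np.vstack([descriptors(im) for im in train_imgs])
vocab = KMeans(n_clusters=50, n_init=10).fit(all_des)
X = np.array([bow_histogram(descriptors(im), vocab) for im in train_imgs])
clf = SVC(kernel="rbf").fit(X, labels)
```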

Keywords: color moments, visual thing recognition system, SIFT, color SIFT

Procedia PDF Downloads 458
15580 Statistical Design of Synthetic VP X-bar Control Chart Using Markov Chain Approach

Authors: Ali Akbar Heydari

Abstract:

Control charts are an important tool of statistical quality control. These charts are used to detect and eliminate unwanted special causes of variation that occur during a period of time. The design and operation of control charts require the determination of three design parameters: the sample size (n), the sampling interval (h), and the width coefficient of the control limits (k). The variable parameters (VP) x-bar control chart is the x-bar chart in which all the design parameters vary between two values. These values are a function of the most recent process information. In fact, in the VP x-bar chart, the position of each sample point on the chart establishes the size of the next sample and the time of its sampling. The synthetic x-bar control chart, which integrates the x-bar chart and the conforming run length (CRL) chart, provides a significant improvement in detection power over the basic x-bar chart for all levels of mean shift. In this paper, we introduce the synthetic VP x-bar control chart for monitoring changes in the process mean. To determine the design parameters, we used a statistical design based on the minimum out-of-control average run length (ARL) criterion. The optimal chart parameters of the proposed chart are obtained using the Markov chain approach. A numerical example is also given to show the performance of the proposed chart and compare it with other control charts. The results show that our proposed synthetic VP x-bar control chart performs better than the synthetic x-bar control chart for all shift parameter values. Also, the synthetic VP x-bar control chart performs better than the VP x-bar control chart for moderate or large shift parameter values.
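
A hedged sketch of the Markov chain ARL computation for one common synthetic-chart formulation (head-start conventions vary across the literature); the design parameters below are illustrative, not the paper's optima.

```python
import numpy as np
from scipy.stats import norm

# ARL of a synthetic x-bar chart via an absorbing Markov chain.
# Transient states: 0 = no pending nonconforming sample; i = last
# nonconforming sample occurred i samples ago (i = 1..L). A second
# nonconforming sample while in state 1..L signals (absorption).
def synthetic_arl(k, L, shift, n):
    p = 1 - (norm.cdf(k - shift*np.sqrt(n)) - norm.cdf(-k - shift*np.sqrt(n)))
    m = L + 1                        # transient states 0..L
    Q = np.zeros((m, m))
    Q[0, 0], Q[0, 1] = 1 - p, p      # from the disarmed state
    for i in range(1, L):
        Q[i, i + 1] = 1 - p          # conforming sample: CRL counter advances
    Q[L, 0] = 1 - p                  # counter expired: disarm
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))  # (I - Q)^-1 * 1
    return arl[0]

print(f"in-control ARL0     = {synthetic_arl(k=2.375, L=5, shift=0.0, n=5):8.1f}")
print(f"out-of-control ARL1 = {synthetic_arl(k=2.375, L=5, shift=1.0, n=5):8.2f}")
```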

Keywords: control chart, Markov chain approach, statistical design, synthetic, variable parameter

Procedia PDF Downloads 152
15579 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. In recent years, however, air pollution has increased further through pollutants such as nitrogen oxides and acids. To solve this problem, carbon and nitrogen oxides need to be reduced through lean burning, modified combustors, and fuel dilution. A numerical investigation was carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbons in air, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and the composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. Having developed a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, and velocity and on the concentrations of radicals and emissions were observed. The reduced mechanisms were found to provide results within an acceptable range. Variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (0.5-0.9 equivalence ratio). The addition of hydrogen to the combustor fuel blends resulted in reduced CO and NOx emissions and an expanded flammability limit, under the same laminar flow conditions and varying equivalence ratios. The production of NO is reduced because combustion occurs in a leaner state, which helps in solving environmental problems.
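
A small helper restating the lean-to-rich sweep above for methane/air; hydrogen or propane blending would change the stoichiometric air requirement.

```python
# Equivalence ratio: phi = (F/A)_actual / (F/A)_stoichiometric.
# For CH4 + 2(O2 + 3.76 N2) -> CO2 + 2 H2O, the stoichiometric
# fuel:air molar ratio is 1 : 9.52.
def equivalence_ratio(fuel_moles, air_moles, stoich_air_per_fuel):
    return (fuel_moles / air_moles) * stoich_air_per_fuel

# Methane example: 1 mol CH4 with 12 mol air -> lean mixture.
phi = equivalence_ratio(1.0, 12.0, stoich_air_per_fuel=9.52)
print(f"phi = {phi:.2f}  ({'lean' if phi < 1 else 'rich'})")
```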

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 113
15578 Neuron Imaging in Lateral Geniculate Nucleus

Authors: Sandy Bao, Yankang Bao

Abstract:

Understanding the information being processed in the brain, especially in the lateral geniculate nucleus (LGN), has proven challenging for modern neuroscience and for researchers focused on how neurons process signals and images. In this paper, we propose a method to process images of different colors within different layers of the LGN, that is, green information in layers 4 & 6 and red & blue information in layers 3 & 5, based on the surface dimension of the layers. We consider the images in the LGN and the visual cortex: the edge-detected information from the visual cortex is fed back to the layers of the LGN and combined with the image in the LGN to form a new image that is clearer and sharper, making it easier to identify objects in the image. A Matrix Laboratory (MATLAB) simulation is performed, and the results show a significant improvement in the clarity of the output image.
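
The study works in MATLAB; the following Python sketch mirrors the edge-feedback combination in spirit only, with an assumed Sobel edge operator and blending weight.

```python
import numpy as np
from scipy import ndimage

# Edge-feedback combination: add a scaled edge map (a stand-in for the
# visual-cortex edge information) back to the layer image to sharpen it.
def combine_with_edges(layer_img, alpha=0.6):
    gx = ndimage.sobel(layer_img.astype(float), axis=0)
    gy = ndimage.sobel(layer_img.astype(float), axis=1)
    edges = np.hypot(gx, gy)                     # gradient magnitude edge map
    enhanced = layer_img + alpha * edges         # feed edges back into the layer
    return np.clip(enhanced, 0, 255).astype(np.uint8)

green_layer = np.random.randint(0, 256, (64, 64), np.uint8)  # e.g., layer 4/6 input
sharper = combine_with_edges(green_layer)
```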

Keywords: lateral geniculate nucleus, matrix laboratory, neuroscience, visual cortex

Procedia PDF Downloads 265
15577 Information Seeking and Evaluation Tasks to Enhance Multiliteracies in Health Education

Authors: Tuula Nygard

Abstract:

This study contributes to the pedagogical discussion on how to promote adolescents' multiliteracies, with an emphasis on information seeking and evaluation skills in contemporary media environments. The study is conducted in the school environment, applying perspectives from educational sciences and information studies to health communication and teaching. The research focus is on the teacher's role as a trusted person who guides students to choose and use credible information sources. Evaluating the credibility of information can often be challenging; children and adolescents in particular may find it difficult to know what to believe and whom to trust, for instance, in health and well-being communication. Thus, advanced multiliteracy skills are needed. In the school environment, trust is based on the teacher's subject content knowledge, but also on the teacher's character and caring. A teacher's benevolence and approachability generate trustworthiness, which lays the foundation for good interaction with students and, further, for the teacher's pedagogical authority. The study explores teachers' perceptions of their pedagogical authority and of the role of a trustee. In addition, the study examines the kinds of multiliteracy practices teachers utilize in their teaching. The data will be collected by interviewing secondary school health education teachers during Spring 2019. The analysis method is nexus analysis, an ethnographic research orientation. Classroom interaction, as the interviewed teachers see it, is scrutinized through a nexus analysis lens in order to expound social action, in which people, places, discourses, and objects are intertwined. The crucial social actions in this study are information seeking and evaluation situations, in which the teacher and the students together assess the credibility of information sources. The study is based on the hypothesis that a trustee's opinions of credible sources and guidance in information seeking and evaluation affect trustors' (that is, students') choices. In the school context, the teacher's own experiences and perceptions of health-related issues cannot be brushed aside. Furthermore, adolescents routinely use digital technology for day-to-day information seeking, but the sources they choose are often not of high quality. In school, teachers are inclined to recommend familiar sources, such as the health education textbook and the web pages of well-known health authorities. Students, in turn, rely on the teacher's guidance on credible information sources without using their own judgment. In terms of students' multiliteracy competences, information seeking and evaluation tasks in health education are excellent opportunities to practice and enhance these skills. Distinguishing right information from wrong is particularly important in health communication, because experts by experience are easy to find and their opinions are convincing. This can be addressed by employing the ideas of multiliteracy in the school subject of health education and in teacher education and training.

Keywords: multiliteracies, nexus analysis, pedagogical authority, trust

Procedia PDF Downloads 103
15576 Intuitive Decision Making When Facing Risks

Authors: Katharina Fellnhofer

Abstract:

The more information and knowledge technology provides, the more important profoundly human skills become, like intuition, the skill of using nonconscious information. As our world becomes more complex, shaken by crises, and characterized by uncertainty, time pressure, ambiguity, and rapidly changing conditions, intuition is increasingly recognized as a key human asset. However, due to methodological limitations of sample size or time frame, or a lack of real-world or cross-cultural scope, precisely how to measure intuition in the face of risk at a nonconscious level remains unclear. In light of the measurement challenge related to intuition's nonconscious nature, a technique is introduced that measures intuition via hidden images serving as nonconscious additional information to trigger it. This technique has been tested in a within-subject, fully online design with 62,721 real-world investment decisions made by 657 subjects in Europe and the United States. Bayesian models highlight the technique's potential to measure the skill of using nonconscious information for conscious decision making. Over the long term, solving the mysteries of intuition and mastering its use could be of immense value in personal and organizational decision-making contexts.

Keywords: cognition, intuition, investment decisions, methodology

Procedia PDF Downloads 80
15575 Audio Information Retrieval in Mobile Environment with Fast Audio Classifier

Authors: Bruno T. Gomes, José A. Menezes, Giordano Cabral

Abstract:

With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, either by the hardware, due to low computing power, or by the software, which lacks the robustness of the desktop environment. For example, automatic audio classification (AC) tasks, a subarea of musical information retrieval (MIR), require fast processing and a good success rate, yet the mobile platform has limited computing power and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance between speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered.
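
A hedged sketch of a lightweight AC pipeline of the kind that suits mobile constraints; the paper does not name its exact features or classifier, so MFCC statistics and a nearest-neighbour classifier are assumed here.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

# Compact per-clip features: MFCC means and standard deviations (26 dims),
# cheap to compute and small enough for mobile-grade hardware.
def clip_features(y, sr=22050):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for labeled training clips (1 s each at 22.05 kHz).
rng = np.random.default_rng(0)
clips = [rng.standard_normal(22050).astype(np.float32) for _ in range(6)]
labels = ["music", "speech", "music", "speech", "music", "speech"]

X = np.vstack([clip_features(y) for y in clips])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict(X[:1]))
```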

Keywords: audio classification, audio extraction, mobile environment, musical information retrieval

Procedia PDF Downloads 539
15574 Studying the Influence of the Intellectual Assets on Strategy Implementation: Case Study, Modiran Ideh Pardaz Company

Authors: Farzam Chakherlouy, Amirmehdi Dokhanchi

Abstract:

Nowadays, organizations have to identify, evaluate, and manage intangible assets, which enable them to meet the requirements for achieving their goals and strategies, and they have to try to promote and improve these kinds of assets continuously. Implementing the developed strategies seems essential in today's competitive world, where organizations and companies spend great amounts on developing their own strategies. In fact, after determining the strategies to be implemented, the management process is not complete, and the strategies will have no effect on the success and existence of the organization until they are implemented. The objective of this article is to define intellectual capital and its components and to study the impact of intellectual capital on strategy implementation based upon the Bozbura model, with its three dimensions of human capital, relational capital, and structural capital. According to the test results, a correlation between intellectual capital and the three components of strategy implementation (leadership, human resource management, and culture) could not be confirmed. According to the results of the Friedman test in relation to intellectual capital, the company's greatest inadequacy is in the field of human capital (with an average of 3.59) and the smallest is in the field of relational (customer) capital, with an average of 2.83. Moreover, according to the Friedman test in relation to strategy implementation, the greatest inadequacies relate to the culture of the organization and corporate control, with averages of 2.60 and 3.45, respectively. In addition, the company demonstrates good performance in the areas of human resources management and financial resources management strategies.

Keywords: Bozbura model, intellectual capital, strategic management, implementation of strategy, Modiran Ideh Pardaz company

Procedia PDF Downloads 417
15573 Spatiotemporal Propagation and Pattern of Epileptic Spike Predict Seizure Onset Zone

Authors: Mostafa Mohammadpour, Christoph Kapeller, Christy Li, Josef Scharinger, Christoph Guger

Abstract:

Interictal spikes provide valuable information on electrocorticography (ECoG), which aids in surgical planning for patients who suffer from refractory epilepsy. However, the shape and temporal dynamics of these spikes remain unclear. The purpose of this work was to analyze the shape of interictal spikes and measure their distance to the seizure onset zone (SOZ) for use in epilepsy surgery. Data from thirteen patients on the iEEG portal were studied retrospectively. For the analysis, half an hour of ECoG data was used from each patient, with the data truncated before the onset of a seizure. Spikes were first detected and grouped in sequence, then clustered into interictal epileptiform discharge (IED) and non-IED groups using two-step clustering. The distances of the spikes in the IED and non-IED groups to the SOZ were quantified and compared using the Wilcoxon rank-sum test. Spikes in the IED group tended to be in or close to the SOZ, while spikes in the non-IED group were distant from the SOZ or in non-SOZ areas. At the group level, the distributions of the sharp wave, positive baseline shift, slow wave, and slow wave to sharp wave ratio differed significantly between the IED and non-IED groups. The IED cluster was, on average, 10.00 mm from the SOZ, significantly closer than the 17.65 mm of the non-IED cluster. These findings provide insights into the shape and spatiotemporal dynamics of spikes that could influence the network mechanisms underlying refractory epilepsy.
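
A hedged sketch of the distance comparison named above, using SciPy's Wilcoxon rank-sum test on synthetic placeholder distances rather than patient data.

```python
import numpy as np
from scipy.stats import ranksums

# Compare distances to the seizure onset zone (SOZ) for IED vs. non-IED
# spike clusters with the Wilcoxon rank-sum test.
rng = np.random.default_rng(0)
ied_dist = rng.normal(10.0, 3.0, 80)       # mm, IED cluster (illustrative)
non_ied_dist = rng.normal(17.65, 4.0, 80)  # mm, non-IED cluster (illustrative)

stat, p = ranksums(ied_dist, non_ied_dist)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.2e}")
```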

Keywords: spike propagation, spike pattern, clustering, SOZ

Procedia PDF Downloads 54
15572 Cognitive Performance Post Stroke Is Affected by the Timing of Evaluation

Authors: Ayelet Hersch, Corrine Serfaty, Sigal Portnoy

Abstract:

Stroke survivors commonly report persistent fatigue and sleep disruptions during rehabilitation and post-recovery. While limited research has explored the impact of stroke on a patient's chronotype, there is a gap in understanding differences in cognitive performance based on treatment timing. Study objectives: (a) to characterize the sleep chronotype in sub-acute post-stroke individuals; (b) to explore differences in cognitive task performance during preferred and non-preferred hours; and (c) to examine the relationships between sleep quality and cognitive performance. For this intra-subject study, twenty participants (mean age 60.2±8.6) after a first stroke (6-12 weeks post-stroke) underwent assessments at preferred and non-preferred chronotypic times. The assessment included demographic surveys, the Munich Chronotype Questionnaire, the Montreal Cognitive Assessment (MoCA), the Rivermead Behavioral Memory Test (RBMT), a fatigue questionnaire, and 4-5 days of actigraphy (wrist-worn wGT3X-BT, ActiGraph) to record sleep characteristics. Four sleep quality indices were extracted from the actigraphy recordings: the average total sleep time per day (minutes); the average number of awakenings during the sleep period per day; sleep efficiency (total hours of sleep per day divided by hours spent in bed per day, averaged across the days and presented as a percentage); and the Wake after Sleep Onset (WASO) index, indicating the average number of minutes elapsed from the onset of sleep to the first awakening. Stroke survivors exhibited an earlier sleep chronotype post-injury compared to pre-injury. Enhanced attention, as indicated by higher RBMT scores, occurred during preferred hours. Specifically, 30% of the study participants demonstrated an elevation in their final scores during their preferred hours, transitioning from the category of "mild memory impairment" to "normal memory." However, no significant differences emerged in executive functions, attention tasks, or MoCA scores between preferred and non-preferred hours. The WASO index correlated with MoCA/RBMT scores during preferred hours (r=0.53/0.51, p=0.021/0.027, respectively). The number of awakenings correlated with MoCA letter task performance during non-preferred hours (r=0.45, p=0.044). Enhanced attention during preferred hours suggests a potential relationship between chronotype and cognitive performance, highlighting the importance of personalized rehabilitation strategies in stroke care. Further exploration of these relationships could contribute to optimizing the timing of cognitive interventions for stroke survivors.

Keywords: sleep chronotype, chronobiology, circadian rhythm, rehabilitation timing

Procedia PDF Downloads 56
15571 The Use of Online Courses as a Tool for Teaching in Education for Youth and Adults

Authors: Elineuda Do Socorro Santos Picanço Sousa, Ana Kerlly Souza da Costa

Abstract:

This paper presents an analysis of the information society as a plural, inclusive, and participatory society, in which it is necessary to give all citizens, especially young people, the right skills so that they can understand and use information through contemporary technologies, as well as carry out critical analysis, using and producing information and all sorts of messages and/or informational language codes. This conviction inspired this article, whose aim is to present current trends in the use of technology in distance education applied as an alternative and/or supplement to classroom teaching for youth and adults, along with related concepts and actions, seeking to contribute to its development in the state of Amapá and, specifically, at the Professional Education Center of Amapá, Professor Josinete Oliveira Barroso (CEPAJOB).

Keywords: youth and adult education, EAD, professional education, online courses, CEPAJOB

Procedia PDF Downloads 636
15570 A Research Study of the Inclusiveness of VR Headsets for Higher Education

Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe

Abstract:

This paper presents the results of a research study in which random adult participants used one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completed a post-user-experience reflection questionnaire. The research sought to understand how inclusive commercially available VR HMDs are and to identify any associated barriers that could impact the widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure all education facilities are inclusive and that reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, Interpupillary Distance (IPD), inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between the eye pupils, known as the Interpupillary Distance. VR HMDs require each user's IPD measurement to focus the HMD's virtual camera output at the right position in front of the user's eyes. In addition, the questionnaire captured users' qualitative reflections on and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was carried out by having a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent, use a VR HMD for a set period of time and asking them to complete the post-user-experience questionnaire. The study identified little correlation between experiencing cybersickness and experiencing car sickness. Users with a smaller-than-average IPD (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger-than-average IPD reported an impeded experience. This indicates reduced inclusiveness of the tested VR HMDs for users with a larger-than-average IPD, which is typically associated with males of certain ethnicities. As action research in education, these initial findings will be used to refine the research method and conduct further investigations, with the aim of verifying and validating the accessibility of current commercial VR HMDs. The conference presentation will report the results of the initial study and of subsequent follow-up studies with a larger variety of adult volunteers.

Keywords: virtual reality, education technology, inclusive technology, higher education

Procedia PDF Downloads 63
15569 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and wireless Internet technology, various kinds of data are being generated in many fields. These data are advantageous in that they provide real-time information to users. However, when the data are accumulated and analyzed, much more varied information can be extracted. In addition, the development and dissemination of boards such as the Arduino and the Raspberry Pi have made it easy to test various sensors, and sensor data can be collected directly using database tools such as MySQL. The collected data can be used for various kinds of research and are useful for data mining. However, there are many difficulties in using such boards to collect data, especially for users who are not computer programmers or who are using them for the first time. Even when data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we construct a library for sensor data collection and analysis to overcome these problems.
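
A hedged sketch of the analysis half of such a library: accumulated sensor rows (e.g., pulled from a MySQL table) are clustered with DBSCAN, one of the algorithms listed in the keywords; the readings are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Cluster accumulated (temperature, humidity) sensor rows; DBSCAN also
# exposes noise points (-1), useful for spotting anomalous readings.
readings = np.array([
    [21.5, 40.1], [21.7, 40.8], [22.0, 41.0],   # normal operating cluster
    [30.2, 20.3], [30.5, 19.8],                 # second regime
    [55.0, 5.0],                                # likely outlier/noise point
])
X = StandardScaler().fit_transform(readings)
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(labels)   # -1 marks noise; other integers are cluster ids
```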

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 369
15568 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems

Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos

Abstract:

Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and the number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five widely used OSs, Microsoft Windows (Windows 7, Windows 8, and Windows 10), Apple's Mac, and Linux, are assessed for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an exploitability score assigned in the CVSS scoring of the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared. The risk indexes used are developed based on a Markov model to evaluate the risk of each vulnerability. The statistical methodology and the underlying mathematical approach are described. Initially, parametric procedures were conducted, but violations of some statistical assumptions were observed, and the need for non-parametric approaches was recognized. A total of 6,838 recorded vulnerabilities were considered in the analysis. Based on the risk associated with all the vulnerabilities considered, a statistically significant difference was found among the average risk levels of some operating systems, indicating that, according to our method and given its assumptions and limitations, some operating systems have been more risk-vulnerable than others. Relevant test results revealing a statistically significant difference in the risk levels of the different OSs are presented.
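
A hedged sketch of a non-parametric comparison of per-vulnerability risk across OSs; the paper does not name its exact test, so the Kruskal-Wallis H-test is assumed, and the risk samples are synthetic placeholders.

```python
import numpy as np
from scipy.stats import kruskal

# Non-parametric comparison of per-vulnerability risk indexes across OSs.
rng = np.random.default_rng(1)
risk = {
    "Windows 7": rng.gamma(2.0, 1.5, 300),
    "Windows 10": rng.gamma(2.2, 1.4, 300),
    "macOS": rng.gamma(1.8, 1.6, 300),
    "Linux": rng.gamma(1.9, 1.5, 300),
}
H, p = kruskal(*risk.values())
print(f"H = {H:.2f}, p = {p:.3f} -> "
      f"{'significant' if p < 0.05 else 'no significant'} difference in risk levels")
```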

Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system

Procedia PDF Downloads 178
15567 Image Quality and Dose Optimisations in Digital and Computed Radiography X-ray Radiography Using Lumbar Spine Phantom

Authors: Elhussaien Elshiekh

Abstract:

A study was performed to manage and compare radiation doses and image quality for lumbar spine PA and lumbar spine LAT x-ray radiography using Computed Radiography (CR) and Digital Radiography (DR). The standard exposure factors (kV, mAs, and FFD) used for imaging the lumbar spine anthropomorphic phantom were obtained from the average exposure factors used with CR in five radiology centres. The lumbar spine phantom was imaged using the CR and DR systems. Entrance surface air kerma (ESAK) was calculated from the X-ray tube output and the patient exposure factors. Images were evaluated using a visual grading system based on the European Guidelines on Quality Criteria for diagnostic radiographic images. The ESAK corresponding to each image was measured at the surface of the phantom. Six experienced specialists evaluated hard copies of all the images; the image score (IS) was calculated for each image by averaging the scores of the six evaluators. The IS value was also used to determine whether an image was diagnostically acceptable. The optimum recommended exposure factors found here for lumbar spine PA and lumbar spine LAT were, respectively, 80 kVp and 25 mAs at 100 cm FFD and 75 kVp and 15 mAs at 100 cm FFD for the CR system, and 80 kVp and 15 mAs at 100 cm FFD and 75 kVp and 10 mAs at 100 cm FFD for the DR system. For lumbar spine PA, the lowest ESAK values required to obtain a diagnostically acceptable image were 0.80 mGy for the DR system and 1.20 mGy for the CR system. Similarly, for the lumbar spine LAT projection, the lowest ESAK values were 0.62 mGy for DR and 0.76 mGy for CR. At standard kVp and mAs values, image quality did not vary significantly between the CR and DR systems, but at higher kVp and mAs values, the DR images were of better quality than the CR images. In addition, the lower limit of entrance skin dose consistent with diagnostically acceptable DR images was 40% lower than that for CR images.
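
A hedged sketch of an ESAK estimate from tube output using the inverse-square law; the output value and phantom thickness are illustrative assumptions, not the study's calibration data.

```python
# Entrance surface air kerma (ESAK), hedged estimate:
# ESAK ~= Y_100 * mAs * (100 / FSD)^2, where Y_100 is the tube output in
# mGy/mAs measured at 100 cm and FSD is the focus-to-surface distance in cm.
def esak(output_100cm_mGy_per_mAs, mAs, ffd_cm, phantom_thickness_cm):
    fsd = ffd_cm - phantom_thickness_cm   # tube focus to phantom entrance surface
    return output_100cm_mGy_per_mAs * mAs * (100.0 / fsd) ** 2

# Lumbar spine PA on the DR system at 80 kVp, 15 mAs, 100 cm FFD; the
# output (0.045 mGy/mAs) and 20 cm phantom thickness are assumptions.
print(f"ESAK ~= {esak(0.045, 15, 100, 20):.2f} mGy")
```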

Keywords: image quality, dosimetry, radiation protection, optimization, digital radiography, computed radiography

Procedia PDF Downloads 44
15566 Numerical Analysis of Stainless Steel Beam-to-Column Joints with Bolted Flush End Plates

Authors: Takwiir Tahriim Khan, Tausif Khalid, Mohammad Redwan Ahamed, Md Soebur Rahman

Abstract:

The connection between members at joints has a significant impact on the safe and cost-effective design of steel structures. Generally, the end plate is welded to the end of the beam, and the column is bolted to the end plate; the moment is thus transferred at the interface, which is a critical segment of the connection. 3D Finite Element Models (FEM) were developed using ABAQUS 2017 to predict the yield capacity of end plate connections. The parameters used in this study are the depth, width, and thickness of the end plate, the dimensions of the bolts, and the sectional and material properties of the beams and columns. The influence of the width, depth, and thickness of the end plate on yield capacity was investigated through parametric studies. The results showed that increasing the plate thickness from 0.3 inch to 0.8 inch in increments of 0.1 inch increased the yield capacity by an average of 2.85% per increment; decreasing the end plate depth from 13 inches to 11 inches increased the yield capacity by 25.4%; and decreasing the end plate width from 6.5 inches to 5.75 inches increased the yield capacity by 35.4%. Variation in yield capacity was also found when changing the beam and column sections. The numerical results showed good agreement with published experimental literature, with an average variation in yield capacity of less than 8.3%. The study thus allows for a more effective combination of beam, column, and end plate dimensions.

Keywords: steel beam-column joints, finite element analysis, yield moment capacity, parametric study, ABAQUS, bolted joints, flush end plates, moment vs rotation curves

Procedia PDF Downloads 103
15565 A Case Study on the Drivers of Household Water Consumption for Different Socio-Economic Classes in Selected Communities of Metro Manila, Philippines

Authors: Maria Anjelica P. Ancheta, Roberto S. Soriano, Erickson L. Llaguno

Abstract:

The main purpose of this study is to examine whether there is a significant relationship between socio-economic class and household water demand, by determining or verifying the factors governing the water consumption patterns of households sampled from different socio-economic classes in Metro Manila, the national capital region of the Philippines. This study is also an opportunity to augment the scarce local academic literature, given the very few publications on urban household water demand after 1999. A rapid survey of average monthly water consumption and household water usage habits was conducted in over 600 Metro Manila households. The questions in the rapid survey were based on an extensive review of the literature on urban household water demand. The sample households were divided into socio-economic classes A-B and C-D. Cluster analysis, dummy coding, and outlier tests were performed to prepare the data for regression analysis. Subsequently, backward stepwise regression analysis was used to determine statistical models describing the determinants of water consumption. The key finding of this study is that the socio-economic class of a household in Metro Manila is a significant factor in water consumption. A-B households consume more water than C-D households, with mean monthly water consumption of 36.75 m³ for A-B and 18.92 m³ for C-D households. The most significant proxy factors of socio-economic class related to household water consumption were examined in order to suggest improvements in policy formulation and household water demand management.
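
A hedged sketch of backward stepwise elimination of the kind described above, on a synthetic stand-in for the survey data; the variables and coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Backward stepwise elimination: drop the least significant predictor
# until all remaining p-values fall below 0.05.
rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "household_size": rng.integers(1, 8, n),
    "class_ab": rng.integers(0, 2, n),       # dummy-coded class A-B
    "bathrooms": rng.integers(1, 4, n),
    "garden": rng.integers(0, 2, n),         # deliberately weak predictor
})
df["monthly_m3"] = (4 + 2.5*df.household_size + 9*df.class_ab
                    + 3*df.bathrooms + rng.normal(0, 4, n))

predictors = list(df.columns[:-1])
while True:
    model = sm.OLS(df["monthly_m3"], sm.add_constant(df[predictors])).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    predictors.remove(pvals.idxmax())        # eliminate the weakest predictor
print(model.summary().tables[1])
```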

Keywords: household water uses, socio-economic classes, urban planning, urban water demand management

Procedia PDF Downloads 286
15564 In vitro Determination of Carbonic Anhydrase Inhibition of the Flowers of Vanda Orchid, Vanda Tessellata Roxb. (1795) by Modified Colorimetric Maren T.H. (1960) Method

Authors: John Carlo Combista, Jimbert Tan

Abstract:

The orchid Vanda tessellata was chosen by the researchers because the family Orchidaceae contains constituents such as alkaloids, flavonoids, and glycosides that might inhibit the carbonic anhydrase enzyme. This study aimed to determine the in vitro inhibition of carbonic anhydrase by Vanda tessellata flower extract. Using the modified colorimetric Maren T.H. (1960) method, the time in seconds for each test solution to change color during CO2 hydration was recorded. Two extraction solvents were used: semi-polar 95% ethanol and non-polar dichloromethane. The percent inhibition of carbonic anhydrase was determined for test solutions at different concentrations of the ethanol extract (1%, 25%, and 50%) and the dichloromethane extract (1% and 10%). Results showed that the ethanol-based extract of Vanda tessellata exhibited an inhibitory effect at several concentrations, while the dichloromethane-based extract showed no inhibitory effect on carbonic anhydrase activity. For the ethanol extract, the highest activity was observed at 50%, followed by 25%, with color changes from red to yellow in average times of 13.11 seconds and 11.57 seconds, respectively; the 1% concentration, with an average time of 7.56 seconds, did not exhibit an effect. The researchers recommend isolating the specific active constituents of Vanda tessellata responsible for the inhibitory effect on the carbonic anhydrase enzyme. It is also recommended to test different blood types to observe whether they react differently to carbonic anhydrase inhibition.
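
Timing assays of this type are usually reduced to enzyme activity units and percent inhibition. The sketch below assumes the common Wilbur-Anderson-style reduction, in which activity is proportional to (T0 - T)/T for uncatalyzed endpoint time T0 and catalyzed time T; whether the authors' modification of the Maren method uses exactly this relation is an assumption, and the timings shown are illustrative only, not the study's raw data.

```python
def enzyme_units(t_uncatalyzed, t_catalyzed):
    """Wilbur-Anderson-style activity: the catalyzed reaction reaches the
    color-change endpoint faster, so shorter times mean more activity.
    NOTE: it is an assumption that the authors' modified Maren method
    reduces its timings this way."""
    return (t_uncatalyzed - t_catalyzed) / t_catalyzed

def percent_inhibition(t0, t_enzyme, t_enzyme_plus_extract):
    """Percent drop in enzyme activity caused by the extract."""
    control = enzyme_units(t0, t_enzyme)
    treated = enzyme_units(t0, t_enzyme_plus_extract)
    return 100.0 * (1.0 - treated / control)

# Illustrative timings only (seconds):
print(percent_inhibition(t0=60.0, t_enzyme=10.0, t_enzyme_plus_extract=20.0))
# -> 60.0, i.e. the extract removed 60% of the enzyme's activity
```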

Keywords: carbonic anhydrase, inhibition, modified colorimetric Maren TH method, Vanda orchid

Procedia PDF Downloads 296
15563 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research

Authors: Edvard P. G. Bruun

Abstract:

One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is to use a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that researchers can use to improve the accuracy and remove the experimental bias in the calculation of the Mohr’s circle by using four rather than three independent strain measurements. Obtaining high-quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties for characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), the localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the gathered data, the typical experimental setup for a large-scale shell specimen instruments four independent directions (X, Y, H, and V). The question then becomes: which three should be used? The most common approach is simply to discard one of the measurements. The problem is that this can produce drastically different answers depending on which three strain values are chosen. To overcome this experimental bias, and to avoid discarding valuable data, a more rigorous approach is to make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr’s circle of 'best fit', which optimizes the circle using all four independent strain values. The four-strain optimized Mohr’s circle approach has been used to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application to two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data on which they are based; ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment is therefore of utmost importance to the structural research community.
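
The 'best fit' idea can be made concrete with a small numerical sketch. The strain transformation ε(θ) = εx·cos²θ + εy·sin²θ + γxy·sinθ·cosθ is linear in the unknowns (εx, εy, γxy), so four readings at known gauge angles form an overdetermined linear system that a least-squares solve fits using all four measurements simultaneously. This is one plausible realization of the approach, not the paper's own derivation; the gauge angles assumed for the X, Y, H, and V directions (0°, 90°, 45°, 135°) and the readings are hypothetical.

```python
import numpy as np

def best_fit_mohr_circle(strains, angles_deg):
    """Least-squares fit of the plane-strain state (eps_x, eps_y, gamma_xy)
    from N >= 3 normal-strain readings at known gauge angles, then the
    Mohr's circle parameters of the fitted state."""
    theta = np.radians(np.asarray(angles_deg, dtype=float))
    # Design matrix: columns multiply eps_x, eps_y, gamma_xy in
    # eps(theta) = eps_x*cos^2(t) + eps_y*sin^2(t) + gxy*sin(t)cos(t)
    A = np.column_stack([
        0.5 * (1 + np.cos(2 * theta)),   # cos^2(theta)
        0.5 * (1 - np.cos(2 * theta)),   # sin^2(theta)
        0.5 * np.sin(2 * theta),         # sin(theta)cos(theta)
    ])
    (eps_x, eps_y, gxy), *_ = np.linalg.lstsq(
        A, np.asarray(strains, dtype=float), rcond=None)

    center = 0.5 * (eps_x + eps_y)
    radius = np.hypot(0.5 * (eps_x - eps_y), 0.5 * gxy)
    eps_1, eps_2 = center + radius, center - radius              # principal strains
    theta_p = 0.5 * np.degrees(np.arctan2(gxy, eps_x - eps_y))   # principal angle
    return eps_1, eps_2, theta_p

# Hypothetical readings (millistrain) in the X, Y, H, V directions,
# here assumed to lie at 0, 90, 45, and 135 degrees respectively.
print(best_fit_mohr_circle([1.20, -0.35, 0.95, -0.10], [0, 90, 45, 135]))
```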

Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research

Procedia PDF Downloads 231
15562 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. At the same time, large gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of soccer matches around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in a platform called “Analyst Masters.” First, we survey the various sources of information available for soccer analysis worldwide, which enabled us to record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. Match statistics and odds, both pre-match and in-play, are encoded in image format versus time, including the halftime period. Local Binary Patterns (LBP) are then employed to extract features from these images. Our analyses reveal interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” states that if Team A scores a goal and maintains the lead for at least 8 minutes in a stable match, the match will end in their favor. We could also make accurate pre-match predictions of whether a match would produce fewer or more than 2.5 goals. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks properties such as bettors’ and punters’ behavior and the match’s statistical data before issuing a prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and reveals the inefficiency of the betting market: top soccer tipsters achieve only 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our database of more than 240,000 soccer matches collected since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
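
The LBP feature-extraction step can be sketched with scikit-image and scikit-learn as follows. This is an illustrative reconstruction of a pipeline of this kind, not the Analyst Masters code: the rendering of match statistics into a 2-D image, the LBP parameters, and the stability labels are all assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_features(stat_image, n_points=8, radius=1):
    """Uniform-LBP histogram of an image encoding one match's
    statistics/odds versus time (rows = quantities, cols = minutes)."""
    lbp = local_binary_pattern(stat_image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical data: one 2-D "stats image" per match, plus a stability label.
rng = np.random.default_rng(0)
images = rng.random((200, 32, 90))            # 200 matches, 32 stats x 90 minutes
labels = rng.integers(0, 2, size=200)         # 1 = stable, 0 = unstable
X = np.array([lbp_features(img) for img in images])

# Gradient-boosted trees over the LBP histograms, as in the described pipeline.
clf = GradientBoostingClassifier().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```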

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 231