Search results for: Marascuilo procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2226

1836 A Heteroskedasticity Robust Test for Contemporaneous Correlation in Dynamic Panel Data Models

Authors: Andreea Halunga, Chris D. Orme, Takashi Yamagata

Abstract:

This paper proposes a heteroskedasticity-robust Breusch-Pagan test of the null hypothesis of zero cross-section (or contemporaneous) correlation in linear panel-data models, without necessarily assuming independence of the cross-sections. The procedure allows for fixed, strictly exogenous and/or lagged dependent regressor variables, as well as quite general forms of both non-normality and heteroskedasticity in the error distribution. The asymptotic validity of the test procedure is predicated on the number of time series observations, T, being large relative to the number of cross-section units, N, in that: (i) either N is fixed as T→∞; or, (ii) N²/T→0, as both T and N diverge, jointly, to infinity. Given this, it is not expected that asymptotic theory would provide an adequate guide to finite sample performance when T/N is "small". Because of this, we also propose, and establish the asymptotic validity of, a number of wild bootstrap schemes designed to provide improved inference when T/N is small. Across a variety of experimental designs, a Monte Carlo study suggests that the predictions from asymptotic theory do, in fact, provide a good guide to the finite sample behaviour of the test when T is large relative to N. However, when T and N are of similar orders of magnitude, discrepancies between the nominal and empirical significance levels occur, as predicted by the first-order asymptotic analysis. On the other hand, for all the experimental designs, the proposed wild bootstrap approximations do improve agreement between nominal and empirical significance levels when T/N is small, with a recursive-design wild bootstrap scheme performing best, in general, and providing quite close agreement between the nominal and empirical significance levels of the test even when T and N are of similar size.
Moreover, in comparison with the wild bootstrap "version" of the original Breusch-Pagan test, our experiments indicate that the corresponding version of the heteroskedasticity-robust Breusch-Pagan test appears reliable. As an illustration, the proposed tests are applied to a dynamic growth model for a panel of 20 OECD countries.
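As a rough illustration of the classical statistic being robustified (a sketch of the textbook Breusch-Pagan LM test, not the authors' heteroskedasticity-robust or wild bootstrap versions; the residual matrix here is simulated):

```python
import numpy as np

def breusch_pagan_lm(resid):
    """Classical Breusch-Pagan LM statistic for the null of zero
    cross-section correlation: LM = T * sum_{i<j} rho_ij^2, which is
    asymptotically chi-squared with N(N-1)/2 degrees of freedom.

    resid : (T, N) array of regression residuals, one column per unit.
    """
    T, N = resid.shape
    rho = np.corrcoef(resid, rowvar=False)   # N x N residual correlation matrix
    iu = np.triu_indices(N, k=1)             # all pairs i < j
    lm = T * float(np.sum(rho[iu] ** 2))
    df = N * (N - 1) // 2
    return lm, df

# simulated residuals for T = 200 periods and N = 5 cross-section units
rng = np.random.default_rng(0)
lm, df = breusch_pagan_lm(rng.standard_normal((200, 5)))
print(df)   # 10 pairs for N = 5
```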

Keywords: cross-section correlation, time-series heteroskedasticity, dynamic panel data, heteroskedasticity robust Breusch-Pagan test

Procedia PDF Downloads 433
1835 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of dielectric properties in a binary mixture of liquids is very useful for understanding the liquid structure, molecular interaction, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperativity and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedure for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, and normalization technique

Procedia PDF Downloads 207
1834 Motion Planning of SCARA Robots for Trajectory Tracking

Authors: Giovanni Incerti

Abstract:

The paper presents a method for simple and immediate motion planning of a SCARA robot whose end-effector has to move along a given trajectory; the calculation procedure requires the user to define the trajectory to be followed, in analytical form or by points, and to assign the curvilinear abscissa as a function of time. On the basis of the geometrical characteristics of the robot, a specifically developed program determines the motion laws of the actuators that enable the robot to generate the required movement; this software can be used in all industrial applications in which a SCARA robot has to be frequently reprogrammed to generate various types of trajectories with different motion times.
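The joint angles for the two revolute axes of a SCARA arm can be recovered from a trajectory point by standard planar inverse kinematics; a minimal sketch (link lengths and the target point are illustrative, not tied to the paper's software):

```python
import math

def scara_ik(x, y, l1, l2, elbow=+1):
    """Planar inverse kinematics for the two revolute joints of a SCARA arm.

    Given an end-effector point (x, y) on the trajectory and link lengths
    l1, l2, return joint angles (theta1, theta2) in radians; `elbow`
    selects the elbow-up/elbow-down branch.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    c2 = max(-1.0, min(1.0, c2))                     # clamp numerical noise
    theta2 = elbow * math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# forward-kinematics check: the computed angles reproduce the target point
t1, t2 = scara_ik(0.3, 0.4, l1=0.35, l2=0.25)
fx = 0.35 * math.cos(t1) + 0.25 * math.cos(t1 + t2)
fy = 0.35 * math.sin(t1) + 0.25 * math.sin(t1 + t2)
```

Sampling the trajectory at the curvilinear abscissa s(t) and applying this map point by point yields the actuator motion laws.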

Keywords: motion planning, SCARA robot, trajectory tracking, analytical form

Procedia PDF Downloads 319
1833 The Constraint of Machine Breakdown after a Match up Scheduling of Paper Manufacturing Industry

Authors: John M. Ikome

Abstract:

In the process of manufacturing, a machine breakdown usually forces a modified flow shop out of the prescribed state. The strategy considered here reschedules part of the initial schedule so that it matches up with the pre-schedule at some point, with the objective of creating a schedule that is consistent with other production planning decisions such as material flow, production, and suppliers, by utilizing a critical decision-making concept. We propose a rescheduling strategy and a match-up point determination procedure based on an advanced feedback control mechanism to increase both schedule quality and stability. These approaches are compared with alternative rescheduling methods under different experimental settings.

Keywords: scheduling, heuristics, branch, integrated

Procedia PDF Downloads 408
1832 Feature Extraction and Classification Based on the Bayes Test for Minimum Error

Authors: Nasar Aldian Ambark Shashoa

Abstract:

Classification with dimension reduction based on a Bayesian approach is proposed in this paper. The first step is to generate samples of the fault-free mode class and the faulty mode class. Second, in order to obtain good classification performance, a selection of important features is carried out with the discrete Karhunen-Loève expansion. Next, the Bayes test for minimum error is used to classify the classes. Finally, results for simulated data demonstrate the capabilities of the proposed procedure.
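A minimal sketch of the two core steps (Karhunen-Loève feature extraction followed by the Gaussian Bayes minimum-error rule); the data, dimensions, and class parameters below are hypothetical:

```python
import numpy as np

def kl_features(X, k):
    """Discrete Karhunen-Loeve expansion: project centred data onto the
    top-k eigenvectors of the sample covariance (equivalently, PCA)."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return Xc @ vecs[:, order]

def bayes_min_error(x, means, covs, priors):
    """Bayes test for minimum error: choose the class with the largest
    posterior under Gaussian class-conditional densities."""
    scores = []
    for m, S, p in zip(means, covs, priors):
        d = x - m
        logdet = np.linalg.slogdet(S)[1]
        maha = d @ np.linalg.solve(S, d)      # Mahalanobis distance term
        scores.append(np.log(p) - 0.5 * (logdet + maha))
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
Z = kl_features(rng.standard_normal((50, 5)), k=2)   # 50 samples -> 2 features

# two well-separated Gaussian classes: the rule picks the nearer one
means = [np.zeros(2), np.full(2, 5.0)]
covs = [np.eye(2), np.eye(2)]
cls = bayes_min_error(np.array([4.8, 5.1]), means, covs, [0.5, 0.5])
```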

Keywords: analytical redundancy, fault detection, feature extraction, Bayesian approach

Procedia PDF Downloads 528
1831 Least Support Orthogonal Matching Pursuit (LS-OMP) Recovery Method for Invisible Watermarking Image

Authors: Israa Sh. Tawfic, Sema Koc Kayhan

Abstract:

In this paper, first, we propose the least support orthogonal matching pursuit (LS-OMP) algorithm to improve the performance of the OMP (orthogonal matching pursuit) algorithm. LS-OMP adaptively chooses the optimum L (least part of support) at each iteration. This modification helps to reduce the computational complexity significantly and performs better than the OMP algorithm. Second, we give the procedure for invisible image watermarking in the presence of compressive sampling. The image reconstruction based on a set of watermarked measurements is performed using LS-OMP.
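For orientation, the baseline OMP algorithm (which LS-OMP refines by adaptively choosing the least part of the support) can be sketched as follows; matrix sizes and the sparse test vector are illustrative:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Plain orthogonal matching pursuit: greedily pick the column of A
    most correlated with the residual, then re-fit the selected support
    by least squares and orthogonalise the residual."""
    m, n = A.shape
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))        # best-matching atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                # orthogonalised residual
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    x[support] = x_s
    return x

# recover a 2-sparse vector from 20 random measurements of a length-40 signal
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)                     # unit-norm columns
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```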

Keywords: compressed sensing, orthogonal matching pursuit, restricted isometry property, signal reconstruction, least support orthogonal matching pursuit, watermark

Procedia PDF Downloads 339
1830 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. The median survival after diagnosis is 3-11 months without surgical treatment and 13-20 months with surgical treatment, depending on the disease stage; 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid establishment of tumor resectability is, therefore, the most important aim for patients with pancreatic cancer. The aim of this work is to evaluate the contribution and define the role of ¹⁸F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females, median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, as part of the standard preoperative staging, in addition to standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy), a hybrid ¹⁸F-FDG PET/CT was performed. All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to the TNM classification. Limitations of resectability were local advancement (T4) and presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed, and of these patients, 72 were not indicated for a radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2%, specificity was 90.5%. A false negative finding occurred in 12 patients and a false positive finding in 4 cases; positive predictive value (PPV) 97.2%, negative predictive value (NPV) 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, NPV 51.9%.
In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity was 97.8%, PPV 96.9%, NPV 87.0%. PET/CT found distant metastases in 12 patients which were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in preoperative staging of pancreatic cancer. It improves the selection of patients for radical surgical procedures who can benefit from them and decreases the number of incorrectly indicated operations.
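The reported accuracy figures are consistent with confusion-matrix counts that can be inferred from the text (about 141 true positives, 12 false negatives, 4 false positives, and 38 true negatives for primary-lesion detection); a quick check:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic-accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# counts inferred from the abstract's primary-lesion results
m = diagnostic_metrics(tp=141, fn=12, fp=4, tn=38)
print(round(m["sensitivity"] * 100, 1))   # 92.2
```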

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 249
1829 Endoscopies After a 5-Year Follow-Up on Patients with Central Venous Access Receiving Home Parenteral Nutrition (HPN) with Prophylaxis at a Tertiary Healthcare Facility

Authors: Michelle Themalil, Celia Bueno, Rulla Al- Araji

Abstract:

Objective and Study: There are no established guidelines for antibiotic prophylaxis in children with central venous catheters (CVCs) on home parenteral nutrition (HPN), leading to varying practices across UK Centres. We hypothesize that children with intestinal failure are at increased risk for bacteraemia due to altered anatomy, dysmotility, inflammation, biofilm formation in long-term CVCs, and the use of central lines during procedures. Given the bacteraemia rates of up to 8% in upper and 25% in lower endoscopy for adults without central lines, we argue that prophylactic antibiotics are reasonable, given the increased risks faced by this high-risk group of children. Methods: We conducted a five-year review of patients with central venous access receiving home parenteral nutrition (HPN) who underwent endoscopies with antibiotic prophylaxis at our center (tertiary). We documented and analyzed post-procedure infections and their associated risk factors. Results: A total of 15 patients on HPN underwent 29 endoscopic procedures, including 4 upper, 9 combined upper and lower, and 16 combined upper, lower, and ileoscopy. Confirmed infection rates remained at 0% up to 28 days post-procedure. The agreed-upon prophylaxis regimen was implemented, with ciprofloxacin and metronidazole administered as the primary antibiotics. Notably, only 51.7% of patients received a peripheral cannula despite recommendations to avoid central line use during anesthesia, and 20.6% had small intestinal bacterial overgrowth. Conclusions: This study is the first to investigate post-endoscopy infection rates in pediatric patients on HPN. Despite a small sample size, we observed a 0% infection rate, significantly lower than reported rates in adults. These findings suggest that further research is warranted to explore the implications of antibiotic prophylaxis in this unique patient cohort and to establish guidelines that may enhance patient safety during endoscopic procedures.

Keywords: post-endoscopy infections, central venous access, home parenteral nutrition, intestinal failure

Procedia PDF Downloads 14
1828 Bi-Functional Natural Carboxylic Acid Catalysts for the Synthesis of Diethyl α-Aminophosphonates in Aqueous Media

Authors: Hellal Abdelkader, Chafaa Salah, Boudjemaa Fouzia

Abstract:

A new, convenient, and high-yielding procedure for the preparation of diethyl α-aminophosphonates in water via the Kabachnik-Fields reaction, by one-pot reaction of aromatic aldehydes, ortho-aminophenols, and dialkyl phosphites in the presence of a low catalytic amount of citric, malic, tartaric, or oxalic acid as a natural, bi-functional, and highly stable catalyst, is described. The obtained products were characterized by elemental analyses, molar conductance, magnetic susceptibility, FTIR, UV-Vis spectral data, and ¹³C, ¹H, and ³¹P NMR analyses.

Keywords: α-aminophosphonates, aminophenols, natural acids, aqueous media, Kabachnik-Fields reaction

Procedia PDF Downloads 336
1827 Concept for Planning Sustainable Factories

Authors: T. Mersmann, P. Nyhuis

Abstract:

In the current economic climate, for many businesses it is generally no longer sufficient to pursue exclusively economic interests. Instead, integrating ecological and social goals into the corporate targets is becoming ever more important. However, the holistic integration of these new goals is missing from current factory planning approaches. This article describes the conceptual framework for a planning methodology for sustainable factories. To this end, the description of the key areas for action is followed by a description of the principal components for the systematization of sustainability for factories and their stakeholders. Finally, a conceptual framework is presented which integrates the components formulated into an established factory planning procedure.

Keywords: factory planning, stakeholder, systematization, sustainability

Procedia PDF Downloads 455
1826 Design of Compact UWB Multilayered Microstrip Filter with Wide Stopband

Authors: N. Azadi-Tinat, H. Oraizi

Abstract:

The design of a compact UWB multilayered microstrip filter with E-shape resonators is presented, which provides a wide stopband up to 20 GHz and arbitrary impedance matching. The design procedure is developed based on the method of least squares and the theory of N-coupled transmission lines. The dimensions of the designed filter are about 11 mm × 11 mm, and the three E-shape resonators are placed among four dielectric layers. The average insertion loss in the passband is less than 1 dB, and in the stopband it is about 30 dB up to 20 GHz. Its group delay in the UWB region is about 0.5 ns. The performance of the optimized filter design agrees well with microwave simulation software.

Keywords: method of least square, multilayer microstrip filter, n-coupled transmission lines, ultra-wideband

Procedia PDF Downloads 393
1825 Study on Clarification of the Core Technology in a Monozukuri Company

Authors: Nishiyama Toshiaki, Tadayuki Kyountani, Nguyen Huu Phuc, Shigeyuki Haruyama, Oke Oktavianty

Abstract:

It is important to clarify a company's core technology in the product development process in order to strengthen its ability to provide technology that meets customer requirements. The QFD method is adopted to clarify the core technology by identifying the high-level element technologies that are related to the voice of the customer and offer the most delightful features for the customer. AHP is used to determine the importance of the evaluating factors. A case study was conducted using this approach in a Japanese monozukuri company (i.e., a manufacturing company) to clarify its core technology based on customer requirements.
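A minimal sketch of the AHP step (priority weights from the principal eigenvector of a pairwise comparison matrix); the 3x3 comparison values below are hypothetical:

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: principal eigenvector of the pairwise
    comparison matrix, normalised to sum to one."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(vecs[:, np.argmax(vals.real)].real)   # principal eigenvector
    return w / w.sum()

# hypothetical judgements: factor 1 is 3x as important as factor 2,
# 5x as important as factor 3, and so on (reciprocal matrix)
P = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(P)
```

The resulting weights then scale the QFD relationship scores when ranking element technologies.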

Keywords: core technology, QFD, voices of customer, analysis procedure

Procedia PDF Downloads 387
1824 GIS Technology for Environmentally Polluted Sites with an Innovative Process to Improve the Quality of and Assess the Environmental Impact Assessment (EIA)

Authors: Hamad Almebayedh, Chuxia Lin, Yu Wang

Abstract:

The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world; quality soil characterization leads to a quality EIA, illuminating the contamination level and extent and revealing the unknowns needed to remediate, quantify, contain, minimize, and eliminate the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization, which need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil in particular, and enhances soil characterization information in general, which is reflected in improving the information provided in EIAs for specific sites. The foremost aims of this research paper are to present a novel 3D mapping technology to characterize, quality- and cost-effectively, and estimate the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure ("assessment measures") for EIA quality and assessment. The contaminated-site field investigation used the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which has been entered into a developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies.
We conclude that, to minimize possible adverse environmental impacts on the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.
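As a simple illustration of the spatial interpolation step mentioned above, inverse-distance weighting over point measurements (the sample points and concentration values below are hypothetical, not the Kuwait data):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation, a common spatial
    interpolation choice for mapping point measurements of contamination."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a sample
            out.append(float(z_known[np.argmin(d)]))
            continue
        w = 1.0 / d ** power               # closer samples weigh more
        out.append(float(w @ z_known / w.sum()))
    return np.array(out)

# hypothetical hydrocarbon concentrations (mg/kg) at four sample points
pts = [(0, 0), (0, 10), (10, 0), (10, 10)]
z = [120.0, 80.0, 80.0, 40.0]
est = idw(pts, z, [(5, 5)])                # centre point: equidistant average
```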

Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site

Procedia PDF Downloads 88
1823 Noise Reduction by Energising the Boundary Layer

Authors: Kiran P. Kumar, H. M. Nayana, R. Rakshitha, S. Sushmitha

Abstract:

Aircraft noise is a problem of great concern in the aviation industry, and it must be reduced in order to be environmentally friendly. Airframe noise is caused by the early separation of the boundary layer over the aircraft body, so the boundary layer separation over the airframe and engine nacelle has to be delayed. By energising the boundary layer, converting the laminar layer into a turbulent one, early separation can be prevented, which leads to noise reduction. This method tends to reduce the noise of the aircraft and hence can prove more efficient and environmentally friendly than present aircraft.

Keywords: airframe, boundary layer, noise, reduction

Procedia PDF Downloads 484
1822 An Investigation about the Rate of Evaporation from the Water Surface and LNG Pool

Authors: Farokh Alipour, Ali Falavand, Neda Beit Saeid

Abstract:

Calculating the effects of accidental releases of flammable materials such as LNG requires a suitable consequence model. This study aims to provide planning advice for developments in the vicinity of LNG sites and other sites handling flammable materials. In this paper, an applicable algorithm that is able to model pool fires on water is presented and applied to estimate the pool fire damage zone. This procedure can also be used to model pool fires on land and could be helpful in consequence modeling and domino-effect zone measurements for flammable materials, which are needed in site selection and plant layout.

Keywords: LNG, pool fire, spill, radiation

Procedia PDF Downloads 404
1821 Multi-Objective Optimization of Combined System Reliability and Redundancy Allocation Problem

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents an established 3^n enumeration procedure for mixed-integer optimization problems, applied to solving the multi-objective reliability and redundancy allocation problem subject to design constraints. The formulated problem is to find the optimum level of unit reliability and the number of units for each subsystem. A number of illustrative examples are provided and compared to demonstrate the applicability and superiority of the proposed method.
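Under the usual series-parallel reliability model, such an enumeration can be sketched as follows (the 1-3 units-per-subsystem range mirrors the 3^n flavour; the reliabilities, costs, and budget below are illustrative):

```python
from itertools import product

def series_parallel_reliability(n, r):
    """System reliability of subsystems in series, each with n[i]
    identical parallel units of reliability r[i]."""
    R = 1.0
    for ni, ri in zip(n, r):
        R *= 1.0 - (1.0 - ri) ** ni        # parallel block survives if any unit does
    return R

def enumerate_best(r, cost, budget, max_units=3):
    """Brute-force enumeration of redundancy levels (1..max_units per
    subsystem), maximising reliability under a total cost constraint."""
    best = None
    for n in product(range(1, max_units + 1), repeat=len(r)):
        c = sum(ni * ci for ni, ci in zip(n, cost))
        if c <= budget:
            R = series_parallel_reliability(n, r)
            if best is None or R > best[1]:
                best = (n, R)
    return best

alloc, R = enumerate_best(r=[0.9, 0.8], cost=[2.0, 3.0], budget=10.0)
```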

Keywords: integer programming, mixed integer programming, multi-objective optimization, reliability redundancy allocation

Procedia PDF Downloads 172
1820 MIMO PID Controller of a Power Plant Boiler–Turbine Unit

Authors: N. Ben-Mahmoud, M. Elfandi, A. Shallof

Abstract:

This paper presents a methodology for designing multivariable PID controllers for multi-input multi-output systems. The proposed control strategy, which is centralized, combines PID controllers; the proportional gains act as SISO tuning parameters that modify the behavior of the loops almost independently. The design procedure consists of three steps: first, an ideal decoupler including integral action is determined; second, the decoupler is approximated with PID controllers; third, the proportional gains are tuned to achieve the specified performance. The proposed method is applied to representative processes.
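The first design step can be illustrated at steady state with a hypothetical 2x2 gain matrix (the numbers are not from the paper): a static decoupler is simply the inverse of the plant's DC gain, after which each loop sees approximately only itself.

```python
import numpy as np

# hypothetical 2x2 steady-state gain matrix of a boiler-turbine unit
# (illustrative numbers, not taken from the paper)
G0 = np.array([[2.0, 0.5],
               [0.4, 1.5]])

# ideal static decoupler: invert the steady-state interaction
D = np.linalg.inv(G0)

# the compensated steady-state gain is the identity, so the SISO
# proportional gains can then be tuned almost independently
print(np.round(G0 @ D, 10))
```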

Keywords: boiler turbine, MIMO, PID controller, control by decoupling, anti wind-up techniques

Procedia PDF Downloads 328
1819 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling

Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud

Abstract:

Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of previous studies was low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups by a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects received the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance in the advancement of DN (p = 0.128).
G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks only (both p < 0.05). There was no significant difference between G2 and G3 at 8 weeks (both p > 0.05). Conclusion: The method with the US monitor turned off during the application of DN is credible for blinding the participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].

Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy

Procedia PDF Downloads 120
1818 The Mapping of Pastoral Area as a Basis of Ecological for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia

Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim

Abstract:

This study was conducted to identify and map pasture as an ecological base for beef cattle. A survey was carried out from April to June 2016 in Suppa, Mattirobulu, in the district of Pinrang, South Sulawesi province. The mapping of the grazing area was conducted in several stages: inputting and tracking of data points into Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the tracking line visualized by satellite, with various records at certain points; input of the point and tracking data into the ArcMap application (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) covering the grazing areas; creation of a contour map (5 m interval); and production of slope and land cover maps. Analysis of land cover, particularly the state of the vegetation, was done through the NDVI (Normalized Difference Vegetation Index) identification procedure, making use of Landsat-8 imagery. The results showed that the topography of the grazing areas consists of hills and some sloping and flat surfaces, with elevation varying from 74 to 145 m above sea level (asl), while the requirement for growing superior grasses and legumes is an altitude of up to 143-159 m asl. Slope varied between 0 and >40% and was dominated by slopes of 0-15%, in line with the recommended maximum pasture slope of 15%. The range of NDVI values from the image analysis of the pasture was between 0.1 and 0.27. The vegetation cover of the pasture land fell into the low vegetation-density category; 70% of the land was available for cattle grazing, while the remaining approximately 30% consisted of groves and forest, including water plants, providing shelter for the cattle during heat and a supply of drinking water. There are seven graminae species and five legume species that are dominant in the region.
Proportionally, graminae dominated at 75.6% and legumes at 22.1%, while the remaining 2.3% were other trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; in addition, there were six types of ground plants not classified as forage.
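The NDVI computation used in the land cover analysis is a simple band ratio; a sketch with hypothetical reflectance values (for Landsat-8, band 5 is the near-infrared band and band 4 the red band):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0

# dense vegetation reflects strongly in NIR and weakly in red,
# so the first (vegetated) pixel scores much higher than the second
v = ndvi([0.45, 0.20], [0.05, 0.15])
```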

Keywords: pastoral, ecology, mapping, beef cattle

Procedia PDF Downloads 355
1817 To Present and Explain Effective Methods in Teaching Social Science

Authors: Sulmaz Mozaffari, Zahra Mozaffari, Saman Mozaffari

Abstract:

Training is a continuous and orderly process whose purpose is to develop all aspects of the students, impart human knowledge, instil social norms, and help them grow their talents. Social science, as an educational and training science, is at the same time very important for schools and universities. Unfortunately, the method mostly used for teaching and training at present is the teacher-student (lecture) method, and because of its ease, other methods are ignored. This research considers the most efficient methods in social science teaching and analyses them. The results show that the best methods are those in which the students are active participants during the teaching procedure.

Keywords: social science, methodology, student base methodology, technology

Procedia PDF Downloads 437
1816 Nullity of t-Tupple Graphs

Authors: Khidir R. Sharaf, Didar A. Ali

Abstract:

The nullity η(G) of a graph G is the multiplicity of zero as an eigenvalue in its spectrum. A zero-sum weighting of a graph G is a real-valued function f from the vertices of G to the set of real numbers such that, for each vertex v of G, the sum of the weights f(w) over all neighbours w of v is zero. A high zero-sum weighting of G is one that uses the maximum number of non-zero independent variables. If G is a graph with an end vertex, and if H is the induced sub-graph of G obtained by deleting this vertex together with the vertex adjacent to it, then η(G) = η(H). In this paper, the high zero-sum weighting technique and the end-vertex procedure are applied to evaluate the nullity of t-tupple and generalized t-tupple graphs, which is derived and determined for some special types of graphs. We also introduce and prove some important results about the t-tupple coalescence, Cartesian, and Kronecker products of nut graphs.
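Numerically, the nullity defined above is just the count of (near-)zero eigenvalues of the adjacency matrix; a small sketch on two standard graphs:

```python
import numpy as np

def nullity(adj, tol=1e-9):
    """Nullity of a graph: multiplicity of the zero eigenvalue of its
    (symmetric) adjacency matrix."""
    vals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return int(np.sum(np.abs(vals) < tol))

# the path P3 has spectrum {-sqrt(2), 0, sqrt(2)}, so nullity 1;
# the 4-cycle C4 has spectrum {-2, 0, 0, 2}, so nullity 2
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(nullity(P3), nullity(C4))   # 1 2
```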

Keywords: graph theory, graph spectra, nullity of graphs, statistic

Procedia PDF Downloads 242
1815 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], a very acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions; for example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and requires high-quality intelligence measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review in [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measuring a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not currently offer ready-made and straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design used to create and validate an original Russian comprehensive computer test measuring intellectual development in school-age children will be presented.

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 80
1814 T3P® -DMSO Mediated One-Pot Tandem Approach for the Synthesis of 3,4-Dihydropyrimidin-2(1H)-Ones/Thiones from Alcohols

Authors: Vinaya Kambappa

Abstract:

Propylphosphonic anhydride (T3P®)-DMSO is used as an efficient and mild reagent for the one-pot synthesis of 3,4-dihydropyrimidin-2(1H)-ones/thiones from aromatic alcohols. The alcohols are oxidized in situ to aldehydes under mild conditions, which in turn undergo a three-component reaction with a β-ketoester and urea/thiourea to afford 3,4-dihydropyrimidin-2(1H)-ones/thiones. To the best of our knowledge, this is the first report of the synthesis of 3,4-dihydropyrimidin-2(1H)-ones/thiones directly from alcohols under mild reaction conditions and in good yield. The easy work-up procedure, low cost, and low toxicity of the reagent are the main advantages of this protocol.

Keywords: β-ketoester, propylphosphonic anhydride, three-component reaction, pyrimidine

Procedia PDF Downloads 149
1813 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis

Authors: Touila Ahmed, Elie Louis, Hamza Gharbi

Abstract:

State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful in a highly diverse domain such as the legal field, in which it is increasingly difficult to build a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis, as well as a weakly supervised procedure that leverages unannotated data to boost our models' performance in general, and their zero-shot performance in particular.
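The weakly supervised procedure is described only at a high level; one common realization is a self-training filter that keeps only high-confidence model predictions on unannotated contracts as pseudo-labelled training examples. The sketch below is a generic illustration under that assumption — the `Prediction` fields, the example clauses, and the 0.9 threshold are all hypothetical, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    context: str   # contract passage the model read
    question: str
    answer: str    # model-generated answer
    score: float   # model confidence in [0, 1]

def select_pseudo_labels(predictions, threshold=0.9):
    """Keep only high-confidence predictions as pseudo-labelled training
    triples (a simple self-training filter; the threshold is a tunable
    assumption, not a value from the paper)."""
    return [(p.context, p.question, p.answer)
            for p in predictions if p.score >= threshold]

preds = [
    Prediction("Clause 4.2 ...", "What is the notice period?", "30 days", 0.96),
    Prediction("Clause 7.1 ...", "Who bears liability?", "the supplier", 0.55),
]
print(len(select_pseudo_labels(preds)))  # → 1: only the confident one survives
```

The surviving triples would then be mixed into the supervised training set for another fine-tuning round, which is the usual way such weak supervision boosts zero-shot performance.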

Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision

Procedia PDF Downloads 196
1812 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Moreover, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. In addition, software for the adjustment of natural accelerograms is now available, leading to higher-quality spectrum-compatibility and a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case-study structures have been analyzed. In a first stage, the capacity curve has been computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements have been adopted, and different direction angles of the lateral forces have been studied. Based on these results, a linear elastic Finite Element Model has been defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses have been performed on the base-isolated structures by applying seven bidirectional seismic events.
The spectrum-compatibility of the bidirectional earthquakes has been studied by considering different combinations of the single components and adjusting the single records: thanks to the proposed procedure, the results show a small dispersion and good agreement with the assumed design values.
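The "different combinations of the single-component spectra" mentioned above can be sketched ordinate by ordinate. The geometric-mean and SRSS rules below are two combinations found in practice, and the 90% lower tolerance and all spectral ordinates are illustrative assumptions — the abstract does not specify which rule or tolerance the codes in question prescribe.

```python
import math

def combine_spectra(sa_x, sa_y, rule="geomean"):
    """Combine two horizontal-component response spectra ordinate by
    ordinate; 'geomean' and 'srss' are two rules used in practice."""
    if rule == "geomean":
        return [math.sqrt(x * y) for x, y in zip(sa_x, sa_y)]
    if rule == "srss":
        return [math.sqrt(x * x + y * y) for x, y in zip(sa_x, sa_y)]
    raise ValueError(rule)

def is_compatible(sa, target, lower_tol=0.90):
    """Spectrum-compatibility check: every combined ordinate must reach at
    least lower_tol of the target (tolerance value is illustrative)."""
    return all(s >= lower_tol * t for s, t in zip(sa, target))

# Illustrative ordinates at a few periods (g units, made-up numbers).
sa_x = [0.80, 0.60, 0.35]
sa_y = [0.70, 0.55, 0.40]
target = [0.72, 0.55, 0.36]
print(is_compatible(combine_spectra(sa_x, sa_y, "geomean"), target))  # → True
```

In a full workflow this check would be repeated over the averaged spectra of all seven bidirectional records, and the single records adjusted until the combined average satisfies it.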

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 292
1811 Evaluating Radiation Dose for Interventional Radiologists Performing Spine Procedures

Authors: Kholood A. Baron

Abstract:

While the number of radiologists specialized in spine interventional procedures is limited in Kuwait, the number of patients demanding these procedures is increasing rapidly. Due to this high demand, the workload of radiologists is increasing, which might raise a radiation exposure concern. During these procedures, the doctor's hands are in very close proximity to the main radiation beam, if not within it. The aim of this study is to measure the radiation dose received by radiologists during several interventional procedures for the spine. Methods: Two doctors carrying different workloads were included: DR1 performed procedures in the morning and afternoon shifts, while DR2 performed procedures in the morning shift only. Comparing the radiation exposure received by each doctor's hand will help assess radiation safety and set up workload regulations for radiologists carrying a heavy schedule of such procedures. The Entrance Skin Dose (ESD) was measured via a TLD (thermoluminescent dosimeter) placed at the right wrist of each radiologist. DR1 covered the morning shift in one hospital (Mubarak Al-Kabeer Hospital) and the afternoon shift in another (Dar Alshifa Hospital); the TLD chip was placed in his gloves during both shifts for a whole week. Since DR2 covered the morning shift only, in Al Razi Hospital, he wore the TLD during the morning shift for a week. It is worth mentioning that DR1 performed 4-5 spine procedures per day in the morning and the same number in the afternoon, while DR2 performed 5-7 procedures per day. This procedure was repeated for 4 consecutive weeks in order to calculate the ESD a hand receives in a month. Results: In general, the radiation doses the hand received in a week ranged from 0.12 to 1.12 mSv.
The ESD values for DR1 for the four consecutive weeks were 1.12, 0.32, 0.83, and 0.22 mSv; for a month (4 weeks) this sums to 2.49 mSv, which extrapolates to 27.39 mSv per year (11 months, since each radiologist has 45 days of leave per year). For DR2, the weekly ESD values were 0.43, 0.74, 0.12, and 0.61 mSv; thus, for a month this equals 1.9 mSv, and for a year 20.9 mSv. These values are below the standard level and well below the annual extremity limit of 500 mSv set by the ICRP (International Commission on Radiological Protection). However, it is worth mentioning that DR1 was a senior consultant and hence needed less fluoroscopy time during each procedure. This is evident from the low ESD values of the second week (0.32 mSv) and the fourth week (0.22 mSv), even though he was performing nearly 10-12 procedures a day, 5 days a week. These values were lower than, or in the same range as, those for DR2 (a junior consultant). This highlights the importance of increasing radiologists' skills and their awareness of the effect of fluoroscopy time. In conclusion, the radiation dose that radiologists received during spine interventional radiology in our setting was below standard dose limits.
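The monthly and annual extrapolation used above can be reproduced in a few lines; the 11 working months follow the text's own scheme (12 months minus 45 days of leave).

```python
def annual_esd(weekly_doses_msv, working_months=11):
    """Extrapolate weekly TLD readings to a monthly and an annual ESD,
    following the simple scheme in the text: sum four weekly values to get
    the monthly dose, then scale by the number of working months."""
    monthly = sum(weekly_doses_msv)
    return monthly, monthly * working_months

dr1_month, dr1_year = annual_esd([1.12, 0.32, 0.83, 0.22])
dr2_month, dr2_year = annual_esd([0.43, 0.74, 0.12, 0.61])
print(round(dr1_year, 2), round(dr2_year, 1))  # → 27.39 20.9 (mSv/year)
```

Both annual figures remain well below the 500 mSv/year extremity limit, consistent with the study's conclusion.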

Keywords: radiation protection, interventional radiology dosimetry, ESD measurements, radiologist radiation exposure

Procedia PDF Downloads 59
1810 Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration

Authors: Sevil Igit, Merve Meric, Sarp Erturk

Abstract:

In this paper, we propose to improve Daisy-descriptor-based face recognition using a novel One-Bit Transform (1BT) based pre-registration approach. The 1BT-based pre-registration procedure is fast and has low computational complexity. It is shown that face recognition accuracy is improved with the proposed approach, which enables highly accurate face recognition using the Daisy descriptor with simple matching, and thereby a low-complexity overall system.
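The abstract gives no implementation details; the sketch below illustrates the general idea of 1BT-based registration only. Each image is binarized by comparison with a local mean (a box-filter stand-in for the classical multi-band-pass 1BT kernel), and the best integer translation is the one minimizing the XOR count between the binary images — which is why the method is fast.

```python
import numpy as np

def one_bit_transform(img, k=3):
    """Simplified 1BT: set a pixel to 1 where it meets or exceeds its k x k
    local mean (the classical transform uses a multi-band-pass kernel; the
    box filter here is an illustrative stand-in)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    local_mean = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            local_mean += padded[dy:dy + h, dx:dx + w]
    local_mean /= k * k
    return (img >= local_mean).astype(np.uint8)

def register_1bt(ref, moving, max_shift=2):
    """Find the integer translation minimizing the XOR count between the
    1BT images -- binary matching needs no multiplications."""
    b_ref, b_mov = one_bit_transform(ref), one_bit_transform(moving)
    best = None
    for sy in range(-max_shift, max_shift + 1):
        for sx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b_mov, sy, axis=0), sx, axis=1)
            cost = int(np.count_nonzero(b_ref ^ shifted))
            if best is None or cost < best[0]:
                best = (cost, sy, sx)
    return best[1], best[2]

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(16, 16))
shifted_face = np.roll(face, (1, -1), axis=(0, 1))  # known displacement
print(register_1bt(face, shifted_face))  # recovers the (-1, 1) counter-shift
```

In the face recognition pipeline this recovered shift would pre-align the probe face before Daisy descriptors are extracted and matched.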

Keywords: face recognition, Daisy descriptor, One-Bit Transform, image registration

Procedia PDF Downloads 368
1809 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which exploit well-assessed physical models and build upon them neural networks that integrate the available data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution.

Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form

q* = argmin_q D(y, ỹ), (1)

where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q given the set of observations y. Moreover, ỹ is a computable approximation of y, which may be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type

q̂* = argmin_q D(y, ỹ) + R(q),

where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1).

Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and yields improved results, especially with a restricted dataset and in the presence of variable sources of noise.
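A minimal numerical analogue of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) can be sketched with a linear toy forward model in place of the PDE and a fixed Tikhonov penalty in place of the learned regularizer. All sizes and values below are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-in for DOT reconstruction: a linear forward model y = A q
# replaces the PDE, and Tikhonov R(q) = lam * ||q||^2 replaces the learned
# regularizer. The problem is underdetermined (20 observations, 50
# unknowns), mimicking the ill-conditioning of the real problem.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
q_true = np.zeros(50)
q_true[[5, 17, 40]] = 1.0                     # sparse "inclusions"
y = A @ q_true + 0.01 * rng.normal(size=20)   # noisy observations

def reconstruct(A, y, lam):
    """argmin_q ||A q - y||^2 + lam * ||q||^2, solved in closed form via
    the regularized normal equations (A^T A + lam I) q = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

q_hat = reconstruct(A, y, lam=0.1)
# The regularized solution fits the data while staying bounded; with
# lam -> 0 the underdetermined problem would be ill-posed.
print(np.linalg.norm(A @ q_hat - y) < np.linalg.norm(y))  # → True
```

In the paper's framework, the fixed penalty is replaced by a learned functional and the closed-form solve by an iterative optimization, but the role of R(q) in the objective is the same.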

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 75
1808 Methodology for Various Sand Cone Testing

Authors: Abel S. Huaynacho, Yoni D. Huaynacho

Abstract:

Improving the ASTM D1556 test procedure plays an important role in developing field testing to obtain higher-quality QA/QC data. The traditional process takes a considerable amount of time for a single test; performing multiple tests involves repetitive tasks and takes a long time to yield better results. Moreover, if the tools that support these tests are not properly managed, the improvement achievable through multiple tests can be lost. This paper presents an optimized process for performing multiple ASTM D1556 tests, moving from the initial standard process to one that uses simpler and improved management tools.
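For reference, the density computation behind an ASTM D1556 sand-cone test can be sketched as follows. This is a textbook-style formulation of the standard's calculation (hole volume from the calibrated sand, wet density from the excavated soil, dry density from the water content); the field numbers are illustrative, not from the paper.

```python
def sand_cone_density(m_sand_used_g, m_cone_sand_g, sand_density_g_cm3,
                      m_wet_soil_g, water_content):
    """In-place density from a sand-cone test (ASTM D1556 style):
    hole volume = (sand poured - sand filling the cone) / calibrated sand
    density; wet density = excavated soil mass / hole volume; dry density
    corrects the wet density for the measured water content."""
    hole_volume = (m_sand_used_g - m_cone_sand_g) / sand_density_g_cm3  # cm^3
    wet_density = m_wet_soil_g / hole_volume                            # g/cm^3
    dry_density = wet_density / (1.0 + water_content)
    return wet_density, dry_density

# Illustrative field numbers (not from the paper):
wet, dry = sand_cone_density(m_sand_used_g=4500.0, m_cone_sand_g=1500.0,
                             sand_density_g_cm3=1.50, m_wet_soil_g=4000.0,
                             water_content=0.12)
print(round(wet, 3), round(dry, 3))  # → 2.0 1.786 (g/cm^3)
```

Automating this arithmetic is exactly the kind of management-tool improvement that pays off when many tests are run in sequence, since each manual calculation is a chance for transcription error.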

Keywords: sand cone test, bulk density, ASTM D1556, QA/QC

Procedia PDF Downloads 139
1807 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. It adopts the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure. The proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one needed to define the earthquake excitation accurately, because of the very high initial stiffness of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM has no stability problems, since the critical time step is larger than the ground-acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when the nonlinear dynamic analysis is performed with a smaller time step.
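The implicit half of the MEIM, Newmark's constant average acceleration method (gamma = 1/2, beta = 1/4), can be sketched for a linear SDOF system as follows. This is a standard textbook formulation, not the authors' implementation, and the demo parameters are illustrative.

```python
import math

def newmark_linear_sdof(m, c, k, p, dt, u0=0.0, v0=0.0):
    """Newmark constant average acceleration method (gamma=1/2, beta=1/4),
    unconditionally stable for linear systems. p is the load history sampled
    at step dt; returns the displacement history."""
    gamma, beta = 0.5, 0.25
    u, v = u0, v0
    a = (p[0] - c * v - k * u) / m            # initial acceleration
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    us = [u]
    for pn in p[1:]:
        # Effective load assembled from the previous state (Chopra-style).
        dp = (pn
              + m * (u / (beta * dt * dt) + v / (beta * dt)
                     + (1 / (2 * beta) - 1) * a)
              + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                     + dt * (gamma / (2 * beta) - 1) * a))
        u_new = dp / k_eff                    # implicit solve (scalar here)
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt * dt) - v / (beta * dt) \
                - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us

# Demo: undamped SDOF with T = 1 s under a suddenly applied load equal to
# k * 1, so the static deflection is 1 and the peak response is near 2.
k = 4.0 * math.pi ** 2
u = newmark_linear_sdof(m=1.0, c=0.0, k=k, p=[k] * 101, dt=0.01)
print(round(u[50], 2))  # near the 2x dynamic amplification at t = T/2
```

In the MEIM this implicit update handles the linear superstructure, while the explicit central difference method advances the nonlinear isolation level; the coupling of the two within each step is what removes the monolithic iteration.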

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 280