Search results for: exogenous variable
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2280

330 Assessing and Managing the Risk of Inland Acid Sulfate Soil Drainage via Column Leach Tests and 1D Modelling: A Case Study from South East Australia

Authors: Nicolaas Unland, John Webb

Abstract:

The acidification and mobilisation of metals during the oxidation of acid sulfate soils exposed during lake bed drying is an increasingly common phenomenon under climate scenarios with reduced rainfall. In order to assess the risk of generating high concentrations of acidity and dissolved metals, chromium suite analyses are fundamental, but sometimes limited in characterising the potential risks these soils pose. This study combines such fundamental test work with incubation tests and 1D modelling to investigate the risks associated with the drying of Third Reedy Lake in South East Australia. Core samples were collected to a depth of 0.5 m below the lake bed, at 19 locations across the lake’s footprint, using a boat platform. Samples were subjected to a chromium suite of analyses, including titratable actual acidity, chromium reducible sulfur and acid neutralising capacity. Concentrations of reduced sulfur up to 0.08 %S and net acidities up to 0.15 %S indicate that acid sulfate soils have formed on the lake bed during permanent inundation over the last century. A further sub-set of samples was packed into 7 columns and subjected to accelerated heating, drying and wetting over a period of 64 days in the laboratory. Results from the incubation trial indicate that while pyrite oxidation proceeded, minimal change to soil pH or the acidity of leachate occurred, suggesting that the internal buffering capacity of lake bed sediments was sufficient to neutralise a large proportion of the acidity produced. A 1D mass balance model was developed to assess potential changes in lake water quality during drying based on the results of the chromium suite and incubation tests. Results from the above test work and modelling suggest that acid sulfate soils pose a moderate to low risk to the Third Reedy Lake system. Further, the risks can be effectively managed during the initial stages of lake drying via flushing with available mildly alkaline water.
The study finds that while test work such as chromium suite analysis is fundamental in characterising acid sulfate soil environments, it can overestimate the risks associated with the soils. Subsequent incubation test work may more accurately characterise such soils and lead to better-informed management strategies.
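
The staged mass-balance reasoning described above can be sketched in a few lines. The sketch below is hypothetical and purely illustrative: the rates and the number of drying steps are invented, not taken from the study; it only shows the structure of a per-step acidity balance in which sediment buffering and alkaline flushing offset pyrite-oxidation acidity.

```python
# Hypothetical sketch of a per-step acidity mass balance during lake drying.
# All parameter values are illustrative, not taken from the study.

def acidity_balance(acid_generated, buffering, flush_alkalinity, n_steps=64):
    """Track net acidity (in arbitrary H+ equivalents) over drying steps.

    acid_generated   -- acidity produced per step by pyrite oxidation
    buffering        -- acidity neutralised per step by sediment carbonates
    flush_alkalinity -- acidity removed per step by flushing with alkaline water
    """
    net_acidity = 0.0
    history = []
    for _ in range(n_steps):
        net_acidity += acid_generated
        # Neutralisation cannot remove more acidity than is present.
        net_acidity -= min(net_acidity, buffering + flush_alkalinity)
        history.append(net_acidity)
    return history

# Buffering plus flushing exceed generation: net acidity never accumulates.
profile = acidity_balance(acid_generated=1.0, buffering=0.8, flush_alkalinity=0.3)
print(profile[-1])
```

When the combined sinks fall short of the generation rate, the same loop shows net acidity accumulating linearly, which is the "moderate risk" regime the abstract describes.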

Keywords: acid sulfate soil, incubation, management, model, risk

Procedia PDF Downloads 341
329 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR), have the potential to overcome the limitations of aerial imagery. To date, few studies have used that data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
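
As a rough illustration of the model-comparison step (not the authors' pipeline or data), the five classifier families named above can be benchmarked with scikit-learn on synthetic stand-ins for the per-crown spectral/LiDAR variables:

```python
# Illustrative sketch only: compare the five model families from the abstract
# on synthetic data standing in for per-crown WorldView-3/LiDAR variables.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 11 classes mimic the 11 species; 16 features mimic the selected variables.
X, y = make_classification(n_samples=600, n_features=16, n_informative=10,
                           n_classes=11, n_clusters_per_class=1, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "CART": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=200),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold overall accuracy
    print(f"{name}: overall accuracy = {acc:.2f}")
```

On real crown objects, the same loop would simply take the selected variables as `X` and the photo-interpreted species labels as `y`, with per-model tuning added.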

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 110
328 Commercial Winding for Superconducting Cables and Magnets

Authors: Glenn Auld Knierim

Abstract:

Automated robotic winding of high-temperature superconductors (HTS) addresses precision, efficiency, and reliability critical to the commercialization of products. Today’s HTS materials are mature and commercially promising but require manufacturing attention. In particular, given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to address the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is operational parameters such as magnetic field alignment affecting design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets focus on simple pancake configurations. HTS motors, generators, MRI/NMR, fusion, and other projects are awaiting robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which achieves the minimized reactance that power utilities require. A full transposition SC cable, in theory, has no transmission length limits for AC and variable transient operation due to no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving manufacturing problems by developing automated manufacturing to produce the first-ever reliable and utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observer systems, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, to shape previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.

Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable

Procedia PDF Downloads 119
327 Soil Improvement through Utilization of Calcifying Bhargavaea cecembensis N1 in an Affordable Whey Culture Medium

Authors: Fatemeh Elmi, Zahra Etemadifar

Abstract:

Improvement of soil mechanical properties is crucial before its use in construction, as the low mechanical strength and unstable structure of soil in many parts of the world can lead to the destruction of engineering infrastructure, resulting in financial and human losses. Although conventional methods, such as chemical injection, are often utilized to enhance soil strength and stiffness, they are generally expensive, require heavy machinery, cause significant environmental effects due to chemical usage, and disrupt urban infrastructure. Moreover, they are not suitable for treating large volumes of soil. Recently, an alternative method to improve various soil properties, including strength, hardness, and permeability, has received much attention: the application of biological methods. One of the most widely used is biocementation, which is based on the microbial precipitation of calcium carbonate crystals using ureolytic bacteria. However, there are still limitations to its large-scale use that need to be resolved before it can be commercialized. These issues have not received enough attention in prior research. One limitation of MICP (microbially induced calcium carbonate precipitation) is that microorganisms cannot operate effectively in harsh and variable environments, unlike the controlled conditions of a laboratory. Another limitation of applying this technique on a large scale is the high cost of producing the substantial amount of bacterial culture and reagents required for soil treatment. Therefore, the purpose of the present study was to investigate soil improvement using the biocementation activity of the poly-extremophile, calcium carbonate crystal-producing bacterial strain Bhargavaea cecembensis N1, in whey as an inexpensive medium. This strain was isolated and molecularly identified from sandy soils in our previous research, and its 16S rRNA gene sequence was deposited in NCBI GenBank under accession number MK420385.
This strain exhibited a high level of urease activity (8.16 U/ml) and produced a large amount of calcium carbonate (4.1 mg/ml). It was able to improve the soil by increasing the compressive strength up to 205 kPa and reducing permeability by 36%, with 20% of the improvement attributable to calcium carbonate production. This was achieved using the strain in a whey culture medium. The strain can therefore be an eco-friendly and economical alternative to conventional methods in soil stabilization and other MICP-related applications.

Keywords: biocementation, Bhargavaea cecembensis, soil improvement, whey culture medium

Procedia PDF Downloads 26
326 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving

Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem

Abstract:

This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between homebuilding and the sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor, 3 m high and 2810 m2 in area, while the envelope area is 1400 m2. The building construction fulfills the natural ventilation requirements. The glass curtain walls are about 50% of the building and the windows area is 300 m2. 6 mm greenish gray tinted tempered glass as the outer board lite, 6 mm safety glass as the inner board lite and 16 mm thick dehydrated air spaces are used in the building. Visible light transmission of 50%, a 0.26 solar factor, a 0.67 shading coefficient and a 1.3 W/m2.K thermal insulation U-value are implemented to realize the performance requirements. Optimum electrical distribution for the lighting system, air conditioning and other electrical loads has been carried out. The power and quantity of each type of lamp in the lighting system and the energy consumption of the lighting system are investigated. The design of the air-conditioning system is based on summer and winter outdoor conditions. Ventilated, air-conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air-conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping. Indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison results show that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green energy consumption is about 1,692,057 kWh.
In other words, the green building's total annual energy cost is reduced from $136,581 to $89,051. This means that the energy saving, and consequently the cost saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green energy certification programs (Leadership in Energy and Environmental Design, “LEED”) as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers optimum energy performance with minimum cost.
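
The quoted ~35% saving can be checked directly from the figures given in the abstract:

```python
# Verify the reported ~35% saving from the annual figures in the abstract.
green_kwh, conventional_kwh = 1_103_221, 1_692_057   # annual energy use
green_cost, conventional_cost = 89_051, 136_581      # annual cost in USD

energy_saving = 1 - green_kwh / conventional_kwh
cost_saving = 1 - green_cost / conventional_cost
print(f"energy saving: {energy_saving:.1%}")  # ~34.8%, consistent with the quoted 35%
print(f"cost saving:   {cost_saving:.1%}")    # ~34.8%
```

The energy and cost ratios agree because the comparison holds the tariff fixed; both round to the 35% figure stated above.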

Keywords: energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability

Procedia PDF Downloads 275
325 The Effect and Durability of Functional Exercises on Balance Evaluation Systems Test (BESTest) in Intellectual Disabilities: A Preliminary Report

Authors: Saeid Bahiraei, Hassan Daneshmandi, Ali Asghar Norasteh

Abstract:

The present study examines the effects of 8 weeks of selected corrective exercise training, at stable and unstable levels, on the postural control of people with intellectual disability (ID). Movement problems and limitations in individuals with ID are highly common and may cause the loss of basic performance and limit the person's independence in daily activities. In the present study, thirty-four young adults with intellectual disabilities were randomly selected and divided into three groups. In order to measure the balance indicators, the BESTest was used. The intervention groups performed the selected exercises for 8 weeks (3 sessions of 45 to 50 minutes per week), while the control group did not perform any kind of exercise. Statistical analysis was performed in SPSS at a significance level of p<0.05. The results showed that the interaction between time and group was significant in all the BESTest tests (p=0.001). Comparing the studied groups across measurement times showed a significant difference for the unstable group in Biomechanical Constraints (p<0.05). Significant differences also existed for the stable and unstable groups in the Stability Limits/Verticality, Postural Responses, and Anticipatory Postural Adjustment variables (except between the follow-up and pre-test stages), and in Stability in Gait and Sensory Orientation across the pre-test, post-test, and follow-up stages (p<0.05). In the comparison between measurement times and the groups under study, the results showed that for Biomechanical Constraints, Anticipatory Postural Adjustments and Postural Responses at the pre-test/follow-up stage, there was a significant difference between the unstable-stable and unstable-control groups (p<0.05); differences were also significant between all groups in the Stability Limits/Verticality, Sensory Orientation, Stability in Gait and Overall Stability Index variables (p<0.05).
The findings showed that the exercise group training at an unstable level improved more than the group training at a stable level. In conclusion, this study presents evidence that selected functional exercises can serve as a comprehensive and effective intervention for improving balance in people with intellectual disabilities, with benefits extending to functional and locomotor activities.

Keywords: intellectual disability, BESTest, rehabilitation, postural control

Procedia PDF Downloads 153
324 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and the fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions in the order of grams per Nm3 have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation processes in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than what would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in MATLAB. The model predicts the droplet size, the droplet internal variable profiles and the mass transfer fluxes as a function of position in the absorber. The MATLAB model is based on a subclass of the method of weighted residuals for boundary value problems known as the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations to describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time.
Below, some preliminary simulation results are given for an aerosol droplet's composition and temperature profiles. As an example, a droplet with an initial size of 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA; the composition of the gas phase and the temperature change with respect to time throughout the absorber.
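
As a hedged, much-simplified sketch of such a simulation (a lumped ODE for droplet growth rather than the full orthogonal-collocation model with internal profiles; all parameter values below are invented for illustration):

```python
# Lumped sketch of condensation-driven droplet growth: dr/dt ~ K*S/r,
# the diffusion-limited growth law, ignoring internal profiles and heat
# effects that the full model resolves. Parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

K_GROWTH = 5e-14  # m^2/s, lumped mass-transfer coefficient (invented)
SUPERSAT = 0.05   # constant supersaturation seen by the droplet (invented)

def radius_rate(t, r):
    # Diffusion-limited condensation: growth slows as the droplet enlarges.
    return K_GROWTH * SUPERSAT / r

r0 = 1.5e-6  # radius, reading the example's "3 microns" as a diameter
sol = solve_ivp(radius_rate, (0.0, 10.0), [r0])
print(f"radius after 10 s: {sol.y[0, -1]*1e6:.2f} microns")  # slight growth
```

The same scaffolding extends naturally: adding the gas-phase composition and droplet temperature as further state variables recovers, in outline, the coupled mass- and heat-transfer system the abstract describes.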

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 239
323 Field Environment Sensing and Modeling for Pears towards Precision Agriculture

Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani

Abstract:

The introduction of sensor technologies into agriculture is a necessary step to realize Precision Agriculture. Although sensing methodologies themselves have been prevailing owing to miniaturization and reduction in the costs of sensors, there are some difficulties in analyzing and understanding the sensing data. Targeting the pear cultivar ’Le Lectier’, which is particular to Niigata, Japan, cultivation environment data have been collected at pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors are used to measure the environment inside the fruit bag used for pre-harvest bagging of pears. In this experiment, three kinds of fruit bags were used for the pre-harvest bagging. After over 100 days of continuous measurement, large volumes of sensing data were collected. Firstly, correlation analysis among the data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the inner-bag environmental sensing measurements; statistical analysis shows that the inner bags differ significantly from each other. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of a Structural Equation Model (SEM). Here, the pear outlook quality is related to the existence of stains, blobs, scratches, and so on, caused by physiological impairment or diseases. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression. By using SEM, a model is constructed to connect independent and dependent variables.
The proposed SEM model relates the measured sensing data and the pear outlook quality determined on the basis of farmer judgement. In particular, it is found that the inner-bag humidity variable has a comparatively strong effect on the pear outlook quality. Therefore, inner-bag humidity sensing might help the farmers to control the pear outlook quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results in this research contribute to spreading Precision Agriculture technologies among the farmers growing ’Le Lectier’.
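
The first analysis step, checking whether one sensor can substitute for another via pairwise correlation, can be sketched as follows with synthetic readings (not the Niigata field data):

```python
# Illustrative redundancy check between sensors via pairwise correlation.
# The readings below are synthetic stand-ins, not the study's field data.
import numpy as np

rng = np.random.default_rng(0)
field_temp = 20 + 5 * np.sin(np.linspace(0, 6 * np.pi, 100))  # daily-like cycle
inner_bag_temp = field_temp + rng.normal(0, 0.5, 100)  # tracks field temp closely
soil_moisture = rng.normal(30, 3, 100)                 # unrelated signal

corr = np.corrcoef([field_temp, inner_bag_temp, soil_moisture])
print(corr.round(2))
# A near-1 field/inner-bag entry suggests one sensor could replace the other,
# the cost-saving conclusion the abstract draws from its correlation analysis.
```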

Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model

Procedia PDF Downloads 284
322 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is broadly known that lease is a flexible means of funding enterprises. Lease reduces the risk related to access and possession of assets, as well as obtainment of funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make concealment of liabilities possible. As a result, the information users get inaccurate and incomplete information and have to resort to an additional assessment of the off-balance sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to the users. Save for certain exclusions, the lessee is obliged to recognize all the lease agreements in its financial statements. The approach was determined by the fact that under the lease agreement, rights and obligations arise by way of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset into its disposal and assumes the obligation to effect the lease-related payments, which meets the recognition criteria defined by the Conceptual Framework for Financial Reporting. The payments are to be entered into the financial statements. The new lease accounting standard secures the supply of quality and comparable information to the financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15: ‘Revenue from Contracts with Customers’.
The standard allows the establishment of detailed practical revenue recognition criteria, such as identification of the performance obligations in the contract, determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15: ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements more specific and consistent than those of the standards previously in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
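
The IFRS 16 recognition idea described above can be illustrated numerically: on commencement the lessee recognises a lease liability (and a corresponding right-of-use asset) at the present value of the remaining lease payments. The figures below are illustrative only, not drawn from the paper:

```python
# Illustrative IFRS 16 measurement: the initial lease liability equals the
# present value of the future lease payments, discounted at the rate
# implicit in the lease (or the lessee's incremental borrowing rate).

def lease_liability(annual_payment, years, discount_rate):
    """Present value of equal year-end lease payments (ordinary annuity)."""
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Invented example: 5-year lease, 10,000 per year, 6% discount rate.
liability = lease_liability(annual_payment=10_000, years=5, discount_rate=0.06)
print(f"initial lease liability: {liability:,.0f}")  # ~42,124
```

This is precisely the amount that, under IAS 17 operating-lease treatment, could remain off the balance sheet, which is the concealment problem the abstract highlights.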

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 293
321 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa

Authors: Ashenafi Beyene Fanta, Daniel Makina

Abstract:

Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of the link between the two is still incomplete. This is caused by, among other things, wrong econometric specifications, the use of weak proxies of financial development, and the inability to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development. Early studies found that economic growth drives financial development in South Africa, and recent studies have confirmed this using different econometric models. However, the monetary aggregate (i.e. M2) used in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011 by employing a generalized method of moments (GMM) technique that is capable of addressing endogeneity, simultaneity and omitted variable bias problems. Unlike previous cross-country and country case studies that have also used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, and no similar effect is observed for the bank and non-bank financial intermediaries and the stock market. Our findings show that examination of individual elements of the financial system is important in understanding the unique effect of each on growth.
The observation that bond markets, rather than private credit and stock market development, promote economic growth in South Africa raises an intriguing question as to what unique roles bond markets play that the intermediaries and equity markets are unable to play. Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that understanding the channels through which bond markets contribute to growth would be a fertile ground for future research.
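
A toy numerical illustration (invented data, not the South African series) of why the endogeneity correction matters: ordinary least squares is biased when the regressor correlates with the error, while the instrumental-variable estimate at the core of GMM is not:

```python
# Toy illustration of the endogeneity problem GMM addresses. "Finance" x is
# correlated with the error e, so OLS over-attributes growth to finance; an
# instrument z (e.g. a lagged value) restores a consistent estimate.
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 5000, 0.5
z = rng.normal(size=n)        # instrument: correlated with x, not with e
e = rng.normal(size=n)        # structural error
x = z + 0.8 * e               # endogenous regressor ("finance")
y = beta_true * x + e         # outcome ("growth")

beta_ols = (x @ y) / (x @ x)  # biased upward by the x-e correlation
beta_iv = (z @ y) / (z @ x)   # just-identified IV: consistent
print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}, true: {beta_true}")
```

Full GMM generalises this idea to many moment conditions (here, lagged instruments in a dynamic panel), weighting them optimally, but the bias-removal logic is the same.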

Keywords: bond market, finance, financial sector, growth

Procedia PDF Downloads 391
320 Behavior of GRS Abutment Facing under Variable Cycles of Lateral Excitation through Physical Model Tests

Authors: Ashutosh Verma, Satyendra Mittal

Abstract:

Numerous geosynthetic reinforced soil (GRS) abutment failures over the years have been attributed to the loss of strength at the facing-reinforcement interface due to seasonal thermal expansion/contraction of the bridge deck. This causes excessive settlement below the bridge seat, creating bridge bumps along the approach road that reduce the design life of any abutment. Designers choosing the type of facing have a broad range of facing configurations available. Generally speaking, these configurations can be divided into three groups: modular (panels/blocks), continuous, and full height rigid (FHR). The purpose of the current study is to use 1g physical model tests under serviceable cyclic lateral displacements to experimentally investigate the behaviour of these three facing classifications. To simulate field behaviour, a field-instrumented GRS abutment prototype was modeled as a scaled-down 1g physical model (scale factor N = 5) with adjustable facing arrangements to represent these three facing classifications. For cyclic lateral displacement (d/H) of the facing top at a loading rate of 1 mm/min, the peak earth pressure coefficient (K) on the facing and the vertical settlement of the footing (s/B) at 25, 50, 75 and 100 cycles have been measured. For a constant footing offset of x/H = 0.1, three forms of cyclic displacement have been applied to simulate the active condition (CA), passive condition (CP), and active-passive condition (CAP). The findings showed that when reinforcements are integrated into the wall along with gravel gabions, i.e. the FHR design, a rather substantial earth pressure develops over the facing. Despite this, the FHR facing's continuous nature works in conjunction with the reinforcements' membrane resilience to reduce footing settlement.
On the other hand, in the modular facing the pressure over the wall is released upon lateral excitation by the relative displacement between the panels, which reduces the connection strength at the interface and leads to greater settlements below the footing. On the contrary, continuous facings do not exhibit relative displacement along their depth; rather, they fail through rotation about the base, which extends the zone of active failure in the backfill, leading to large depressions in the backfill region around the bridge seat. Overall, the FHR facing shows relatively stable responses under lateral cyclic excitations as compared to the modular or continuous types of abutment facing.

Keywords: GRS abutments, 1g physical model, full height rigid, cyclic lateral displacement

Procedia PDF Downloads 54
319 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the next support.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, making it possible to observe the birth, evolution, and death of clusters. The proposed approach facilitates the identification of significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
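The complexity argument above can be illustrated with a small back-of-the-envelope sketch in Python; the operation and memory counts are the abstract's asymptotic quantities, not measurements of an actual t-SNE implementation:

```python
def tsne_chunk_costs(n: int, k: int):
    """Rough operation/memory counts for embedding n points:
    jointly (quadratic in n) vs. as k successive chunks that
    each reuse the previous embedding as a fixed support."""
    joint_ops = n * n                 # one quadratic affinity pass over all points
    chunk = n // k                    # points per subset (assume k divides n)
    chunked_ops = k * chunk * chunk   # k passes, each quadratic in n/k -> n^2/k total
    joint_mem = n * n                 # full pairwise-affinity matrix
    chunked_mem = 2 * chunk * chunk   # current chunk's affinities + support embedding
    return joint_ops, chunked_ops, joint_mem, chunked_mem
```

For n = 10,000 points split into k = 10 subsets, the quadratic work drops by a factor of k and the pairwise memory by roughly k²/2, matching the O(n²/k) and 2(n/k)² figures quoted in the abstract.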

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 119
318 The Effect of Applying the Electronic Supply System on the Performance of the Supply Chain in Health Organizations

Authors: Sameh S. Namnqani, Yaqoob Y. Abobakar, Ahmed M. Alsewehri, Khaled M. AlQethami

Abstract:

The main objective of this research is to determine the impact of the application of an electronic supply system on the performance of the supply department of health organizations. To reach this goal, the study adopted independent variables to measure the dependent variable (performance of the supply department), namely: integration with suppliers, integration with intermediaries and distributors, and knowledge of supply size, inventory, and demand. The study used the descriptive method, aided by a questionnaire distributed to a sample of workers in the Supply Chain Management Department of King Abdullah Medical City. After the statistical analysis, the results showed the following. The 70 sample members strongly agree with the (electronic integration with suppliers) axis, with a p-value of 0.001, especially with regard to the following: opening formal and informal communication channels between management and suppliers (Mean 4.59) and exchanging information with suppliers with transparency and clarity (Mean 4.50). The sample members also agree on the axis of (electronic integration with brokers and distributors), with a p-value of 0.001, represented in the following elements: exchange of information between management, brokers, and distributors with transparency and clarity (Mean 4.18), and finding a close cooperative relationship between management, brokers, and distributors (Mean 4.13). The results also indicated that the respondents agreed to some extent on the axis of (knowledge of the size of supply, stock, and demand), with a p-value of 0.001.
It also indicated that the respondents strongly agree that there is a relationship between electronic procurement and the performance of the procurement department in health organizations, with a p-value of 0.001, represented in the following: transparency and clarity in dealing with suppliers and intermediaries to prevent fraud and manipulation (Mean 4.50) and reducing the costs of supplying the needs of the health organization (Mean 4.50). From these results, the study offered several recommendations, the most important of which are: that health organizations work to increase the level of information sharing between them and their suppliers in order to implement electronic procurement in the supply management of health organizations; that attention be paid to electronic data interchange methods and to modern programs that enable supply management to exchange information with brokers and distributors so as to know the volume of supply, inventory, and demand; and that scientific supply methods for storage be applied. Taking advantage of information technology, for example, electronic data and document interchange techniques, can help in contacting suppliers, brokers, and distributors and in knowing the volume of supply, inventory, and demand, which contributes to improving the performance of the supply department in health organizations.

Keywords: healthcare supply chain, performance, electronic system, ERP

Procedia PDF Downloads 117
317 Human Coronary Sinus Venous System as a Target for Clinical Procedures

Authors: Wiesława Klimek-Piotrowska, Mateusz K. Hołda, Mateusz Koziej, Katarzyna Piątek, Jakub Hołda

Abstract:

Introduction: The coronary sinus venous system (CSVS), which has always been overshadowed by the coronary arterial tree, has recently begun to attract more attention. Since it is a target for clinicians, knowledge of its anatomy is essential. Cardiac resynchronization therapy, catheter ablation of cardiac arrhythmias, defibrillation, perfusion therapy, mitral valve annuloplasty, targeted drug delivery, and retrograde cardioplegia administration are commonly used therapeutic methods involving the CSVS. The great variability in the course of the coronary veins and their tributaries makes the diagnostic and therapeutic processes difficult. Our aim was to investigate the detailed anatomy of the most commonly used CSVS structures in clinical practice: the coronary sinus with its ostium, the great cardiac vein, the posterior vein of the left ventricle, the middle cardiac vein, and the oblique vein of the left atrium. Methodology: This is a prospective study of 70 randomly selected autopsied hearts dissected from adult humans (Caucasian) aged 50.1±17.6 years (24.3% females) with BMI = 27.6±6.7 kg/m². The morphology of the CSVS was assessed, and precise measurements were performed. Results: The coronary sinus (CS) with its ostium was present in all hearts. The mean CS ostium diameter was 9.9±2.5 mm. The ostium was covered by its valve in 87.1% of cases, with a mean valve height of 5.1±3.1 mm. The mean percentage coverage of the CS ostium by the valve was 56%. The Vieussens valve was present in 71.4% of hearts and was unicuspid in 70%, bicuspid in 26%, and tricuspid in 4%. The great cardiac vein was present in all cases. The oblique vein of the left atrium was observed in 84.3% of hearts, with a mean length of 20.2±9.3 mm and a mean ostium diameter of 1.4±0.9 mm.
The average length of the CS was 31.1±9.5 mm (from the CS ostium to the Vieussens valve) or 28.9±10.1 mm (from the CS ostium to the ostium of the oblique vein of the left atrium), and both were correlated with heart weight (r=0.47, p=0.00 and r=0.38, p=0.006, respectively). In 90.5% of cases the ostium of the oblique vein of the left atrium was located proximally to the Vieussens valve; in the remaining cases it was located distally. The middle cardiac vein was present in all hearts, and its valve was noticed in more than half of the cases (52.9%). The posterior vein of the left ventricle was observed in 91.4% of cases. Conclusions: The CSVS is vastly variable, and none of the basic heart parameters is a good predictor of its morphology. The Vieussens valve could be a significant obstacle during CS cannulation; caution should be exercised in this area to avoid coronary sinus perforation. Because the oblique vein of the left atrium is more frequently present than the Vieussens valve, the vein's orifice is more useful in determining the CS length.

Keywords: cardiac resynchronization therapy, coronary sinus, Thebesian valve, Vieussens valve

Procedia PDF Downloads 271
316 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is a prominent cause of cancer and mortality among women. This study presents a statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples. Antibodies were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were chi-squared and correlation tests at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: A chi-squared test of homogeneity of the distribution showed that Luminal B was the most prevalent molecular subtype of breast cancer at our institute. The worst prognostic indicator for breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells; our analysis at p < .01 observed significant dependence. Molecular subtype was found not to depend on age; similarly, age is an independent variable with respect to Ki-67 expression. A chi-squared test performed on the HER2 statuses of patients showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the value of Ki-67 depends upon HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, which suggests that progesterone receptor (PR) proteins are over-expressed when there is an elevation in the expression of Ki-67 protein.
Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, among the cohort of 257 patients, no patient diagnosed as Luminal A showed a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p < .01). HER2 is an important prognostic factor in breast cancer: the chi-squared test for HER2 and Ki-67 shows that the expression of Ki-67 depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer.
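The chi-squared dependence tests reported above can be illustrated with a minimal Pearson chi-squared computation on a 2x2 contingency table; the counts below are hypothetical and are not the study's data:

```python
def chi2_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (no continuity correction).
    Larger values indicate stronger dependence between the two
    categorical variables (e.g. HER2 status vs. Ki-67 category)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

The resulting statistic would be compared against the chi-squared distribution with one degree of freedom at the study's p < .01 threshold.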

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 103
315 Staying When Everybody Else Is Leaving: Coping with High Out-Migration in Rural Areas of Serbia

Authors: Anne Allmrodt

Abstract:

Regions of South-East Europe have been characterised by high out-migration for decades. The reasons for leaving range from the hope of a better work situation to a better health care system and beyond. In Serbia, this high out-migration hits the rural areas in particular, so that the population number repeatedly shows negative growth. This negative population growth has the potential to create different challenges for those who stay in rural areas. So how are they coping with the statistically proven high out-migration? With this in mind, the study investigates people's individual awareness of the social phenomenon of high out-migration and their daily life strategies in rural areas. Furthermore, the study seeks to identify people's resilience skills in that context: is the condition of high out-migration conducive to resilience? The methodology combines a quantitative and a qualitative approach (mixed methods). For the quantitative part, a standardised questionnaire was developed, including a multiple-choice section and a choice experiment. The questionnaire was handed out to people living in rural areas of Serbia only (n = 100). The sheet included questions about people's awareness of high out-migration, their own daily life strategies or challenges, and their social network situation (data about the social network were necessary here, since the network is supposed to be an influencing variable for resilience). Furthermore, test persons were asked to make different choices about coping with high out-migration in a self-designed choice experiment. Additionally, the study included qualitative interviews with citizens from rural areas of Serbia. The interview topics focused on their awareness of high out-migration, their daily life strategies and challenges, and their social network situation. The results have shown the following major findings. The awareness of high out-migration is not the same across all test persons.
Some declare it as something positive for their own life, others as negative or as not affecting them at all. The way of coping generally depended, perhaps unsurprisingly, on the person's social network. However, and this might be the most important finding, not everybody with a certain number of contacts had better coping strategies and was therefore more resilient. Here the results show that especially people with high affiliation and proximity inside their network were able to cope better and showed higher resilience skills. The study takes one step forward in terms of knowledge about societal resilience and the coping strategies of societies in rural areas. It has shown part of the other side of today's migration coin and gives a hint towards more sustainable rural development and community empowerment.

Keywords: coping, out-migration, resilience, rural development, social networks, south-east Europe

Procedia PDF Downloads 97
314 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former demands too significant an effort from the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations.
The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
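Peridot's meta level is a logic-programming language, but the core idea that optimizations are ordinary metaprograms can be sketched in a language-neutral way. The toy Python rewriter below (not Peridot syntax, and far simpler than unification-based matching) applies user-defined simplification rules to an expression tree:

```python
# Toy rewrite-driven optimizer: each "optimization" is simply a rule
# mapping an expression pattern to a simpler, equivalent expression.
# Expressions are nested tuples like ("add", ("mul", "x", 1), 0).
def optimize(expr):
    if isinstance(expr, tuple):
        op, *args = expr
        args = [optimize(a) for a in args]   # optimize subtrees first
        expr = (op, *args)
        if op == "mul" and args[1] == 1:     # rule: x * 1  ->  x
            return args[0]
        if op == "add" and args[1] == 0:     # rule: x + 0  ->  x
            return args[0]
    return expr                              # atoms pass through unchanged
```

A library author who introduces a new abstraction would, in this style, ship the rewrite rules for it alongside the library, rather than waiting for the compiler developers to add them.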

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 62
313 Human Resource Management Functions; Employee Performance; Professional Health Workers in Public District Hospitals

Authors: Benjamin Mugisha Bugingo

Abstract:

Healthcare staff have been considered a significant pillar of the health care system. However, the contest of human resources for health, in terms of the turnover of health workers in Uganda, has become more distinct in recent years. The objective of the paper, therefore, was to investigate the influence of human resource management functions on the employee performance of professional health workers in public district hospitals in Kampala. The study objectives were: to establish the effect of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. The study was grounded in social exchange theory and equity theory. It adopted a descriptive, cross-sectional research design with a mixed-methods approach. With a population of 402 individuals, the study considered a sample of 252 respondents, including doctors, nurses, midwives, pharmacists, and dentists from 3 district hospitals. The study instruments comprised a questionnaire as the quantitative data collection tool, and interviews and focus group discussions as qualitative data gathering tools. To analyze the quantitative data, descriptive statistics were used to assess the perceived status of human resource management functions and the magnitude of intentions to stay, and inferential statistics were used to show the effect of the predictors on the outcome variable by fitting a multiple linear regression. Qualitative data were analyzed in themes and reported in narrative and verbatim quotes, and were used to complement the descriptive findings for a better understanding of the magnitude of the study variables.
The findings of this study showed a significant and positive effect of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. This study is expected to be a major contributor to the improvement of the health system in the country and in other similar settings, as it provides insights for strategic orientation in the area of human resources for health, especially for enhanced employee performance in relation to the integrated human resource management approach.

Keywords: human resource functions, employee performance, employee wellness, professional workers

Procedia PDF Downloads 67
312 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; they then show a higher fragility than the other curves at larger PGA levels. For the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
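Fragility curves of the kind described are commonly modeled as lognormal cumulative distribution functions of PGA; the sketch below illustrates that standard form (the median capacity and dispersion values in the example are placeholders, not the study's fitted parameters):

```python
import math

def fragility(pga: float, median: float, beta: float) -> float:
    """Lognormal fragility curve: probability of reaching or exceeding
    a damage state given peak ground acceleration (PGA).
    median -- PGA at which the probability is 0.5 (median capacity)
    beta   -- lognormal standard deviation (dispersion)."""
    z = math.log(pga / median) / beta
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Evaluating such a curve at each PGA level of the input set (0.1 g to 1.0 g in 0.05 g steps, as in the study) yields the damage probabilities that the component and system fragility curves plot.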

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 414
311 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improved performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other relating to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, and each of the risks is scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the risk management committee, each scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies of firm performance. The control variables used in the model include the age, growth rate, and size of the firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate a negative association between the risk index and returns.
These results not only validate the framework used to develop the risk index but also provide a yardstick for PSUs to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even in terms of mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and mandate adherence to the normative frameworks put forth in the study, PSUs may have more effective and efficient decision-making, lower risks, and hassle-free management, all ultimately leading to better ROA and ROE.
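Index construction of the kind described, component scores on a 1-to-5 scale aggregated into a single index, can be sketched as follows; the equal weighting and the normalization to [0, 1] are illustrative assumptions, not the authors' exact scheme:

```python
def composite_index(scores, max_score=5):
    """Composite index as the mean of component scores (each 1..max_score),
    normalized to [0, 1]. `scores` maps component name -> integer score,
    e.g. {"solvency risk": 4, "liquidity risk": 2, ...}."""
    for name, s in scores.items():
        if not 1 <= s <= max_score:
            raise ValueError(f"{name} score out of range: {s}")
    return sum(scores.values()) / (max_score * len(scores))
```

The same aggregation would apply to both frameworks: nine components for the risk index and eleven for the governance index, each scored on the 1-to-5 scale before entering the Diff-GMM regression as a single variable.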

Keywords: PSU, risk governance, Diff-GMM, firm performance, risk index

Procedia PDF Downloads 135
310 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements that modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items that can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements of the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which on their own, without context, have semantic content that can be associated with negative emotion, but which in particular cases may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research revealed in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements into grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 204
309 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the southern part of the country, the Baragan area, many agricultural lands are confronted with the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium, as needed for the water mass balance equation. Through a proper structure of the initial and boundary conditions, the flow in drainage or injection well systems may be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the reality of the pollutant emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on the two or three rows of wells, with negative rates, so as to assure the pollutant transport, modeled with the variable flow in groups of two adjacent nodes. In order to obtain a water table level in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value required at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when there are positive values of pollutant. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The spatial coordinates and the drainage flows through the wells represent the parameters modified during the optimization process.
Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with various levels of altitude and different values of pollution. The input data were correlated with in-situ measurements, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.
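The non-steady groundwater flow governed by the Boussinesq equation is often illustrated with its linearized 1D form, dh/dt = α ∂²h/∂x². The study itself uses the finite element method; the sketch below, with placeholder parameters, only illustrates fixed-head (Dirichlet) boundary conditions at the margins using a simple explicit finite-difference step:

```python
def groundwater_step(h, alpha, dx, dt, h_left, h_right):
    """One explicit finite-difference step of the linearized 1D
    Boussinesq equation  dh/dt = alpha * d2h/dx2,  with fixed heads
    imposed at both margins of the analyzed area."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    new = h[:]
    for i in range(1, len(h) - 1):
        new[i] = h[i] + r * (h[i-1] - 2 * h[i] + h[i+1])
    new[0], new[-1] = h_left, h_right   # boundary (margin) water levels
    return new
```

Stepping this repeatedly from an arbitrary initial water table drives the head toward the steady-state profile set by the two boundary levels, the same role the emissary levels play in the study's model.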

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 126
308 A Study of the Depression Status of Asian American Adolescents

Authors: Selina Lin, Justin M Fan, Vincent Zhang, Cindy Chen, Daniel Lam, Jason Yan, Ning Zhang

Abstract:

Depression is one of the most common mental disorders in the United States, and past studies have shown a concerning increase in the rates of depression in youth populations over time. Furthermore, depression is an especially important issue for Asian Americans because of the anti-Asian violence taking place during the COVID-19 pandemic. While Asian American adolescents are reluctant to seek help for mental health issues, past research has found a prevalence of depressive symptoms among them that has yet to be fully investigated. Studies have been conducted to understand and observe the impacts of the multifarious factors influencing the mental well-being of Asian American adolescents; however, they have generally been limited to qualitative investigation, and very few have attempted to quantitatively evaluate the relationship between depression levels and a comprehensive list of factors at the same time. To better quantify these relationships, this project investigated the prevalence of depression in Asian American teenagers, mainly from the Greater Philadelphia Region, aged 12 to 19, and, with an anonymous survey, asked participants 48 multiple-choice questions pertaining to demographic information, daily behaviors, school life, family life, depression levels (quantified by the PHQ-9 assessment), and school and family support against depression. Each multiple-choice question was treated as a factor and variable in statistical and dominance analyses to determine the most influential factors on the depression levels of Asian American adolescents. The results were validated via bootstrap analysis and t-tests. While certain influential factors identified in this survey are consistent with the literature, such as the parent-child relationship and peer pressure, several dominant factors were relatively overlooked in the past.
These factors include the parents’ relationship with each other, satisfaction with body image, sexual identity, support from the family and support from the school. More than 25% of participants desired more support from their families and schools in handling depression issues. This study implies that Asian American parents and adolescents would benefit from programs on parents’ relationships with each other, parent-child communication, mental health, and sexual identity. A culturally inclusive school environment and more accessible mental health services would help Asian American adolescents combat depression. This survey-based study paved the way for further investigation of effective approaches for helping Asian American adolescents against depression.
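As a rough illustration of the bootstrap validation step described above, the sketch below computes a percentile-bootstrap confidence interval for a group mean difference in PHQ-9 totals. All data, group labels and effect sizes here are synthetic placeholders, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_mean_diff(a, b, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for mean(a) - mean(b)."""
    diffs = np.array([
        rng.choice(a, size=len(a), replace=True).mean()
        - rng.choice(b, size=len(b), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

# Synthetic PHQ-9 totals (range 0-27) for two illustrative groups
low_support = np.clip(rng.normal(14, 5, 80), 0, 27)   # hypothetical low family support
high_support = np.clip(rng.normal(7, 5, 80), 0, 27)   # hypothetical high family support

lo, hi = bootstrap_ci_mean_diff(low_support, high_support)
```

If the interval excludes zero, the group difference survives resampling; the study combined such checks with t-tests and dominance analysis to rank factors.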

Keywords: Asian American adolescents, depression, dominance analysis, t-test, bootstrap analysis

Procedia PDF Downloads 110
307 Numerical Modeling and Experimental Analysis of a Pallet Isolation Device to Protect Selective Type Industrial Storage Racks

Authors: Marcelo Sanhueza Cartes, Nelson Maureira Carsalade

Abstract:

This research evaluates the effectiveness of a pallet isolation device for the protection of selective-type industrial storage racks. The device works only in the longitudinal direction of the aisle, and it is made up of a platform installed on the rack beams. At both ends, the platform is connected to the rack structure by means of a spring-damper system working in parallel. A system of wheels is arranged between the isolation platform and the rack beams in order to reduce friction, decouple the movement and improve the effectiveness of the device. The latter is evaluated by the reduction of the maximum dynamic responses of basal shear load and story drift in relation to those corresponding to the same rack with the traditional construction system. In the first stage, numerical simulations of industrial storage racks were carried out with and without the pallet isolation device. The numerical results allowed us to identify the archetypes in which it would be more appropriate to carry out experimental tests, thus limiting the number of trials. In the second stage, experimental tests were carried out on a shaking table on a select group of full-scale racks with and without the proposed device. The movement simulated by the shaking table was based on the Mw 8.8 earthquake of February 27, 2010, in Chile, registered at the San Pedro de la Paz station. The peak ground acceleration (PGA) was scaled in the frequency domain to fit its response spectrum with the design spectrum of NCh433. The experimental setup contemplates the installation of sensors to measure relative displacement and absolute acceleration. The movement of the shaking table with respect to the ground, the inter-story drift of the rack and the pallets with respect to the rack structure were recorded. 
All of the above were measured redundantly in order to corroborate measurements and adequately capture low- and high-frequency vibrations, for which displacement sensors and accelerometers are respectively more reliable. The numerical and experimental results allowed us to identify that the pallet isolation period is the variable with the greatest influence on the dynamic responses considered. It was also possible to identify that the proposed device significantly reduces both the basal shear load and the maximum inter-story drift, by up to one order of magnitude.

Keywords: pallet isolation system, industrial storage racks, basal shear load, inter-story drift

Procedia PDF Downloads 54
306 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study

Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker

Abstract:

Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to index surgery. This study aims to evaluate the time-interval between rectal cancer resection with primary anastomosis creation and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients that underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: Per 2 days, before versus on or after postoperative day 5 and during primary versus readmission. Results: In total 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rate. Thereafter the distribution of adverse outcomes was more spread over the 30-day postoperative period for patients with a defunctioning stoma. Median time-interval from primary resection to reoperation for defunctioning stoma patients was 7 days (IQR 4-14) versus 5 days (IQR 3-13 days) for no-defunctioning stoma patients. 
The mortality rates after primary resection and reoperation were comparable (for defunctioning vs. no-defunctioning stoma, 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107, respectively). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e. mortality). Perhaps the combination of a physiological dip in the cellular immune response and release of cytokines following surgery, together with a release of endotoxins caused by the bacteremia originating from the leakage, leads to a more profound sepsis. Another explanation might be that early leaks are not contained within the pelvis, requiring early reoperation. Leakage with or without a defunctioning stoma resulted in a different type of reintervention and a different time-interval between surgery and reoperation.

Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation

Procedia PDF Downloads 112
305 Analysis of Determinants of Growth of Small and Medium Enterprises in Kwara State, Nigeria

Authors: Hussaini Tunde Subairu

Abstract:

The Small and Medium Enterprises (SME) sector serves as a catalyst for employment generation, national growth, poverty reduction and economic development in both developing and developed countries. However, in Nigeria, despite a plethora of government policies and stimulus schemes directed at SMEs, the sector is still characterized by a high rate of failure and discontinuity. This study therefore investigated owner/manager profiles, firm characteristics and external factors as possible determinants of SME growth, using selected SMEs in Kwara State. Primary data were sourced from 200 SME respondents registered with the National Association of Small and Medium Enterprises (NASMES) in the Kwara State Central Senatorial District. Multiple Regression Analysis (MRA) was used to analyze the relationship between the dependent and independent variables, and pairwise correlation was employed to examine the relationships among the independent variables. Analysis of Variance (ANOVA) was employed to test the overall significance of the model; it put the F-statistic at 420.45 with a significant p-value of 0.000. The values of R² and adjusted R², 0.9643 and 0.9620 respectively, suggest that 96 percent of the variation in employment growth was explained by the explanatory variables. 
Significant predictors included the level of technical and managerial education (t = 24.14, p = 0.001), length of the manager’s/owner’s experience in a similar trade (t = 21.37, p = 0.001), age of the manager/owner (t = 42.98, p = 0.001), firm age (t = 25.91, p = 0.001), number of firms in a cluster (t = 7.20, p = 0.001), access to formal finance (t = 5.56, p = 0.001), firm technology innovation (t = 25.32, p = 0.01), institutional support (t = 18.89, p = 0.01), globalization (t = 9.78, p = 0.01), and infrastructure (t = 10.75, p = 0.01). Initial size had a t-value of -1.71 and a p-value of 0.090, which is consistent with Gibrat’s Law. The study concluded that owner/manager profiles, firm-specific characteristics and external factors substantially influence the employment growth of SMEs in the study area. Policy should therefore enhance the human capital development of SME owners/managers, strengthen fiscal policy through the tariff regime to minimize the effects of globalization, provide stronger institutional support for SME growth, and significantly upgrade key infrastructure such as roads, rail, telecommunications, water and power.
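The regression diagnostics quoted above (R², adjusted R², F-statistic) can be reproduced on synthetic data with ordinary least squares; the predictors and coefficients below are invented placeholders, not the survey’s variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3                                  # 200 respondents, 3 illustrative predictors
X = rng.normal(size=(n, k))                    # e.g. owner experience, firm age, finance access
beta = np.array([1.5, -0.8, 0.6])              # hypothetical true coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)   # synthetic "employment growth"

Xd = np.column_stack([np.ones(n), X])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # OLS fit

resid = y - Xd @ coef
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
f_stat = (r2 / k) / ((1.0 - r2) / (n - k - 1))  # overall model F-test, as in the ANOVA
```

A large F-statistic with a near-zero p-value, as reported in the abstract, indicates that the predictors jointly explain a substantial share of the variation in the outcome.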

Keywords: external factors, firm specific characteristics, owners / manager profile, small and medium enterprises

Procedia PDF Downloads 218
304 One Year Follow up of Head and Neck Paragangliomas: A Single Center Experience

Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa

Abstract:

Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over recent years. Surgery was the standard approach for these patients, but new imaging techniques and radiation therapy have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: Characterization of our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head and neck and cranial base followed in our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, specific imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 in our institution. The mean age of our cohort was 53 (± 16.1) years. The primary locations were the jugulotympanic region (n=2, 50%) and the carotid body (n=2, 50%); only one of the carotid body tumors presented pulmonary metastasis at the time of diagnosis. None of the lesions were catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable; decreased visual acuity and headache were symptoms present in all patients. In one of the cases, loss of all teeth of the lower jaw was the presenting symptomatology. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions. 
Among the post-therapeutic sequelae, persistent tinnitus and disabling pain stand out, with one patient presenting glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, clinical management of these tumors remains challenging owing to heterogeneity in clinical presentation, the existence of multiple treatment alternatives, and their potential to cause serious detriment to critical functions and consequently interfere with patients’ quality of life.

Keywords: clinical outcomes, head and neck, management, paragangliomas

Procedia PDF Downloads 123
303 Reading Informational or Fictional Texts to Students: Choices and Perceptions of Preschool and Primary Grade Teachers

Authors: Anne-Marie Dionne

Abstract:

Teachers reading aloud to students is a well-established practice in preschool and primary classrooms. Many benefits of this pedagogical activity have been highlighted in multiple studies. However, it has also been shown that teachers are not keen on choosing informational texts for their read-alouds; their selections are mainly fictional stories, mostly written in a single narrative story-like structure. Considering that students soon have to read complex informational texts by themselves as they move from one grade to another, there is cause for concern: those who do not benefit from early exposure to informational texts may lack knowledge of the informational text structures that they will encounter regularly in their reading. Exposing students to informational texts could be done in different ways in classrooms. However, since read-alouds appear to be such a common and efficient practice in preschool and primary grades, it is important to examine more deeply the factors teachers take into account when selecting readings for this important teaching activity. Moreover, it seems critical to know why teachers are not inclined to choose informational texts more often when reading aloud to their pupils. A group of 22 preschool or primary grade teachers participated in this study. The data collection was done by a survey and an individual semi-structured interview. The survey was conducted in order to get quantitative data on the read-aloud practices of teachers. As for the interviews, they were organized around three categories of questions (exploratory, analytical, opinion) regarding the process of selecting the texts for the read-aloud sessions. A statistical analysis was conducted on the data obtained by the survey. 
As for the interviews, they were subjected to a content analysis aiming to classify the information collected into predetermined categories, such as the reasons given for favoring fictional texts over informational texts, the reasons given for avoiding informational texts in read-alouds, the perceived challenges that informational texts could bring when read aloud to students, and the perceived advantages they would present if chosen more often for this activity. Results show the variable factors guiding teachers when they select texts to read aloud. For example, some choose solely fictional texts out of a conviction that these are more interesting for their students. They also perceive informational texts as poor choices because they are not suitable for pleasure reading. In that regard, the results point to some interesting elements. Many teachers perceive that reading fictional and informational texts aloud serves different goals: fictional texts are read for pleasure and informational texts mostly for academic purposes. These results underline the urgency for teachers to become aware of the numerous benefits that reading aloud each type of text, especially informational texts, could bring to their students. The possible consequences of teachers’ perceptions will be discussed further in our presentation.

Keywords: fictional texts, informational texts, preschool or primary grade teachers, reading aloud

Procedia PDF Downloads 122
302 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading

Authors: Binger Lu

Abstract:

It remains unclear whether second language (L2) speakers could use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies showed that L1 prediction is sensitive to the global validity of predictive cues. The current study aims to explore whether and to what extent L2 learners can dynamically and strategically adjust their prediction in accord with the global validity of predictive cues in L2 discourse comprehension as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) and unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions, the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. 
C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction of Validity with speaker group (Predictability × Background × Validity, χ²(3) = 1.229, p = .746; Predictability × Validity, χ²(3) = 2.520, p = .472; Background × Validity, χ²(3) = 1.281, p = .734). The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in some conditions of L1 Chinese and a null effect in L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues than L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals’ predictive processing could be partly transferred from their L1, as prior research has shown that discourse information plays a more significant role in L1 Chinese processing.
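The analysis itself was done with linear mixed-effects models in R; as a simplified illustration only, the sketch below simulates per-group reading-time slowdowns on low-predictable continuations, with group effects loosely echoing the reported betas. All data are synthetic and the base reading time and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_rt(effect_ms, n_subj=128, n_items=20, base=400.0, sd=50.0):
    """Synthetic reading times (ms) on the critical region: low-predictable
    items are slower than high-predictable ones by effect_ms on average."""
    high = base + rng.normal(0.0, sd, (n_subj, n_items))
    low = base + effect_ms + rng.normal(0.0, sd, (n_subj, n_items))
    return high, low

# Hypothetical group effects echoing the magnitudes of the reported betas
group_effects = {"L1 Chinese": 93.5, "C-E bilinguals": 35.6, "L1 English": 2.8}

estimated = {}
for name, eff in group_effects.items():
    high, low = simulate_rt(eff)
    # cell-mean difference; the real analysis fits mixed models with
    # random effects for subjects and items instead
    estimated[name] = low.mean() - high.mean()
```

With enough subjects and items, the estimated slowdowns recover the simulated ordering: a large predictability effect for L1 Chinese, a moderate one for the bilinguals, and essentially none for L1 English.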

Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading

Procedia PDF Downloads 112
301 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The AE technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation approach was set up in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is useful for constructing a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. 
The methodology presented in this work was useful for refining the damage threshold for the new-generation materials, and the damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or raising the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and only slightly perturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
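As a hedged stand-in for the unsupervised clustering step described above, the sketch below runs a plain k-means on synthetic two-feature AE data. The mechanism names, feature choices (peak amplitude and peak frequency) and values are illustrative assumptions, not the study's actual feature set or clustering method.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means. Initial centres are taken at evenly spaced row
    indices for determinism; empty clusters keep their previous centre."""
    centers = X[:: len(X) // k][:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic AE features (peak amplitude in dB, peak frequency in kHz) for
# three hypothetical emission sources; values are illustrative only
matrix_cracking = rng.normal([45.0, 150.0], [3.0, 10.0], (100, 2))
debonding = rng.normal([60.0, 300.0], [3.0, 10.0], (100, 2))
fibre_breakage = rng.normal([85.0, 500.0], [3.0, 10.0], (100, 2))
X = np.vstack([matrix_cracking, debonding, fibre_breakage])

labels, centers = kmeans(X, 3)
```

On well-separated feature distributions like these, the clusters recover the underlying sources; in the study, such clusters were then labeled using the multi-instrumentation evidence to build the supervised learning database.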

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 135