Search results for: high myopia

1008 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86–99.99%) on 6047 blood donor samples and 99.71% (95% CI, 99.15–99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19–100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03–0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average difference since first reactive bleed of 0.13 bleeds. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
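
The exact binomial (Clopper-Pearson) method reproduces the confidence intervals quoted above; a minimal sketch (written here for illustration, not part of the study protocol) is:

```python
# A minimal sketch showing how the reported exact (Clopper-Pearson style)
# 95% confidence intervals can be reproduced from the abstract's counts.
from scipy.stats import beta

def exact_ci(successes: int, n: int, alpha: float = 0.05):
    """Clopper-Pearson exact binomial confidence interval."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

# Blood-donor specificity: 6044 true negatives out of 6047 (99.95%)
print(exact_ci(6044, 6047))   # ~ (0.9986, 0.9999)
# Clinical sensitivity: 455 reactive out of 455 (100.00%)
print(exact_ci(455, 455))     # ~ (0.9919, 1.0)
```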

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HBsAg, HBV, hepatitis B surface antigen, hepatitis B virus, immunoassay

Procedia PDF Downloads 87
1007 Steel Concrete Composite Bridge: Modelling Approach and Analysis

Authors: Kaviyarasan D., Satish Kumar S. R.

Abstract:

India, being vast in area and population with great scope for international business, expects major growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across the major rivers of India, and several are getting very old, so there is a strong likelihood of repairing existing bridges or building new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy, uneconomical sections. Such heavy steel bridges, when subjected to strong seismic shaking, are more likely to fail through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modelling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now used extensively for framed steel and concrete buildings to study their lateral behaviour, findings obtained for buildings cannot be applied directly to bridges, because bridges have completely different performance requirements, behaviour, and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Because a truss bridge is formed by many members and connections, the system does not fail suddenly with a single event or the failure of one member; failure usually initiates in one member and progresses gradually to the next, and so on, under further loading. This progressive collapse of the truss bridge structure depends on many factors, of which the live-load distribution and the span-to-length ratio are the most significant; the ultimate collapse is in any case governed by buckling of the compression members. For regular bridges, a single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for complicated bridges, such as heavy steel bridges, skewed bridges, or bridges with complex dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design of bridges has moved toward ascertaining performance levels based on the damage caused by seismic shaking. This is because building performance levels deal largely with life safety and collapse prevention, whereas bridges deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of approaches, from modelling to analysis, for steel-concrete composite truss bridges in general.
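
Since the abstract leans on static pushover analysis, a minimal illustrative sketch is given below (not from the paper; the stiffness, yield, and drift values are invented assumptions): a displacement-controlled pushover of a single-degree-of-freedom elastic-perfectly-plastic member, tracing the capacity curve such an analysis produces.

```python
# Illustrative only: displacement-controlled pushover of a 1-DOF
# elastic-perfectly-plastic system, tracking the capacity (pushover) curve.
# Stiffness, yield force, and drift values are arbitrary assumptions.
import numpy as np

k = 2.0e4      # elastic stiffness (kN/m), assumed
f_y = 150.0    # yield force (kN), assumed
u_steps = np.linspace(0.0, 0.10, 101)   # imposed displacement (m)

forces = []
u_plastic = 0.0  # accumulated plastic displacement
for u in u_steps:
    f_trial = k * (u - u_plastic)        # elastic predictor
    if abs(f_trial) > f_y:               # plastic corrector: cap the force
        u_plastic = u - np.sign(f_trial) * f_y / k
        f_trial = np.sign(f_trial) * f_y
    forces.append(f_trial)

# The (u_steps, forces) pairs trace the pushover curve: linear up to
# yield, then a flat plateau, mimicking progressive loss of stiffness.
for u, f in list(zip(u_steps, forces))[::20]:
    print(f"u = {u:.3f} m -> base shear = {f:7.1f} kN")
```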

Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge

Procedia PDF Downloads 183
1006 Human Dental Pulp Stem Cells Attenuate Streptozotocin-Induced Parotid Gland Injury in Rats

Authors: Gehan ElAkabawy

Abstract:

Background: Diabetes mellitus causes severe deterioration of almost all the organs and systems of the body, as well as significant damage to the oral cavity. The oral changes are mainly related to salivary gland dysfunction, characterized by hyposalivation and xerostomia, which significantly reduce diabetic patients’ quality of life. Human dental pulp stem cells represent a promising source for cell-based therapies, owing to their easy, minimally invasive surgical access and high proliferative capacity. It was reported that the trophic support mediated by dental pulp stem cells can rescue the functional and structural alterations of damaged salivary glands. However, the potential differentiation and paracrine effects of human dental pulp stem cells in diabetes-induced parotid gland damage have not been previously investigated. Our study aimed to investigate the therapeutic effects of intravenous transplantation of human dental pulp stem cells (hDPSCs) on parotid gland injury in a rat model of streptozotocin (STZ)-induced type 1 diabetes. Methods: Thirty Sprague-Dawley male rats were randomly categorised into three groups: control, diabetic (STZ), and transplanted (STZ+hDPSCs). hDPSCs or vehicle were injected into the tail vein 7 days after STZ injection. Fasting blood glucose levels were monitored weekly. A glucose tolerance test was performed, and the parotid gland weight, salivary flow rate, oxidative stress indices, parotid gland histology, and caspase-3, vascular endothelial growth factor (VEGF), and proliferating cell nuclear antigen (PCNA) expression in parotid tissues were assessed 28 days post-transplantation. Results: Transplantation of hDPSCs downregulated blood glucose, improved the salivary flow rate, and reduced oxidative stress. The cells migrated to, survived in, and differentiated into acinar, ductal, and myoepithelial cells in the STZ-injured parotid gland. Moreover, they downregulated the expression of caspase-3 and upregulated the expression of VEGF and PCNA, likely exerting pro-angiogenic and antiapoptotic effects and promoting endogenous regeneration. In addition, the transplanted cells enhanced the parotid nitric oxide (NO)-tetrahydrobiopterin (BH4) pathway. Conclusions: Our results show that hDPSCs can migrate to and survive within the STZ-injured parotid gland, where they prevent its functional and morphological damage by restoring normal glucose levels, differentiating into parotid cell populations, and stimulating paracrine-mediated regeneration. Thus, hDPSCs may have therapeutic potential in the treatment of diabetes-induced parotid gland injury.

Keywords: dental pulp stem cells, diabetes, streptozotocin, parotid gland

Procedia PDF Downloads 194
1005 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends

Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Besides, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have negative aspects, such as low calorific value and high moisture and volatile matter contents, compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment: it improves the calorific value of biomass along with substantial reductions in moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. In this way, the solid residue of torrefaction, also called 'biochar', was obtained and analyzed to monitor the variations taking place in biomass properties. On the other hand, some Turkish lignites from the Elbistan, Adıyaman-Gölbaşı, and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars; the blending ratio of biochar was kept at 10 wt%, so the lignites were the dominant constituents of the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C with a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, were compared to assess the effects of torrefaction and blending. Besides, some characterization techniques, including X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and scanning electron microscopy (SEM), were applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. Results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the ingredient fuels. Burnout and ignition performances of each blend were compared taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased due to the addition of biomass, the co-combustion process is a reasonable and sustainable solution owing to its environmental benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
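
One common way to quantify the synergy reported above, sketched here with invented data since the abstract gives no raw numbers, is to compare the measured TGA mass-loss curve of the blend against the additive (weighted-sum) prediction from the 90 wt% lignite and 10 wt% biochar ingredients.

```python
# Illustrative sketch: synergy in co-combustion is often judged by the
# deviation of the measured blend TGA curve from the additive-mixture
# prediction. The arrays below are placeholders, not measured data.
import numpy as np

temps = np.linspace(200, 900, 8)            # temperature grid (deg C)
mass_lignite = np.array([98, 90, 72, 55, 40, 32, 28, 27.0])  # wt%, assumed
mass_biochar = np.array([99, 96, 88, 70, 50, 38, 30, 28.0])  # wt%, assumed
mass_blend   = np.array([98, 90, 70, 52, 37, 29, 26, 25.0])  # wt%, assumed

w_biochar = 0.10                            # blending ratio from the study
predicted = (1 - w_biochar) * mass_lignite + w_biochar * mass_biochar

# Negative deviation = blend loses mass faster than the additive rule,
# i.e. a synergistic interaction between lignite and biochar.
deviation = mass_blend - predicted
for t, d in zip(temps, deviation):
    print(f"{t:5.0f} C: deviation {d:+.2f} wt%")
```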

Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment

Procedia PDF Downloads 338
1004 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. Classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. The data are rarely linearly separable; SVMs map the data into a higher-dimensional space where they become linearly separable, while the kernel allows all computations to be performed in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) that lead to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The class ground and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters was compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier with parameter selection over the other classifiers for LiDAR data.
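
A minimal sketch of the described tuning loop in scikit-learn could look as follows, assuming a generic feature matrix rather than the paper's actual per-cluster features: an RBF-kernel SVM whose (C, γ) pair is chosen by grid search with 5-fold cross-validation on overall accuracy.

```python
# Minimal sketch (assumed feature matrix X and labels y, not the paper's
# exact pipeline): RBF-kernel SVM with grid search over C and gamma,
# scored by overall accuracy with 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # placeholder per-cluster features
y = rng.integers(0, 4, size=200)       # 4 classes: ground + 3 roof types

param_grid = {
    "C":     [0.1, 1, 10, 100],
    "gamma": [1e-3, 1e-2, 1e-1, 1],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5,
                      scoring="accuracy")
search.fit(X, y)
print("best (C, gamma):", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```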

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 146
1003 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and have an important role in the production of high-added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power to classify batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing the milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy on the testing portion using the KNN classification technique.
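
For illustration, a minimal DTW distance between two trajectories of unequal length can be computed by dynamic programming as below; this is a generic textbook sketch with invented temperature profiles, not the KMT implementation evaluated in the paper.

```python
# Minimal DTW sketch (illustrative, not the KMT method): dynamic
# programming alignment of two univariate trajectories of unequal length,
# e.g. chocolate temperature profiles from two conching batches.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

ref   = np.array([45.0, 46.2, 48.0, 52.5, 55.0, 54.8])   # assumed profile
batch = np.array([45.1, 47.9, 52.3, 55.2, 54.9])          # shorter batch
print("DTW distance:", round(dtw_distance(ref, batch), 2))
```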

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 165
1002 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, defines constraints as RDF graphs called "shapes graphs". These shapes graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit expressing a variety of conditions evaluating string-matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in the SPARQL Protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as linking and enhancing different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to devise a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked events' creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for those various categories of validation tasks. The next plan is to establish a comprehensive validation model and generate SHACL shapes automatically.
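
The data type check described in (a) can be sketched with rdflib and pySHACL (an assumed toolchain; the abstract does not name its SHACL processor), using sh:targetSubjectsOf to validate that values of crm:P82a_begin_of_the_begin are typed xsd:dateTime. The example entity and its literal are invented.

```python
# Illustrative sketch (assumed toolchain and minimal invented data, not
# the study's actual graphs): validate with pySHACL that a value linked
# via crm:P82a_begin_of_the_begin is typed xsd:dateTime.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

data_ttl = """
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:birthOfPerson1 crm:P82a_begin_of_the_begin "1879-03-14" .
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")
conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: the literal is xsd:string, not xsd:dateTime
print(report)
```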

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 251
1001 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0 to 3. A score of 6 or above indicated some or definite difficulty in hearing in daily activities, while a low score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation by pure-tone audiometry. Further, environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai using a sound level meter (SLM9 Agronic 8928B & K type Digital Sound Level Meter), in dB (A). The maximum noise level of 100.0 dB (A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB (A); the minimum noise level in this area was 72.6 dB (A) at any given time. Further, 54.6 dB (A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commencement of flyovers with two-tier traffic, skywalks, the increasing number of vehicles on the road, high-rise buildings, and other commercial and urbanization activities in Mumbai have most probably increased the overall environmental noise levels, while trees, which acted as noise absorbers, have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or scored 6 or above underwent pure-tone audiometry (PTA), and the prevalence of hearing loss in hawkers and shopkeepers was found to be 19% (10% hawkers and 9% shopkeepers). The results indicate that the 29 (42.6%) out of 64 hawkers and 17 (47.2%) out of 36 shopkeepers who underwent PTA showed no significant difference in the percentage of noise-induced hearing loss. The study results also reveal that 19 (41.30%) of the 46 participants who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure-tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. Seven hawkers and eight shopkeepers had a mild notch, while three hawkers and one shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of hearing loss and that a 4/6 kHz notch is a strong marker of road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups, and early intervention, along with sustainable development and social and urban forestry, can help in this regard.

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 198
1000 The Expression of the Social Experience in Film Narration: Cinematic ‘Free Indirect Discourse’ in the Dancing Hawk (1977) by Grzegorz Krolikiewicz

Authors: Robert Birkholc

Abstract:

One of the basic issues related to the creation of characters in media such as literature and film is the representation of the characters' thoughts, emotions, and perceptions. This paper is devoted to the social perspective (or focalization) expressed in film narration. The aim of the paper is to show how the social point of view of the hero, conditioned by his origin and the environment from which he comes, can be created by using non-verbal, purely audiovisual means of expression. The issue will be considered through the example of the little-known Polish movie The Dancing Hawk (1977) by Grzegorz Królikiewicz, based on the novel by Julian Kawalec. The thesis of the paper is that the Polish director uses a narrative figure somewhat analogous to the literary form of free indirect discourse. In literature, free indirect discourse is formally 'spoken' by the external narrator, but the narration is clearly filtered through the language and thoughts of the character. According to some scholars (such as Roy Pascal), the narrator in this form of speech does not cite the character's words, but uses his way of thinking and imitates his perspective, sometimes with deep irony. Free indirect discourse is frequently used in Julian Kawalec's novel. Through linguistic stylization, the author tries to convey the socially determined perspective of a peasant who migrates to the big city after the Second World War. Grzegorz Królikiewicz expresses the same social experience through purely cinematic form in his adaptation of the book. Both Kawalec and Królikiewicz show the consequences of so-called 'social advancement' in Poland after 1945, when the communist party took over political power. Through the fate of the main character, Michał Toporny, the director presents the experience of peasants who left their villages and had to adapt to a new, urban space. However, the paper is not focused on the historical topic itself, but on the audiovisual form of the movie. Although Królikiewicz does not often use POV shots, the narration of The Dancing Hawk is filtered through the sensations of the main character, who feels uprooted and alienated in the new social space. The director captures the hero's feelings through very complex audiovisual procedures: high or low points of view (representing the 'social position'), a grotesque soundtrack, expressionist scenery, and associative editing. In this way, he manages to create the world from the perspective of a socially maladjusted and internally split subject. The Dancing Hawk is a successful attempt to adapt the subjective narration of the book to the 'language' of cinema. Mieke Bal's notion of focalization helps to describe 'free indirect discourse' as a transmedial figure for representing characters' perceptions. However, the polysemiotic medium of film also significantly transforms this figure of representation. The paper shows both the similarities and differences between literary and cinematic 'free indirect discourse.'

Keywords: film and literature, free indirect discourse, social experience, subjective narration

Procedia PDF Downloads 130
999 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option for removing ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymer were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not previously been used for membrane modification. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is mounted. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer units), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential, and contact angle before and after the deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating with either PEI or PSS. Moreover, the contact angle decreases monotonously with the amount of sprayed solution, and the hydrophilicity enhancement proved to be better with PSS (from 62 to 35°) than with PEI (from 62 to 53°). Values of zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference across both sides of a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH = 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except at very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, the rejection of a salt containing a divalent cation was increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase in negative charge induced by PSS deposition leads to an increase in NaCl rejection from 5 to 45%, due to electrostatic repulsion of the Cl- ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the best polymer concentration in the sprayed solution remains to be determined to optimize performance.
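
As a back-of-the-envelope illustration (the concentrations below are invented, not the study's measurements), the observed rejection R = 1 - Cp/Cf and the amount of monomer units sprayed per unit area follow directly from the quantities quoted above:

```python
# Illustrative arithmetic: observed ion rejection and sprayed monomer
# loading. Concentrations below are placeholders, not measured values.

def observed_rejection(c_permeate: float, c_feed: float) -> float:
    """R = 1 - Cp/Cf, the usual observed-rejection definition."""
    return 1.0 - c_permeate / c_feed

# e.g. feed 10 mmol/L NaCl, permeate 5.5 mmol/L -> R = 45%
print(f"R = {observed_rejection(5.5, 10.0):.0%}")

# Monomer units deposited per cm^2 for a sprayed volume of 4 uL/cm^2
# of a 13 mmol/L (monomer-unit) polymer solution:
volume_uL_per_cm2 = 4.0
conc_mmol_per_L = 13.0
nmol_per_cm2 = volume_uL_per_cm2 * 1e-6 * conc_mmol_per_L * 1e-3 * 1e9
print(f"{nmol_per_cm2:.0f} nmol of monomer units per cm^2")  # 52 nmol/cm^2
```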

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 186
998 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications

Authors: Marina Shargorodsky

Abstract:

Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and increased risk of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as on placental vascular circulation, inflammatory lesions, and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental findings were subdivided, according to the criteria of the Society for Pediatric Pathology, into lesions consistent with maternal and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple logistic regression analysis of IMT included variables based on their associations in univariate analyses with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol, and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly higher in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation, and neonatal complications. The precise mechanism of these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation, and pregnancy outcomes, deserves further investigation.

Keywords: obesity, pregnancy, complications, weight gain

Procedia PDF Downloads 51
997 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations

Authors: Ayesha Anjum, Arbaz Basha

Abstract:

Active galactic nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the site of emission. Some of the major classes of AGN are (a) blazars, with featureless spectra, subclassified as BL Lacertae objects, flat spectrum radio quasars (FSRQs), and others; (b) Seyferts, with prominent emission line features, classified into broad-line and narrow-line Seyferts of Type 1 and Type 2; and (c) quasars and other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects using automated photometric and spectroscopic methods. A sample of blazars was obtained from the third Fermi catalog. For variability analysis, we searched for light curves of these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 narrow-line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work is focused on studying the final sample of 256 objects in general and the Seyferts in particular. Because the classification is automated, SDSS has misclassified some of these objects as quasars, galaxies, and stars; reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both W1 and W2 bands. PMN J0948+0022 was observed between MJD 57154.79 and 58810.57, and PKS 1502+036 between MJD 57232.42 and 58517.11, which amounts to a period of over six years. The data are divided into epochs spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both W1 and W2 bands, confirming that the objects are variable at mid-infrared wavelengths on both long and short timescales. The sources were also examined for color variability: objects show either a bluer-when-brighter (BWB) trend or a redder-when-brighter (RWB) trend. A possible explanation for the BWB trend shown by the present objects is that the longer-wavelength radiation emitted by the source can be suppressed by high-energy radiation from the central source. Another result is that the smallest radius of the emission region is about one light-day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
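
The excess variance mentioned above has a standard definition in AGN variability work, sigma_NXS^2 = (S^2 - <sigma_err^2>) / <x>^2; a minimal sketch follows, with invented fluxes since the abstract reproduces no light-curve data.

```python
# Illustrative sketch of normalized excess variance, a standard AGN
# variability estimator: the sample variance minus the mean squared
# measurement error, normalized by the squared mean flux.
import numpy as np

flux = np.array([2.10, 2.35, 1.95, 2.60, 2.20])   # assumed W1 fluxes (mJy)
err  = np.array([0.05, 0.06, 0.05, 0.07, 0.05])   # assumed 1-sigma errors

mean = flux.mean()
s2 = flux.var(ddof=1)                  # unbiased sample variance S^2
nxs2 = (s2 - np.mean(err**2)) / mean**2
f_var = np.sqrt(nxs2) if nxs2 > 0 else float("nan")

print(f"sigma_NXS^2 = {nxs2:.4f}, F_var = {f_var:.3f}")
# nxs2 > 0 indicates variability above the measurement noise.
```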

Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE

Procedia PDF Downloads 128
996 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology

Authors: Mahdi Farajzadeh Ajirlou

Abstract:

Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of pregnancy or close follow-up should a cardiac anomaly be proved. Fetal heart detection plays an important part in the evaluation of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function, and to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. A Doppler ultrasound (DUS) image is a kind of real-time image that depicts blood vessels and soft tissues well. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set consisting of numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D scan, as online 4D (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation (STIC; offline 4D, a cyclic volume acquisition). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), minimally invasive catheter intervention, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.

Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter

Procedia PDF Downloads 37
995 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research will provide new insights into the risks of digitally embedded infrastructure. Through this research, we analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aiming at more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures- Critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do so, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection- Erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect data mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service- One of the simplest attacks, where an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware- A kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most promising solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
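
The concluding proposal, devices policing one another, can be illustrated with a deliberately simplified sketch: peers exchange readings and flag any device whose report deviates from the quorum median beyond a tolerance. This toy median check stands in for the full blockchain-backed consensus and is not the system evaluated in the research.

```python
# Toy sketch of peer-policing by consensus: each device reports a reading,
# and the quorum flags reporters that deviate from the median beyond a
# tolerance. Thresholds and readings are arbitrary assumptions.
from statistics import median

def flag_deviants(reports: dict[str, float], tol: float = 5.0) -> set[str]:
    """Return device ids whose reading deviates from the quorum median."""
    m = median(reports.values())
    return {dev for dev, value in reports.items() if abs(value - m) > tol}

reports = {
    "sensor-a": 21.3,
    "sensor-b": 21.9,
    "sensor-c": 20.8,
    "sensor-d": 58.4,   # erroneous or injected data
}
print(flag_deviants(reports))   # {'sensor-d'}
```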

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 106
994 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function

Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio

Abstract:

Pancreatic endocrine diseases include pathologies like insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It combines several independent public algorithms, such as the Adult Treatment Panel III (ATP-III), the triglycerides and glucose (TyG) index, homeostasis model assessment-insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help achieve a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98%, and the specificity was 99.13%. Consequently, the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated in a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost of each patient is very high and almost a third of the patients already tested were lost), so the new n was 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were erroneous. Thus, a final n of 152 patients was achieved. In this validation set, the results obtained were 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise in providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
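
Three of the published indices named above have compact standard formulas; a minimal sketch follows, with invented example values (the ATP-III criteria and HOMA-2, which is calculator-based, are omitted).

```python
# Standard published insulin-resistance indices used as inputs to the
# composite test. Example inputs are invented, not patient data.
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = glucose (mg/dL) x insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10 insulin + log10 glucose)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def tyg(glucose_mg_dl: float, triglycerides_mg_dl: float) -> float:
    """TyG index = ln(triglycerides x glucose / 2), both in mg/dL."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

g, ins, tg = 100.0, 10.0, 150.0     # invented fasting values
print(f"HOMA-IR = {homa_ir(g, ins):.2f}")   # ~2.47
print(f"QUICKI  = {quicki(g, ins):.3f}")    # ~0.333
print(f"TyG     = {tyg(g, tg):.2f}")        # ~8.92
```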

Keywords: algorithm, diabetes, laboratory medicine, non-invasive

Procedia PDF Downloads 32
993 The Hidden Mechanism beyond Ginger (Zingiber officinale Rosc.) Potent in vivo and in vitro Anti-Inflammatory Activity

Authors: Shahira M. Ezzat, Marwa I. Ezzat, Mona M. Okba, Esther T. Menze, Ashraf B. Abdel-Naim, Shahnas O. Mohamed

Abstract:

Background: In order to decrease the burden of the high cost of synthetic drugs, it is important to focus on phytopharmaceuticals. The aim of our study was to investigate the mechanism of the anti-inflammatory potential of ginger (Zingiber officinale Roscoe) and to correlate it to its bioactive phytochemicals. Methods: Various extracts, viz. water, 50%, 70%, 80%, and 90% ethanol, were prepared from ginger rhizomes. Fractionation of the aqueous extract (AE) was accomplished using Diaion HP-20. The in vitro anti-inflammatory activity of the different extracts and isolated compounds was evaluated by protein denaturation inhibition, membrane stabilization, protease inhibition, and anti-lipoxygenase assays. The in vivo anti-inflammatory activity of the AE was estimated by assessment of rat paw oedema after carrageenan injection. Prostaglandin E2 (PGE2) levels, certain inflammation markers (TNF-α, IL-6, IL-1α, IL-1β, INFr, MCP-1, MIP, RANTES, and NOx), and MPO activity in the paw oedema exudates were measured. Total antioxidant capacity (TAC) was also determined. Histopathological alterations of paw tissues were scored. Results: All the tested extracts showed significant (p < 0.1) anti-inflammatory activities. The highest percentage inhibition of heat-induced albumin denaturation (66%) was exhibited by the 50% ethanol extract (250 μg/ml). The 70 and 90% ethanol extracts (500 μg/ml) were more potent membrane stabilizers (34.5 and 37%, respectively) than diclofenac (33%). The 80 and 90% ethanol extracts (500 μg/ml) showed maximum protease inhibition (56%). The strongest anti-lipoxygenase activity was observed for the AE, which showed more significant lipoxygenase inhibition than diclofenac (58% and 52%, respectively) at the same concentration (125 μg/ml). Fractionation of the AE yielded four main fractions (Fr I-IV), which showed significant in vitro anti-inflammatory activity. Purification of Fr-III and IV led to the isolation of 6-paradol (G1), 6-shogaol (G2), methyl 6-gingerol (G3), 5-gingerol (G4), 6-gingerol (G5), 8-gingerol (G6), 10-gingerol (G7), and 1-dehydro-6-gingerol (G8). G2 (62.5 μg/ml), G1 (250 μg/ml), and G8 (250 μg/ml) exhibited potent anti-inflammatory activity in all the studied assays, while G4 and G5 exhibited moderate activity. In vivo administration of the AE ameliorated rat paw oedema in a dose-dependent manner. The AE (at 200 mg/kg) showed a significant reduction (60%) in PGE2 production. The AE at different doses (25-200 mg/kg) showed a significant reduction in inflammatory markers except for IL-1α. The AE (at 25 mg/kg) was superior to indomethacin in the reduction of IL-1β. Treatment of animals with the AE (100, 200 mg/kg) or indomethacin (10 mg/kg) showed significant reductions in TNF-α, IL-6, MCP-1, and RANTES levels and in MPO activity, by about 31, 57, and 32%; 65, 60, and 57%; 27, 41, and 28%; 23, 32, and 23%; and 66, 67, and 67%, respectively. The AE at 100 and 200 mg/kg was equipotent to indomethacin in reducing the NOx level and in increasing the TAC. Histopathological examination revealed very little inflammatory cell infiltration and oedema after administration of the AE (200 mg/kg) prior to carrageenan. Conclusion: Ginger's anti-inflammatory activity is mediated by inhibiting macrophage and neutrophil activation, as well as by negatively affecting monocyte and leukocyte migration. Moreover, it produced a dose-dependent decrease in pro-inflammatory cytokines and chemokines and replenished the total antioxidant capacity. We strongly recommend future investigations of ginger's effects on the potential signal transduction pathways.

Keywords: anti-lipoxygenase activity, inflammatory markers, 1-dehydro-6-gingerol, 6-shogaol

Procedia PDF Downloads 250
992 Effect of the Polymer Modification on the Cytocompatibility of Human and Rat Cells

Authors: N. Slepickova Kasalkova, P. Slepicka, L. Bacakova, V. Svorcik

Abstract:

Tissue engineering includes the combination of materials and techniques used for the improvement, repair, or replacement of tissue. Scaffolds, permanent or temporary materials, are used as supports for the creation of "new cell structures". A variety of materials can be used for this important component (the scaffold). The advantage of some polymeric materials is their cytocompatibility and possibility of biodegradation. Poly(L-lactic acid) (PLLA) is a biodegradable, semi-crystalline thermoplastic polymer that can be fully degraded into H2O and CO2. In this experiment, the effect of surface modification of the biodegradable polymer (performed by plasma treatment) on various cell types was studied. The surface parameters and changes in the physicochemical properties of the modified PLLA substrates were studied by different methods: surface wettability was determined by goniometry, surface morphology and roughness were studied with atomic force microscopy, and chemical composition was determined using photoelectron spectroscopy. The physicochemical properties were studied in relation to the cytocompatibility of human osteoblasts (MG 63 cells), rat vascular smooth muscle cells (VSMCs), and human stem cells (ASCs) of the adipose tissue in vitro. Fluorescence microscopy was chosen to study and compare cell-material interactions. Important parameters of cytocompatibility, such as adhesion, proliferation, viability, shape, and spreading of the cells, were evaluated. It was found that the modification leads to a change in surface wettability depending on the time of modification. Short exposures (10-120 s) can reduce the wettability of aged samples, while exposures longer than 150 s increase the contact angle of aged PLLA. The surface morphology is also significantly influenced by the duration of modification: the plasma treatment induces the formation of crystallites, whose number increases with increasing modification time. On the basis of the evaluation of physicochemical properties, the cells were cultivated on selected samples. Cell-material interactions are strongly affected by the material's chemical structure and surface morphology. It was proved that the plasma treatment of PLLA has a positive effect on the adhesion, spreading, homogeneity of distribution, and viability of all cultivated cells. This effect was even more apparent for the VSMCs and ASCs, which homogeneously covered almost the whole surface of the substrate after 7 days of cultivation. The viability of these cells was high (more than 98% for VSMCs, 89-96% for ASCs). This experiment is one part of basic research that aims to create scaffolds for tissue engineering with the subsequent use of stem cells and their "reorientation" towards bone cells or smooth muscle cells.

Keywords: poly(L-lactic acid), plasma treatment, surface characterization, cytocompatibility, human osteoblast, rat vascular smooth muscle cells, human stem cells

Procedia PDF Downloads 227
991 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India

Authors: Zafar Tabrez, Nizamuddin Khan

Abstract:

Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate, and edaphic factors. After the introduction of innovative factors and technological inputs, however, the green revolution occurred and the agricultural scene witnessed great change. The speedy and extensive introduction of technological change has been one of the crucial factors in the development of India's agriculture. This technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved seed varieties, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production, and greater economic return to farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially, the revolution started in Punjab, Haryana, and western Uttar Pradesh. With the efforts of the government, agricultural planners, and policy makers, the modern technocratic agricultural development scheme was later implemented in backward and marginal regions of the country as well. The agriculture sector occupies centre stage in India's social security and overall economic welfare; the country has attained self-sufficiency in food grain production and holds a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said, 'everything else can wait but not agriculture'. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the authors attempt to investigate extensively the mechanization of agriculture and the associated change, selecting the western Trans-Ghaghara plain as a case study with the block as the unit of study. The study area includes the districts of Gonda, Balrampur, Bahraich, and Shravasti, which incorporate 44 blocks. The study is based on secondary block-level data for the years 1997 and 2007. A wide range of variation and change in farm mechanization may be observed, i.e., in agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing implements, and tractors. It may further be noted that, due to the continuous decline in the size of land holdings and the outflow of people to the same nature of work or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.

Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern

Procedia PDF Downloads 311
990 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process

Authors: Stehr Melanie

Abstract:

The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases has been studied regularly in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with traditional communication structures in which an exogenous shock induced a shift in information behavior is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory-grid method, through which 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process (a sketch of this clustering step follows below). The clusters are then analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels, whereas in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners - especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers from the woodworking industry - the study shows wide-ranging starting points for a future-oriented segmentation of their marketing program by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.
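
As a rough illustration of the clustering step described above, the sketch below groups buying situations by their channel-relevance ratings using hierarchical clustering. The situation names, channels, and ratings are invented stand-ins for repertory-grid data, not the study's actual material.

```python
# Hedged sketch: cluster buying situations by information-channel relevance.
# Ratings (rows = buying situations, columns = channels) are invented examples.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

channels = ["sales visit", "trade show", "videoconference", "online platform"]
situations = ["standard screws", "custom fittings", "new CNC machine", "adhesives"]

# Relevance ratings on a 1-5 scale (hypothetical).
ratings = np.array([
    [1, 1, 2, 5],   # low-risk search good: digital channels dominate
    [4, 3, 4, 2],
    [5, 5, 3, 1],   # high-novelty trust good: face-to-face dominates
    [1, 2, 2, 5],
])

# Ward linkage on Euclidean distances between rating profiles.
tree = linkage(ratings, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

for situation, label in zip(situations, labels):
    print(f"cluster {label}: {situation}")
```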

Keywords: B2B buying process, crisis, economics of information theory, information channel

Procedia PDF Downloads 183
989 Exploring the Energy Saving Benefits of Solar Power and Hot Water Systems: A Case Study of a Hospital in Central Taiwan

Authors: Ming-Chan Chung, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang, Ming-Jyh Chen

Abstract:

Introduction: Hospital buildings require considerable energy for air conditioning, lighting, elevators, heating, and medical equipment. Energy consumption in hospitals is expected to increase significantly due to innovative equipment and continuous development plans; consequently, the environment and climate will be adversely affected. Hospitals should therefore consider transforming from their traditional role of saving lives to being at the forefront of global efforts to reduce carbon dioxide emissions. As healthcare providers, it is our responsibility to provide a high-quality environment while using as little energy as possible. Purpose/Methods: To compare the energy-saving benefits of solar photovoltaic systems and solar hot water systems, and to determine the proportion of electricity consumption effectively reduced after the installation of a solar photovoltaic system. To comprehensively assess the potential benefits of utilizing solar energy for both photovoltaic (PV) and solar thermal applications in hospitals, a solar PV system covering a total area of 28.95 square meters was installed in 2021. Approval was obtained from the Taiwan Power Company to integrate the system into the hospital's electrical infrastructure for self-use. To measure the performance of the system, a dedicated meter was installed to track monthly power generation, which was then converted into area output using an electric energy conversion factor. Results: Using the conversion formula between electrical and thermal energy, the energy output of the solar heating system and the solar photovoltaic system can be compared. The comparison draws upon data from February 2021 to February 2023, in which the solar heating system generated an average of 2.54 kWh of energy per panel per day, while the solar photovoltaic system produced 1.17 kWh per panel per day, a difference of approximately 2.17 times between the two systems. Conclusions: After conducting statistical analysis and comparisons, it was found that solar thermal heating systems offer higher energy output and greater benefits than solar photovoltaic systems. Furthermore, an examination of literature data and simulations of the energy and economic benefits of solar thermal water systems and solar-assisted heat pump systems revealed that solar thermal water systems have higher energy density values, shorter payback periods, and lower power consumption than solar-assisted heat pump systems. Through the monitoring and empirical research in this study, it has been concluded that a heat pump-assisted solar thermal water system represents a relatively superior energy-saving and carbon-reducing solution for medical institutions. Not only can this system help reduce overall electricity consumption and the use of fossil fuels, but it can also provide more effective heating solutions.
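
The per-panel comparison reported above can be checked with a few lines of arithmetic. The sketch below reproduces the 2.17x ratio from the reported averages; panel counts and operating days are not given in the abstract, so the annual figures are illustrative per-panel extrapolations only.

```python
# Verify the reported per-panel comparison (Feb 2021 - Feb 2023 averages).
solar_thermal_kwh_per_panel_day = 2.54  # solar hot water system (reported)
solar_pv_kwh_per_panel_day = 1.17       # solar photovoltaic system (reported)

ratio = solar_thermal_kwh_per_panel_day / solar_pv_kwh_per_panel_day
print(f"Thermal vs PV output per panel: {ratio:.2f}x")  # -> 2.17x

# Illustrative annual yield per panel (365 days, assuming the daily averages hold).
print(f"Thermal: {solar_thermal_kwh_per_panel_day * 365:.0f} kWh/panel/year")
print(f"PV:      {solar_pv_kwh_per_panel_day * 365:.0f} kWh/panel/year")
```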

Keywords: sustainable development, energy conservation, carbon reduction, renewable energy, heat pump system

Procedia PDF Downloads 80
988 Re-Development and Lost Industrial History: Darling Harbour of Sydney

Authors: Ece Kaya

Abstract:

Urban waterfront re-development has been a well-established phenomenon internationally since the 1960s. In cities throughout the world, old industrial waterfront land is being redeveloped into luxury housing, offices, tourist attractions, cultural amenities, and shopping centres. These developments are intended to attract high-income residents, tourists, and investors to the city. As urban waterfronts are iconic places for cities and catalysts for further development, they are often referred to as flagship projects. In Sydney, industrial waterfront re-development has been underway since the 1980s, beginning with the Darling Harbour project. The Darling Harbour waterfront was the main arrival and landing place for commercial and industrial shipping until the 1970s, and its urban development has continued since the establishment of the city. It was developed as a major industrial and goods-handling precinct in 1812, a use that continued until the mid-1970s. After becoming a redundant industrial waterfront, the area was ripe for re-development by 1984. Darling Harbour is now one of the world's most fascinating waterfront leisure and entertainment destinations, and its transformation has been considered a success story; this paper contests that view. Data collection was carried out using extensive archival document analysis. The data were obtained from the Australian Institute of Architects, City of Sydney Council Archive, Parramatta Heritage Office, Historic Houses Trust, National Trust, and University of Sydney libraries, the State Archive, State Library, and Sydney Harbour Foreshore Authority Archives. Public documents, primarily newspaper articles and design plans, were analysed to identify possible differences in motives and to determine the process of implementation of the waterfront redevelopments. It was also important to obtain historical photographs and descriptions to understand how the waterfront had been altered. Site maps from different time periods were examined to understand what kinds of changes happened to the urban landscape and how the developments affected the area. Newspaper articles and editorials were examined in order to discover which aspects of the projects reflected the area's history and heritage. The thematic analysis of the archival data helped establish that Darling Harbour is a historically important place, having represented a focal point for Sydney's industrial growth and the cradle of industrial development in European Australia. It was found that the development area was designated to be transformed into a place for tourist, educational, recreational, entertainment, cultural, and commercial activities, and as a result little evidence remains of its industrial past. This paper aims to discuss the industrial significance of Darling Harbour and to explain the changes to its industrial landscape. What is absent now is the layer of its history that creates the layers of meaning of the place, so its historic industrial identity is effectively lost.

Keywords: historical significance, industrial heritage, industrial waterfront, re-development

Procedia PDF Downloads 301
987 Role of Empirical Evidence in Law-Making: Case Study from India

Authors: Kaushiki Sanyal, Rajesh Chakrabarti

Abstract:

In India, on average, about 60 Bills are passed every year in Parliament (calculated from information on the websites of both Houses). These are debated in both the Lok Sabha (House of the People) and the Rajya Sabha (Council of States) before they are passed. However, lawmakers rarely use empirical evidence to make the case for a law; most of the time, they support a law on the basis of anecdote, intuition, and common sense. While these do play a role in law-making, without the necessary empirical evidence, laws often fail to achieve their desired results. The quality of legislative debates is an indicator of the efficacy of the legislative process through which a Bill is enacted. However, the study of legislative debates has not received much attention either in India or worldwide, due to the difficulty of objectively measuring the quality of a debate. Broadly, three approaches have emerged in the study of legislative debates. The rational-choice or formal approach shows that speeches vary based on different institutional arrangements, intra-party politics, and the political culture of a country. The discourse approach focuses on the underlying rules and conventions and how they impact the content of the debates. The deliberative approach posits that legislative speech can be reasoned, respectful, and informed. This paper aims to (a) develop a framework, comprising qualitative and quantitative indicators, to judge the quality of debates using the deliberative approach; (b) examine the legislative debates of three Bills passed in different periods as a demonstration of the framework; and (c) examine the broader structural issues that disincentivise MPs from scrutinizing Bills. The framework is intended to provide useful insights into legislators' knowledge of the subject, the depth of their scrutiny of Bills, and their inclination toward evidence-based research. The three Bills the paper examines are as follows. 1. The Narcotic Drugs and Psychotropic Substances Act, 1985: This act was passed to curb drug trafficking and abuse. However, it mostly failed to fulfill its purpose; consequently, it was amended thrice, but without much impact on the ground. 2. The Criminal Law (Amendment) Act, 2013: This act amended the Indian Penal Code to add a section on human trafficking. The purpose was to curb trafficking and penalise traffickers, pimps, and middlemen. However, the crime rate remains high while the conviction rate is low. 3. The Surrogacy (Regulation) Act, 2021: This act bans commercial surrogacy, allowing only relatives to act as surrogates as long as there is no monetary payment. Experts fear that instead of preventing commercial surrogacy, it will drive the activity underground, with the consequences borne by the surrogate, who would not be protected by law. The purpose of the paper is to objectively analyse the quality of parliamentary debates, gain insights into how MPs understand evidence, and deliberate on steps to incentivise them to use empirical evidence.

Keywords: legislature, debates, empirical, India

Procedia PDF Downloads 85
986 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa

Authors: Tashinga Matindike

Abstract:

By definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of a creative process; examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods, while archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and the premise of the argument outlined in this paper. This paper aims to define, discuss, and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology initiated a high degree of efficiency in its own right, which also resulted in the progressive transformation of computer-generated visual output. Subsequently, this had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. Digital technology and the digitisation of creative processes then opened up new avenues for collating and recording information. One of the first visual artists to make use of digital software in his creative productions was United States-based artist John Whitney, whose inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli. In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What positive changes have resulted in art production in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes on a multifaceted form, inclusive of the analysis of data attained by means of qualitative and quantitative approaches.

Keywords: digital art, digitisation, technology, archiving, transformation and repatriation

Procedia PDF Downloads 50
985 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis

Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov

Abstract:

Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization among patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, reaching 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) along the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ("double pigtail") was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a 7 Fr cystonasal drain was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative course was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases this required angiography and endovascular embolization of the gastroduodenal artery (a. gastroduodenalis); in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution; and in 1 (1.6%) patient, a combination of both methods was used. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%): in 3 (4.7%) patients, the cause of death was multiple organ failure, and in 1 (1.6%), severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP but also allows determination of further tactics for intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.

Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage

Procedia PDF Downloads 109
984 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome; it is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data on all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data on 100 adult trauma patients were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data on 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5, and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures must be taken, complex multivariable scoring algorithms may cause delay, whereas conventional coagulation tests give highly specific results.
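
The cut-off derivation described in the methodology can be sketched as ROC analysis with Youden's J statistic picking the threshold that best separates coagulopathic from non-coagulopathic patients. The data below are synthetic and purely illustrative; the study's actual cut-offs (e.g., INR ≥ 1.19) came from its own 490-patient retrospective cohort.

```python
# Hedged sketch of ROC-based cut-off selection (Youden's J), on synthetic data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic INR values: non-coagulopathic ~1.05, coagulopathic ~1.35 (illustrative).
inr = np.concatenate([rng.normal(1.05, 0.08, 300), rng.normal(1.35, 0.15, 190)])
coagulopathy = np.concatenate([np.zeros(300), np.ones(190)])

fpr, tpr, thresholds = roc_curve(coagulopathy, inr)
youden_j = tpr - fpr                 # Youden's J = sensitivity + specificity - 1
best = np.argmax(youden_j)           # threshold maximizing J

print(f"Optimal INR cut-off: {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```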

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 174
983 Concussion: Clinical and Vocational Outcomes from Sport Related Mild Traumatic Brain Injury

Authors: Jack Nash, Chris Simpson, Holly Hurn, Ronel Terblanche, Alan Mistlin

Abstract:

There is an increasing incidence of mild traumatic brain injury (mTBI) cases throughout sport and, with this, a growing interest from governing bodies in ensuring these are managed appropriately and player welfare is prioritised. The Berlin consensus statement on concussion in sport recommends a multidisciplinary approach when managing patients who do not have full resolution of mTBI symptoms. There is as yet no standardised guideline for the treatment of complex cases of mTBI in athletes. The aim of this project was to analyse the outcomes, both clinical and vocational, of all patients admitted to the mild traumatic brain injury (mTBI) service at the UK's Defence Medical Rehabilitation Centre Headley Court between 1 June 2008 and 1 February 2017 as a result of a sport-induced injury, and to evaluate potential predictive indicators of outcome. Patients were identified from a database maintained by the mTBI service. Clinical and occupational outcomes were ascertained from medical and occupational employment records, recorded prospectively, at the time of discharge from the mTBI service. Outcomes were graded based on the vocational independence scale (VIS) and clinical documentation at discharge. Predictive indicators, including referral time, age at time of injury, previous mental health diagnosis, and a financial claim in place at the time of entry to the service, were assessed using logistic regression. Forty-five patients were treated for sport-related mTBI during this time frame. Clinically, 96% of patients had full resolution of their mTBI symptoms after input from the mTBI service. 51% of patients returned to work at their previous vocational level, 4% had ongoing mTBI symptoms, 22% had ongoing physical rehabilitation needs, 11% required mental health input, and 11% required further vestibular rehabilitation. Neither age, time to referral, pre-existing mental health condition, nor compensation seeking had a significant impact on either vocational or clinical outcome in this population. The vast majority of patients reviewed in the mTBI clinic had persistent symptoms which could not be managed in primary care. A consultant-led, multidisciplinary approach to the diagnosis and management of mTBI resulted in excellent clinical outcomes in these complex cases. High levels of symptom resolution suggest that this referral and treatment pathway is successful and is a model which could be replicated in other organisations with consultant-led input. Further research is now required to ascertain the key predictive indicators of outcome following sport-related mTBI, which would allow clinicians to focus treatment on those most likely to develop long-term complications.
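
A minimal sketch of the logistic-regression step follows, assuming a binary outcome coding and the four predictors named in the abstract. The data frame is synthetic and the column names are illustrative stand-ins for the service's records, not the study's data.

```python
# Hedged sketch: logistic regression of outcome on predictive indicators,
# mirroring the analysis described (synthetic data, illustrative variables).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 45  # cohort size reported in the study

df = pd.DataFrame({
    "age": rng.normal(28, 6, n),                   # age at time of injury
    "referral_weeks": rng.normal(12, 5, n),        # time to referral
    "prior_mental_health": rng.integers(0, 2, n),  # previous diagnosis (0/1)
    "compensation_claim": rng.integers(0, 2, n),   # claim at entry (0/1)
    "returned_to_duty": rng.integers(0, 2, n),     # 1 = previous vocational level
})

X = sm.add_constant(df[["age", "referral_weeks",
                        "prior_mental_health", "compensation_claim"]])
model = sm.Logit(df["returned_to_duty"], X).fit(disp=0)
print(model.summary2())  # p-values indicate which predictors matter
```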

Keywords: brain injury, concussion, neurology, rehabilitation, sports injury

Procedia PDF Downloads 156
982 Estimating Industrial Pollution Load in Phnom Penh by Industrial Pollution Projection System

Authors: Vibol San, Vin Spoann

Abstract:

Manufacturing plays an important role in job creation around the world; in 2013, it was estimated that there were more than half a billion manufacturing jobs. In Cambodia in 2015, the primary industry occupied 26.18% of the total economy, while agriculture contributed 29% and the service sector 39.43%. The number of industrial factories, dominated by garment and textiles, has increased since 1994, mainly in Phnom Penh city; approximately 56% of the total of 1,302 firms operate in the capital. Industrialization undertaken to achieve economic growth and social development is directly responsible for environmental degradation, threatening ecosystems and human health. About 96% of the firms in Phnom Penh city are among the most and moderately polluting firms, which has contributed to environmental concerns. Despite an increasing array of laws, strategies, and action plans in Cambodia, the Ministry of Environment has encountered constraints in conducting monitoring work, including a lack of human and financial resources, a lack of research documents, limited analytical knowledge, and a lack of technical references. Therefore, the information on industrial pollution necessary to set strategies, priorities, and action plans on environmental protection issues is absent in Cambodia, and in the absence of these data, effective environmental protection cannot be implemented. The objective of this study is to estimate the industrial pollution load by employing the Industrial Pollution Projection System (IPPS), a rapid environmental management tool for the assessment of pollution load, to produce a scientifically rational basis for preparing future policy directions to reduce industrial pollution in Phnom Penh city. In the absence of measured industrial pollution data for Phnom Penh, industrial emissions to air, water, and land, as well as the sum of emissions to all media, are estimated using the employment economic variable in IPPS. Due to the high number of employees, the total environmental load generated in Phnom Penh city is estimated at 476,980.93 tons in 2014, the highest industrial pollution load of any location in Cambodia. The result clearly indicates that Phnom Penh city is the highest emitter of all pollutants in comparison with the environmental pollutants released by other provinces; the total emission of industrial pollutants in Phnom Penh constitutes 55.79% of the total industrial pollution load in Cambodia. Phnom Penh city generated 189,121.68 tons of VOC, 165,410.58 tons of toxic chemicals to air, 38,523.33 tons of toxic chemicals to land, and 28,967.86 tons of SO2 in 2014. The results of the estimation show that the Textile and Apparel sector is the highest generator of toxic chemicals to land and air, and of toxic metals to land, air, and water, while the Basic Metal sector is the highest contributor of toxic chemicals to water. The Textile and Apparel sector alone emits 436,015.84 tons of the total industrial pollution load. The results suggest that reductions in industrial pollution could be achieved by focusing on the most polluting sectors.
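
IPPS estimates emissions by multiplying sectoral employment by per-employee pollution-intensity coefficients. The sketch below shows this mechanic only; the coefficients and employment figures are invented placeholders, not the actual IPPS factors published in the World Bank documentation or the study's inputs.

```python
# Hedged sketch of the IPPS mechanic: load = employment x intensity coefficient.
# Intensity values below are invented placeholders, NOT actual IPPS factors.

# Hypothetical pollution intensity (tons of pollutant per 1,000 employees per year).
intensity = {
    "textile_apparel": {"voc_air": 9.0, "toxic_air": 7.5, "toxic_land": 1.8},
    "basic_metal":     {"voc_air": 2.1, "toxic_air": 3.4, "toxic_water": 4.9},
}

# Hypothetical employment by sector (thousands of employees).
employment = {"textile_apparel": 210.0, "basic_metal": 4.2}

total_load = 0.0
for sector, workers in employment.items():
    for pollutant, factor in intensity[sector].items():
        load = workers * factor  # tons per year for this sector and pollutant
        total_load += load
        print(f"{sector:16s} {pollutant:12s} {load:10.1f} t/yr")

print(f"Total estimated load: {total_load:.1f} t/yr")
```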

Keywords: most polluting area, polluting industry, pollution load, pollution intensity

Procedia PDF Downloads 259
981 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also showcases how successful an organization is in holding on to its customers. It is well established that the lion's share of profit comes from existing customers, so seamless management of renewal journeys across different channels goes a long way toward improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, and post-production testing (if required). This section will describe the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application; the workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will discuss the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the whitepaper will elaborate on the best practices to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's real-world experience with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight key learnings which will help other teams implement time travel testing successfully.
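
As an illustration of the planning and automation pillars, the sketch below freezes system time around a renewal boundary in an automated test. The `renewal_offer_for` function and its 90-day offer window are hypothetical stand-ins for an organization's actual renewal logic, and freezegun is one common library choice for simulating time travel without touching system clocks or environment dates.

```python
# Hedged sketch: simulating time travel in a test with freezegun.
# renewal_offer_for() and the 90-day window are hypothetical examples.
from datetime import date, timedelta
from typing import Optional
from freezegun import freeze_time

def renewal_offer_for(contract_end: date, today: Optional[date] = None) -> bool:
    """Offer renewal within 90 days of contract end (illustrative rule)."""
    today = today or date.today()
    return timedelta(0) <= contract_end - today <= timedelta(days=90)

contract_end = date(2024, 6, 30)

# Travel forward to just inside the offer window: the offer should appear.
with freeze_time("2024-04-15"):
    assert renewal_offer_for(contract_end)

# Travel to well before the window: no offer should be shown.
with freeze_time("2023-12-01"):
    assert not renewal_offer_for(contract_end)

print("renewal window behaves correctly under simulated dates")
```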

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 158
980 Teaching English as a Foreign Language: Insights from the Philippine Context

Authors: Arlene Villarama, Micol Grace Guanzon, Zenaida Ramos

Abstract:

This paper provides insights into teaching English as a foreign language in the Philippines. The authors reviewed relevant theories and literature and analysed the issues in teaching English in the Philippine setting in the light of these theories. The authors conducted an investigation at Bagong Barrio National High School (BBNHS), a public school in Caloocan City with a population of nearly 3,000 students, scrutinising the performance of 365 randomly chosen respondents. The investigation examined the factors behind the success of teaching English as a foreign language to Filipino children, including the respondents' family background, surroundings, way of living, and their behavior and understanding regarding education. The results show a significant relationship between the emotional, social, and intellectual areas that affect the efficacy of introducing English as a foreign language. Filipino children, by nature, are adventurous and naturally joyful even about little things. They are born with natural skills and capabilities to discover new things, value activities and work that ignite their curiosity, and love to be recognised; they are inspired most when given the assurance of acceptance and belongingness. Fun is the appealing influence that ignites and motivates learning; the magic word is excitement. The study reveals the many facets of the accumulation and transmission of knowledge in the introduction and administration of English as a foreign language: knowledge runs and passes through different channels of diffusion, and along the way there are particles that act as obstructions in the protocols through which knowledge is to be gathered. Data gained from the respondents reveal a reality that is beyond one's imagination. One significant factor behind the inefficacy of understanding and using English as a foreign language is an erroneous outlook gained from an old belief handed down from generation to generation. This accepted perception about the power and influence of the use of the language gives novices either a negative or a positive notion. The investigation shows that much of the dislike of the use of English can be traced to beliefs about how the English language came into existence; the belief that only the great and the influential have the right to use English as a means of communication kills the joy of acceptance. These notions have to be examined so as to provide a solution, or at least to eradicate the misconceptions that lie behind the substance of the matter. The authors' research depicts a substantial correlation between the emotional (demonstrative), social (communal), and intellectual (logical) areas. The focus of this paper is to bring out the right notions and disclose the misconceptions with regard to teaching English as a foreign language, concentrating on the emotional, social, and intellectual areas of Filipino learners and how these areas affect the transmittance and accumulation of learning. The authors' aim is to formulate logical ways and techniques that would open up new beginnings in the understanding and acceptance of the subject matter.

Keywords: accumulation, behaviour, facets, misconceptions, transmittance

Procedia PDF Downloads 203
979 Parenting Interventions for Refugee Families: A Systematic Scoping Review

Authors: Ripudaman S. Minhas, Pardeep K. Benipal, Aisha K. Yousafzai

Abstract:

Background: Children of refugee or asylum-seeking background have multiple, complex needs (e.g. trauma, mental health concerns, separation, relocation, poverty) that place them at increased risk of developing learning problems. Families encounter challenges accessing support during resettlement, preventing children from achieving their full developmental potential. Very few studies in the literature examine the unique parenting challenges refugee families face. Providing appropriate support services and educational resources that address these distinctive concerns of refugee parents would alleviate these challenges, allowing for better developmental outcomes for children. Objective: To identify the characteristics of effective parenting interventions that address the unique needs of refugee families. Methods: English-language articles published from 1997 onwards were included if they described or evaluated programmes or interventions for parents of refugee or asylum-seeking background, globally. Data were extracted and analyzed according to Arksey and O'Malley's descriptive analysis model for scoping reviews. Results: Seven studies met the criteria and were included, primarily studying families settled in high-income countries. Refugee parents identified parenting as a major concern, citing alienating or unwelcoming services, language barriers, and lack of familiarity with school and early years services. Services that focused on building parents' resilience, provided parent education or services in the family's native language, and offered families safe spaces to promote parent-child interactions were the most successful. Home-visit and family-centered programs showed particular success, minimizing barriers such as transportation and inflexible work schedules while allowing caregivers to receive feedback from facilitators. The vast majority of studies evaluated programs implementing existing curricula and frameworks. Interventions were designed in a prescriptive manner, without direct participation by family members and without directly addressing accessibility barriers. The studies also did not employ evaluation measures of parenting practices, the caregiving environment, or child development outcomes, focusing primarily on parental perceptions. Conclusion: There is scarce literature describing parenting interventions for refugee families. Successful interventions focused on building parenting resilience and capacity in families' native languages. To date, no studies employ a participatory approach to program design to tailor content or accessibility, and few employ parenting, developmental, behavioural, or environmental outcome measures.

Keywords: asylum-seekers, developmental pediatrics, parenting interventions, refugee families

Procedia PDF Downloads 160