Search results for: structural control system representations
1049 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
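The cost argument above (one adjoint solve yields all derivatives, while finite differences need one extra forward solve per derivative) can be illustrated with a toy sketch. The 2×2 system, source term and measurement functional below are purely illustrative stand-ins, not the paper's discretized potential or Maxwell formulations:

```python
import numpy as np

def A(p):
    # Toy parameter-dependent system matrix (hypothetical stand-in
    # for a discretized forward operator).
    return np.array([[p[0], 1.0], [1.0, p[1]]])

b = np.array([1.0, 2.0])   # source term
c = np.array([1.0, -1.0])  # measurement functional: m(p) = c . u(p)

def measure(p):
    return c @ np.linalg.solve(A(p), b)

def adjoint_gradient(p):
    u = np.linalg.solve(A(p), b)        # one forward solve
    lam = np.linalg.solve(A(p).T, c)    # one adjoint solve
    dA = [np.array([[1.0, 0.0], [0.0, 0.0]]),
          np.array([[0.0, 0.0], [0.0, 1.0]])]
    # dm/dp_i = -lam^T (dA/dp_i) u  -- no further solves per parameter
    return np.array([-lam @ (dAi @ u) for dAi in dA])

def fd_gradient(p, h=1e-6):
    # Finite differences: one additional forward solve per parameter
    m0 = measure(p)
    return np.array([(measure(p + h * e) - m0) / h for e in np.eye(2)])

p = np.array([3.0, 4.0])
print(adjoint_gradient(p))
print(fd_gradient(p))
```

The two printed gradients agree to finite-difference accuracy, while the adjoint version's cost stays fixed as the number of parameters grows.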
Procedia PDF Downloads 178
1048 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms have shown high mechanical resilience and resistance to antibiotic treatment, and constitute a significant problem in both healthcare and other industries related to microorganisms. The massive body of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from different written resources. So, the current work proposes and discusses the use of text mining techniques for the extraction of information from biofilm literature corpora containing 34306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e. free-text. Therefore, we considered an unsupervised approach, where no annotated training data are necessary, and using this approach we developed a system that will classify the text on the basis of growth and development, drug effects, radiation effects, classification and physiology of biofilms. For this, a two-step structure was used: the first step is to extract keywords from the biofilm literature using a metathesaurus and standard natural language processing tools like Rapid Miner_v5.3, and the second step is to discover relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied an unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the above-extracted datasets to develop classifiers using the WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text by using the mentioned sets.
The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for a semi-automatic labeling of the extracted relations. The entire body of information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
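As a minimal sketch of the keyword-driven, unsupervised labeling step described above: the category lexicon here is invented for illustration, far smaller than the metathesaurus-derived vocabulary used in the study, and the tokenization is deliberately naive:

```python
# Hypothetical keyword lexicon per category; the study derives its
# vocabulary from a metathesaurus, these terms are illustrative only.
CATEGORIES = {
    "growth and development": {"growth", "development", "maturation"},
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation effects": {"radiation", "uv", "irradiation"},
    "physiology": {"metabolism", "matrix", "polymeric", "hydrated"},
}

def classify(abstract: str) -> str:
    """Label a document by the category whose keywords it matches most."""
    tokens = set(abstract.lower().split())
    scores = {cat: len(tokens & kw) for cat, kw in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("Biofilms show antibiotic resistance under drug treatment"))
# → drug effects
```

No labeled training data is needed: the lexicon itself drives the labeling, which is why the authors can call the approach semi-automatic and unsupervised.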
Procedia PDF Downloads 170
1047 Job Resource, Personal Resource, Engagement and Performance with Balanced Score Card in the Integrated Textile Companies in Indonesia
Authors: Nurlaila Effendy
Abstract:
Companies in Asia face a number of constraints under the tight competitiveness of the 2015 ASEAN Economic Community and globalization. An economic capitalism system, as an integral part of the globalization process, brings broad impacts, and companies need to improve business performance under globalization and the ASEAN Economic Community. Organizational development has quite clearly demonstrated that aligning individuals' personal goals with the goals of the organization translates into measurable and sustained performance improvement. Human capital is a key to achieving company performance. Through Employee Engagement (EE), employees create and express themselves physically, cognitively and emotionally to achieve company goals and individual goals. Employees experience total involvement when they undertake their jobs and feel integrated with their job and organization. A leader plays a key role in attaining the goals and objectives of a company or organization, and any manager in a company needs leadership competence and a global mindset. As one of the positive organizational behavior developments, psychological capital (PsyCap) is assumed to be one of the most important capitals in the global mindset, in addition to intellectual capital and social capital. Textile companies likewise face a number of constraints in tight regional and global competition. This research involved 42 managers in two textile companies and a spinning company belonging to one group in Central Java, Indonesia. It is a quantitative study using Partial Least Squares (PLS), examining job resources (social support and organizational climate) and personal resources (the four dimensions of psychological capital and leadership competence) as predictors of employee engagement, and employee engagement and leadership competence as predictors of the leader's performance. The performance of a leader is measured by means of achievement of objective strategies in terms of four perspectives (financial and non-financial) in a Balanced Score Card (BSC).
The study covered one year, the business-plan year 2014, from January to December 2014. The results show a correlation between job resources (coefficient value of social support 0.036; coefficient value of organizational climate 0.220) and personal resources (coefficient value of PsyCap 0.513; coefficient value of leadership competence 0.249) with employee engagement, and a correlation between employee engagement (coefficient value 0.279) and leadership competence (coefficient value 0.581) with performance.
Keywords: organizational climate, social support, psychological capital, leadership competence, employee engagement, performance, integrated textile companies
Procedia PDF Downloads 433
1046 Interference of Polymers Addition in Wastewaters Microbial Survey: Case Study of Viral Retention in Sludges
Authors: Doriane Delafosse, Dominique Fontvieille
Abstract:
Background: Wastewater treatment plants (WWTPs) generally display significant efficacy in virus retention, yet this efficacy is sometimes highly variable, partly in relation to large fluctuating loads at the head of the plant and partly because of episodic dysfunctions in some treatment processes. The problem is especially sensitive when human enteric viruses, such as human Noroviruses Genogroup I or Adenoviruses, are concerned: their release downstream of the WWTP, into environments often interconnected with recreational areas, may be very harmful to human communities even at low concentrations. This points out the importance of permanent WWTP monitoring, from which internal treatment processes could be adjusted. One way to adjust primary treatments is to add coagulants and flocculants to the sewage ahead of the settling tanks to improve decantation. In this work, sludge produced by three coagulants (two organic, one mineral), four flocculants (three cationic, one anionic), and their combinations was studied for its efficacy in human enteric virus retention. Sewage samples came from a WWTP in the vicinity of the laboratory. All experiments were performed three times and in triplicate in laboratory pilots, using Murine Norovirus (MNV-1), a surrogate of human Norovirus, as an internal control (spiking). Viruses were quantified by (RT-)qPCR after nucleic acid extraction from both the treated water and the sediment. Results: Low values of sludge virus retention (from 4 to 8% of the initial sewage concentration) were observed with each cationic organic flocculant added to wastewater without coagulant. The largest part of the virus load was detected in the treated water (48 to 90%); however, it did not counterbalance the amount of introduced virus (MNV-1). These results pertained to two types of cationic flocculants, branched and linear, and in the latter case, to two percentages of cations.
Results were quite similar for the association of a linear cationic organic coagulant and an anionic flocculant, though they suggest that differences between water and sludges may sometimes be related to virus size or virus origin (autochthonous/allochthonous). FeCl₃, as a mineral coagulant associated with an anionic flocculant, significantly increased both auto- and allochthonous virus retention in the sediments (15 to 34%). Accordingly, the virus load in treated water was lower (14 to 48%), but the total still did not reach the amount of introduced virus (MNV-1). It also appeared that virus retrieval from a bare 0.1M NaCl suspension varied rather strongly according to the FeCl₃ concentration, suggesting an inhibiting effect on the molecular analysis used to detect the virus. Finally, no viruses were detected in either phase (sediment and water) with the combination of a branched cationic coagulant and a linear anionic flocculant, which was later demonstrated to be, here also, an effect of the polymers on the molecular analysis used for virus detection. Conclusions: The combination of FeCl₃ and an anionic flocculant gave the decantation-based virus removal process its highest performance. However, large unbalanced values in spiking experiments were observed, suggesting that polymers cast additional obstacles to both the elution buffer and the lysis buffer on their way to the virus. The situation was probably even worse with autochthonous viruses already embedded in the sewage's particulate matter. Polymers and FeCl₃ also appeared to interfere with some steps of the molecular analyses. More attention should be paid to such impediments wherever chemical additives are considered for enhancing WWTP processes. Acknowledgments: This research was supported by the ABIOLAB laboratory (Montbonnot Saint-Martin, France) and by the ASPOSAN association.
Field experiments were possible thanks to the Grand Chambéry WWTP authorities (Chambéry, France).
Keywords: flocculants-coagulants, polymers, enteric viruses, wastewater sedimentation treatment plant
Procedia PDF Downloads 125
1045 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding activates, through the application of heat, the binding agent already impregnated onto the tape. The stitching method is used as a baseline to compare the new splicing methods against the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
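The reported peak loads can be checked against the 50 N tensile requirement with a trivial sketch (load values are taken from the abstract; the 6-second creation-time criterion is left out because per-method splice times are not reported here):

```python
# Maximum splice loads (N) reported in the abstract; 50 N is the
# HSAFP tensile specification quoted in the text.
MAX_LOADS_N = {"adhesive": 500, "binding": 94, "stitching": 26}
REQUIRED_LOAD_N = 50

def meets_load_spec(method: str) -> bool:
    """True if the method's peak load meets the 50 N requirement."""
    return MAX_LOADS_N[method] >= REQUIRED_LOAD_N

passing = [m for m in MAX_LOADS_N if meets_load_spec(m)]
print(passing)  # → ['adhesive', 'binding']
```

Only the hand-stitched baseline falls short of the specification, which is consistent with the paper's conclusion that the adhesive splice is the strongest automated option.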
Procedia PDF Downloads 155
1044 Changes in Cognition of Elderly People: A Longitudinal Study in Kanchanaburi Province, Thailand
Authors: Natchaphon Auampradit, Patama Vapattanawong, Sureeporn Punpuing, Malee Sunpuwan, Tawanchai Jirapramukpitak
Abstract:
Longitudinal studies related to cognitive impairment in the elderly are necessary for health promotion and development. The purposes of this study were (1) to examine changes in the cognition of the elderly over time and (2) to examine the impacts of changes in social determinants of health (SDH) on changes in cognition, using secondary data derived from the Kanchanaburi Demographic Surveillance System (KDSS) of the Institute for Population and Social Research (IPSR), which contains longitudinal data on individuals, households, and villages. The two selected projects were the Health and Social Support for Elderly in KDSS in 2007 and the Population, Economic, Social, Cultural, and Long-term Care Surveillance for Thai Elderly People’s Health Promotion in 2011. The sample comprised 586 elderly people who participated in both projects. The SDH included living arrangement, social relationships with children, relatives, and friends, household asset-based wealth index, household monthly income, loans for living, loans for investment, and working status. Cognitive impairment was measured by category fluency and delayed recall. This study employed a Generalized Estimating Equation (GEE) model to investigate changes in cognition, taking SDH and other variables such as age, gender, marital status, education, and depression into the model. An unstructured correlation structure was selected for the analysis. The results revealed that 24 percent of the elderly had cognitive impairment at baseline. About 13 percent still had cognitive impairment from 2007 until 2011, while about 21 percent and 11 percent showed cognitive decline and cognitive improvement, respectively. The cross-sectional analysis showed that household asset-based wealth index, social relationship with friends, working status, age, marital status, education, and depression were significantly associated with cognitive impairment.
The GEE model revealed longitudinal effects of household asset-based wealth index and working status on cognition from 2007 until 2011. There was no longitudinal effect of social conditions on cognition. Elderly people living in households with a higher asset-based wealth index, those still employed, and younger respondents were less likely to have cognitive impairment. The results strongly suggest that a poorer household asset-based wealth index and being unemployed serve as risk factors for cognitive impairment over time. Increasing age remains the major risk factor for cognitive impairment as well.
Keywords: changes in cognition, cognitive impairment, elderly, KDSS, longitudinal study
Procedia PDF Downloads 141
1043 Ensuring Sustainable Urban Mobility in Indian Cities: Need for Creating People Friendly Roadside Public Spaces
Authors: Pushplata Garg
Abstract:
Mobility is an integral part of urban living, and the sustainability of urban mobility is essential not only in its own right but also for addressing global warming and climate change. However, very little is understood, from the public perspective, about the obstacles and likely challenges to the success of plans for sustainable urban mobility in Indian cities. Whereas some of the problems and issues are common to all cities, others vary considerably with financial status, function, the size of cities and the culture of a place. Problems and issues similar across all cities relate to the availability, efficiency and safety of public transport, last mile connectivity, universal accessibility, and the essential planning and design requirements of pedestrians and cyclists. However, certain aspects, like the type of means of public transportation, the priority for cycling and walking, and the type of roadside activities, are influenced by the size of the town, the average educational and income level of the public, the financial status of the local authorities, and the culture of a place. The extent of public awareness, civic sense, maintenance of public spaces and law enforcement vary significantly from large metropolitan cities to small and medium towns in countries like India. Besides, design requirements for shading, the location of public open spaces and sitting areas, street furniture, and landscaping also vary depending on the climate of the place. Last mile connectivity plays a major role in the success and effectiveness of a city's public transport system. In addition to the provision of pedestrian footpaths connecting important destinations, sitting spaces and necessary amenities and facilities along footpaths, pedestrian movement to public transit stations is encouraged by the presence of quality roadside public spaces. It is not only the visual attractiveness of the streetscape, landscape or public open spaces along pedestrian movement channels, but the activities along them, that make a street vibrant and attractive.
These, along with adequate spaces to rest and relax, encourage people to walk, as is observed in cities with successful public transportation systems. The paper discusses the problems and issues of pedestrians for last mile connectivity in the context of Delhi, Chandigarh, Gurgaon, and Roorkee, four Indian cities representing varying urban contexts, that is, metropolitan, large and small cities.
Keywords: pedestrianisation, roadside public spaces, last mile connectivity, sustainable urban mobility
Procedia PDF Downloads 251
1042 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat
Authors: Bader Eddin Al Asali, N. Srinivasa Reddy
Abstract:
Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services and markets, which offer opportunities for production and consumption. Often, growth and development in urban areas is not systematic and is directed by a number of factors like natural growth, land prices, housing availability, job locations in the central business district (CBD), transportation routes, distribution of resources, geographical boundaries, administrative policies, etc. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city, and urban form is the location and size of the economic activity (CBD) over the geographical space. Urban shape is the geometrical shape of the city over which the population and economic activity are distributed, and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among the urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among the land uses is measured in Passenger-Km and mean trip length, and is often used as a proxy for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent; they originate from the place of residence and are destined for the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building-block exercise using a specially developed interactive C++ programme, and land-use transportation modeling is carried out.
The land-use transportation modeling exercise helps in understanding the role of urban parameters and also in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for classification. Also, a pilot survey was carried out to capture urban travel characteristics. Analysis of the land-use transportation modeling together with the field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy making for the sustainability of Muscat City.
Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters
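The Passenger-Km and mean-trip-length measures used above as proxies for transport energy consumption reduce to simple weighted sums; a sketch over hypothetical work-trip data (the zones, passenger counts and distances are invented for illustration):

```python
# Hypothetical work trips: (origin, destination, passengers, trip_km).
trips = [("zone A", "CBD", 1200, 8),
         ("zone B", "CBD", 800, 15),
         ("zone C", "CBD", 500, 4)]

# Passenger-Km: total travel effort, weighting each trip by its riders.
passenger_km = sum(p * km for _, _, p, km in trips)
total_passengers = sum(p for _, _, p, _ in trips)
mean_trip_length = passenger_km / total_passengers

print(passenger_km, round(mean_trip_length, 2))  # → 23600 9.44
```

A polycentric form, which shortens trips to the nearest activity center, shows up directly as a lower mean trip length in measures of this kind.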
Procedia PDF Downloads 270
1041 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices
Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto
Abstract:
The paper aims to identify and discuss the impact and results of ex-ante, mid-term and ex-post evaluation initiatives in Brazilian Social Enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used; semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship and (v) green tech fields, as well as three representatives of impact investment in the (i) venture capital, (ii) loan and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied business cases. Some stand out by adopting well-known methodologies like the Global Impact Investing Rating System (GIIRS), but still have a lot to improve in several respects. Most of these enterprises use non-experimental research conducted by their own employees, which some authors in the area do not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the general difficulties of the business. In turn, the investors do not give overall directions for establishing evaluation initiatives in the enterprises they fund; there is a mechanism of trust, and this is usually considered enough to prove the impact to all stakeholders. The work concludes that there is a large gap between what the literature states as best practices for these businesses and what the enterprises really do.
Evaluation initiatives must be included, to some extent, in all enterprises in order to confirm the social impact they claim to realize. The development and adoption of more flexible evaluation mechanisms, which consider the complexity involved in these businesses' routines, is recommended here. The reflections of the research also suggest important implications for the field of Social Enterprises, whose practices are far from what the theory preaches. It highlights the legitimacy risk for enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. Consequently, this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results and impact measurement activities. Likewise, further investigations are recommended that consider the trade-offs between impact and profit. In addition, research on gender, on entrepreneurs' motivation to identify themselves as Social Enterprises, and on the possible unintended consequences of these businesses should also be conducted.
Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem
Procedia PDF Downloads 119
1040 Interlayer-Mechanical Working: Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-based Shape Memory Alloy
Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly
Abstract:
In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot. Self-standing walls were manufactured; however, multiple vertical cracks were observed after deposition of around 15 layers. Microstructural characterization revealed open surfaces of dendrites inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold work the newly deposited alloy. The MHP traverse speed was varied systematically to attain a window of operation where cracking was completely stopped. Microstructural and textural analyses were then carried out to correlate the peening process with the microstructure. MHP helped in many ways. Firstly, a compressive residual stress was induced in each layer, which countered the tensile residual stress evolved from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. Cubic {001} texture promotes easy separation of planes and easy crack propagation.
Thus, reduction of the cubic texture alleviates the chance of cracking.
Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening
Procedia PDF Downloads 72
1039 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study proposes a comprehensive and effective approach to business-to-business (B2B) sales forecasting by integrating advanced machine learning models with a rule-based decision-making framework. The methodology addresses the critical challenge of optimizing sales pipeline performance and improving conversion rates through predictive analytics and actionable insights. The first component involves developing a classification model to predict the likelihood of conversion, aiming to outperform traditional methods such as logistic regression in terms of accuracy, precision, recall, and F1 score. Feature importance analysis highlights key predictive factors, such as client revenue size and sales velocity, providing valuable insights into conversion dynamics. The second component focuses on forecasting sales value using a regression model, designed to achieve superior performance compared to linear regression by minimizing mean absolute error (MAE), mean squared error (MSE), and maximizing R-squared metrics. The regression analysis identifies primary drivers of sales value, further informing data-driven strategies. To bridge the gap between predictive modeling and actionable outcomes, a rule-based decision framework is introduced. This model categorizes leads into high, medium, and low priorities based on thresholds for conversion probability and predicted sales value. By combining classification and regression outputs, this framework enables sales teams to allocate resources effectively, focus on high-value opportunities, and streamline lead management processes. The integrated approach significantly enhances lead prioritization, increases conversion rates, and drives revenue generation, offering a robust solution to the declining pipeline conversion rates faced by many B2B organizations. 
Our findings demonstrate the practical benefits of blending machine learning with decision-making frameworks, providing a scalable, data-driven solution for strategic sales optimization. This study underscores the potential of predictive analytics to transform B2B sales operations, enabling more informed decision-making and improved organizational outcomes in competitive markets.
Keywords: machine learning, XGBoost, regression, decision making framework, system engineering
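The rule-based layer described above reduces, at its core, to thresholding the two model outputs. A minimal sketch in Python follows; the threshold values and field names are purely illustrative, since the study does not publish its cutoffs:

```python
def prioritize_lead(conversion_prob, predicted_value,
                    prob_threshold=0.5, value_threshold=50_000):
    """Assign a priority tier from the two model outputs.

    Thresholds are illustrative placeholders, not values from the study.
    """
    if conversion_prob >= prob_threshold and predicted_value >= value_threshold:
        return "high"
    if conversion_prob >= prob_threshold or predicted_value >= value_threshold:
        return "medium"
    return "low"

# Score a small batch of hypothetical leads: classifier probability "p"
# plus regression-model value estimate "value" per lead.
leads = [
    {"id": "A", "p": 0.82, "value": 120_000},
    {"id": "B", "p": 0.64, "value": 18_000},
    {"id": "C", "p": 0.21, "value": 9_000},
]
tiers = {lead["id"]: prioritize_lead(lead["p"], lead["value"]) for lead in leads}
```

In practice the thresholds would be tuned on a validation set against the sales team's capacity for high-priority follow-up.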
1038 21st Century Business Dynamics: Acting Local and Thinking Global through Extensive Business Reporting Language (XBRL)
Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro
Abstract:
In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements such as Annual Reports, Quarterly Reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding, and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, who generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, late filing of reports, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information – internally and externally – in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where many of its promised benefits are available. This paper looks at the emergence of this revolutionary twenty-first century language of digital reporting. It posits that today, the world is on the brink of an Internet revolution that will redefine the ‘business reporting’ paradigm. The new Internet technology, eXtensible Business Reporting Language (XBRL), is already being deployed and used across the world.
It finds that XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent: it will work on any current or recent operating system and computer, and will interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all act locally and think globally now by adopting XBRL, which is changing the face of worldwide business reporting.
Keywords: XBRL, financial reporting, internet, internal and external reports
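The tagging idea can be illustrated with Python's standard XML tooling. The element, context and unit names below are simplified placeholders for illustration only, not drawn from a real XBRL taxonomy:

```python
import xml.etree.ElementTree as ET

# Build a minimal XBRL-style instance document. "Revenue", "FY2023" and
# "USD" are hypothetical names, standing in for real taxonomy concepts.
root = ET.Element("xbrl")
ET.SubElement(root, "context", id="FY2023")          # reporting period
revenue = ET.SubElement(root, "Revenue",
                        contextRef="FY2023", unitRef="USD")
revenue.text = "1500000"

def extract_fact(document, tag):
    """Pull a single tagged fact out of the instance document.

    Because each fact carries its own tag, a consumer can extract just
    the item it needs instead of parsing a whole report.
    """
    el = document.find(tag)
    return float(el.text) if el is not None else None

value = extract_fact(root, "Revenue")
```

Real XBRL processors resolve tags against a published taxonomy and validate contexts and units; the sketch only shows why tagged facts are individually addressable.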
1037 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
While performing verification analyses under the static and dynamic loads that composite structures used in aviation are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests in accordance with aviation standards, but the test costs are very high. In addition, due to the necessity of producing coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes, which were integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The simulations of the tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were performed on the computer. The analysis model was created in three dimensions for the purpose of modeling the bolt-hole contact surface realistically and obtaining the exact bearing strength value. The finite element modeling was carried out with ANSYS. Since a physical break cannot be made in the analysis studies carried out in the virtual environment, a hypothetical break is realized by reducing the material properties. The material property reduction coefficient was determined as 10%, which is stated in the literature to give the most realistic approach. There are various theories in this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage and resulting force-drop points, the maximum damage load values, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both test and simulation models. In addition, the effects of various parameters, such as pre-stress, use of a bushing, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, were investigated on the bearing strength of the composite structure.
Keywords: puck, finite element, bolted joint, composite
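The property-degradation step of progressive failure analysis can be sketched as a simple loop: whenever a failure criterion is exceeded, the affected stiffness is knocked down by the reduction coefficient. The criterion below is a toy stress-versus-strength check standing in for the Puck criterion, and all numbers are illustrative:

```python
def progressive_failure(stress_steps, strength, modulus, reduction=0.10):
    """Toy progressive-damage loop.

    When the applied stress exceeds the strength, the modulus is knocked
    down to `reduction` times its value. This is a hypothetical
    degradation rule illustrating the mechanism, not the Puck criterion
    used in the study. Returns the modulus history over the load steps.
    """
    history = [modulus]
    for stress in stress_steps:
        if stress > strength:        # failure criterion exceeded
            modulus *= reduction     # degrade the ply stiffness
        history.append(modulus)
    return history

# A load ramp that passes the strength triggers one degradation step.
E_history = progressive_failure([100, 200, 350], strength=300, modulus=50_000)
```

In a real solver this check runs per element and per failure mode, and the load step is re-solved with the degraded stiffness until no new failures occur.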
1036 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate the heat effects using energy equations for entire cells. The model assumes that the system is steady; the inlet reactants are ideal gases; the flow is laminar; and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers, and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 Wm-2. The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 Wm-2, an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
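The optimization loop pairs a gradient-ascent update with a conjugate-direction (Fletcher-Reeves) correction. A minimal stdlib-only sketch follows; a cheap analytic surrogate with a made-up optimum stands in for the full 3-D two-phase fuel cell model that the study evaluates at every design point:

```python
def numerical_grad(f, x, h=1e-6):
    """Central-difference gradient of f at x (list of floats)."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def conjugate_gradient_max(f, x, iters=50, step=0.1):
    """Simplified conjugate-gradient ascent with Fletcher-Reeves beta.

    Each call to f here is cheap; in the study it is a full fuel-cell
    simulation, which is why a low iteration count matters.
    """
    g = numerical_grad(f, x)
    d = g[:]
    for _ in range(iters):
        x = [xi + step * di for xi, di in zip(x, d)]
        g_new = numerical_grad(f, x)
        beta = sum(v * v for v in g_new) / max(sum(v * v for v in g), 1e-12)
        d = [gn + beta * di for gn, di in zip(g_new, d)]  # conjugate direction
        g = g_new
    return x

# Toy "power density" surrogate peaking at channel heights (1.2, 0.8) mm;
# the peak location is hypothetical, not a result from the paper.
f = lambda x: -((x[0] - 1.2) ** 2 + (x[1] - 0.8) ** 2)
opt = conjugate_gradient_max(f, [1.0, 1.0])
```

The fixed step size keeps the sketch short; a production optimizer would use a line search along each conjugate direction.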
1035 Recent Policy Changes in Israeli Early Childhood Frameworks: Hope for the Future
Authors: Yaara Shilo
Abstract:
Early childhood education and care (ECEC) in Israel has undergone extensive reform and now requires daycare centers to meet internationally recognized professional standards. Since 1948, one of the aims of childcare facilities was to enable women’s participation in the workforce. A 1965 law grouped daycare centers for young children with facilities for the elderly and for disabled persons under the same authority. In the 1970s, ECEC leaders sought to change childcare from proprietary to educational facilities. From 1976, deliberations in the Knesset regarding the appropriate attribution of ECEC frameworks resulted in their being moved to various authorities that supported women’s employment: the Ministries of Finance, Industry, and Commerce, as well as the Welfare Department. Prior to 2018, 75% of infants and toddlers in institutional care were in unlicensed and unsupervised settings. Legislative processes accompanied the conceptual change to an eventual appropriate attribution of ECEC frameworks. Position papers over the past two decades resulted in recommendations for standards conforming to OECD regulations. Simultaneously, incidents of child abuse, some resulting in death, riveted public attention to the need for adequate government supervision, accelerating the legislative process. Appropriate care for very young children must center on quality interactions with caregivers, thus requiring adequate staff training. Finally, in 2018 a law was passed stipulating standards for staff training, proper facilities, child-adult ratios, and safety measures. The Ariav commission expanded training to caregivers for ages 0-3. Transfer of the ECEC to the Ministry of Education ensured the establishment of basic training. The groundwork created by new legislation initiated the professional development of EC educators for ages 0-3. This process should raise salaries and bolster the system’s ability to attract quality employees.
In 2022 responsibility for ECEC ages 0-3 was transferred from the Ministry of Finance to the Ministry of Education, shifting emphasis from proprietary care to professional considerations focusing on wellbeing and early childhood education. The recent revolutionary changes in ECEC point to a new age in the care and education of Israel’s youngest citizens. Implementation of international standards, adequate training, and professionalization of the workforce focus on the child’s needs.
Keywords: policy, early childhood, care and education, daycare, development
1034 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Therefore, microfluidic devices make an important scientific contribution, e.g. in the field of synthetic chemistry. Herein, the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving method of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. The development of a droplet-based microfluidic device, which enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run, is shown. By common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen parameters of various reactions; this is exemplarily shown by a Domino Knoevenagel Hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions are investigated.
Due to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
1033 Energy Efficiency Measures in Canada’s Iron and Steel Industry
Authors: A. Talaei, M. Ahiduzzaman, A. Kumar
Abstract:
In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will show a continuous increase till 2030 and, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e. a 25% increase from 2010 levels). Therefore, there is an urgent need to improve the energy intensity and to implement energy efficiency measures in the industry to reduce the GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve the energy intensity and mitigate greenhouse gas emissions from this industry. In order to do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be fired heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help improve efficiency in various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated in both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants.
In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in the BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes. Given that the share of EAF in steel production is expected to increase, the application potential of the technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of costs and GHG mitigation potential.
Keywords: Iron and Steel Sectors, Energy Efficiency Improvement, Blast Furnace Iron-making Process, GHG Mitigation
1032 Human Resource Management Practices and Employee Retention in Public Higher Learning Institutions in the Maldives
Authors: Shaheeb Abdul Azeez, Siong-Choy Chong
Abstract:
Background: Talent retention is increasingly becoming a major challenge for many industries due to the high turnover rate. Public higher learning institutions in the Maldives face a similar situation with the turnover of their employees. This paper identifies whether Human Resource Management (HRM) practices have any impact on employee retention in public higher learning institutions in the Maldives. Purpose: This paper aims to identify the influence of HRM practices on employee retention in public higher learning institutions in the Maldives. A total of 15 variables were used in this study: 11 HRM practices as independent variables (leadership, rewards, salary, employee participation, compensation, training and development, career development, recognition, appraisal system and supervisor support); job satisfaction and motivation as mediating variables; demographic profile as a moderating variable; and employee retention as the dependent variable. Design/Methodology/Approach: A structured self-administered questionnaire was used for data collection. A total of 300 respondents were selected as the study sample, representing academic and administrative staff from public higher learning institutions, using a stratified random sampling method. AMOS was used to test the hypotheses constructed. Findings: The results suggest that there is no direct effect between the independent variables and the dependent variable. The study also concludes that there are no moderating effects of demographic profile between the independent and dependent variables. However, the mediating effects of job satisfaction and motivation in the relationship between HRM practices and employee retention were significant. Salary had a significant influence on job satisfaction, whilst both compensation and recognition had a significant influence on motivation. Job satisfaction and motivation were also found to significantly influence employee retention.
Research Limitations: The study consists of many variables, making the questionnaire time-consuming for respondents to answer. The study focused only on public higher learning institutions in the Maldives because no private sector higher learning institutions participated. Therefore, the researcher was unable to identify the actual situation of the higher learning industry in the Maldives as a whole. Originality/Value: To the best of our knowledge, no study has been conducted using the same framework throughout the world. This study is the initial study conducted in the Maldives in this area and can be used as a baseline for future research. A few studies have, however, been conducted on related subjects throughout the world; some concluded with positive findings, while others reported negative findings, and they used 4 to 7 HRM practices in their study frameworks.
Keywords: human resource management practices, employee retention, motivation, job satisfaction
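The mediation result (HRM practice → job satisfaction/motivation → retention) can be illustrated with a product-of-coefficients sketch. This is a deliberately simplified single-predictor stand-in for the full structural equation model fitted in AMOS, and all data below are hypothetical:

```python
def slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def indirect_effect(practice, mediator, outcome):
    """Product-of-coefficients estimate of the mediated path
    practice -> mediator -> outcome. A full mediation model would also
    control for the practice in the mediator->outcome regression."""
    a = slope(practice, mediator)   # path a: practice -> mediator
    b = slope(mediator, outcome)    # path b: mediator -> outcome
    return a * b

# Hypothetical scores: salary rating, job satisfaction, retention intent
salary       = [1, 2, 3, 4, 5]
satisfaction = [2, 3, 4, 5, 6]     # rises with salary (a = 1)
retention    = [4, 6, 8, 10, 12]   # rises with satisfaction (b = 2)
effect = indirect_effect(salary, satisfaction, retention)
```

With survey data the indirect effect would additionally be tested for significance, e.g. with a bootstrap, rather than just point-estimated.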
1031 Health Communication and the Diabetes Narratives of Key Social Media Influencers in the UK
Authors: Z. Sun
Abstract:
Health communication is essential in promoting healthy lifestyles, managing disease conditions, and eventually reducing health disparities. The key elements of successful health communication always include the development of communication strategies to engage people in thinking about their health, inform them about healthy choices, persuade them to adopt safe and healthy behaviours, and eventually achieve public health objectives. The use of ‘narrative’ is recognised as a kind of health communication strategy to enhance personal and public health due to its potential persuasive effect in motivating and supporting individuals to change their beliefs and behaviours, by inviting them into a narrative world, breaking down their cognitive and emotional resistance and enhancing their acceptance of the ideas portrayed in narratives. Meanwhile, the popularity of social media has provided a novel means of communication for healthcare stakeholders, and a special group of active social media users (influencers) have started playing a pivotal role in providing health ‘solutions’. Such individuals are often referred to as ‘influencers’ because of their central position in the online communication system and the persuasive effect their actions may have on audiences. They may have established a positive rapport with their audience and earned trust and credibility in a specific area, and thus their audience considers the information they deliver to be authentic and influential. To the best of our knowledge, to date, there is no published research that examines the effect of diabetes narratives presented by social media influencers and their impact on health-related outcomes. The primary aim of this study is to investigate the diabetes narratives presented by social media influencers in the UK because of the new dimension they bring to health communication and the potential impact they may have on audiences' health outcomes.
This study is situated within the interpretivist and narrative paradigms. A mixed methodology combining both quantitative and qualitative approaches has been adopted. Qualitative data have been gathered to provide a better understanding of influencers’ personal experiences and how they construct meanings and make sense of their world, while quantitative data have been accumulated to identify key social media influencers in the UK and measure the impact of diabetes narratives on audiences. Twitter has been chosen as the social media platform on which to initially identify key influencers. The two groups of participants are the top 10 key social media influencers in the UK and 100 audience members of each influencer, meaning a total of 1,000 audience members have been invited. This paper discusses, first of all, the background of the research in the context of health communication; secondly, the necessity and contribution of this research; then, the major research questions being explored; and finally, the methods to be used.
Keywords: diabetes, health communication, narratives, social media influencers
1030 Conservation Agriculture under Mediterranean Climate: Effects on below and Above-Ground Processes during Wheat Cultivation
Authors: Vasiliki Kolake, Christos Kavalaris, Sofia Megoudi, Maria Maxouri, Panagiotis A. Karas, Aris Kyparissis, Efi Levizou
Abstract:
Conservation agriculture (CA) is a production system approach that can tackle the challenges of climate change, mainly through facilitating carbon storage in the soil and increasing crop resilience. This is extremely important for the vulnerable Mediterranean agroecosystems, which already face adverse environmental conditions. The agronomic practices used in CA, i.e. permanent soil cover and no-tillage, result in reduced soil erosion and increased soil organic matter, preservation of water, and improvement of the quality and fertility of the soil in the long term. Thus, the functional characteristics and processes of the soil are considerably affected by the implementation of CA. The aim of the present work was to assess the effects of CA on soil nitrification potential and mycorrhizal colonization in relation to above-ground production in a wheat field. Two adjacent but independent field sites of 1.5 ha each were used (Thessaly plain, Central Greece), comprising the no-till and conventional tillage treatments. The no-till site was covered by residues of the previous crop (cotton). Potential nitrification and the nitrate and ammonium content of the soil were measured at two different soil depths (3 and 15 cm) at 20-day intervals throughout the growth period. Additionally, the leaf area index (LAI) was monitored over the same time-course. Mycorrhizal colonization was measured at the commencement and end of the experiment. At the final harvest, total yield and plant biomass were also recorded. The results indicate that wheat yield was considerably favored by CA practices, exhibiting a 42% increase compared to the conventional tillage treatment. The superior performance of the CA crop was also reflected in the above-ground plant biomass, where a 26% increase was recorded. LAI, which is considered a reliable growth index, did not show statistically significant differences between treatments throughout the growth period.
On the contrary, significant differences were recorded in endomycorrhizal colonization one day before the final harvest, with CA plants exhibiting 20% colonization, while the conventional tillage plants hardly reached 1%. The ongoing analyses of potential nitrification measurements, as well as nitrate and ammonium determination, will shed light on the effects of CA on key processes in the soil. These results will integrate the assessment of CA impact on certain below and above-ground processes during wheat cultivation under the Mediterranean climate.
Keywords: conservation agriculture, LAI, mycorrhizal colonization, potential nitrification, wheat, yield
1029 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of skew angles for detection, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is. Increasing the accuracy consequently consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed. So a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm presents a solution to the trade-off between memory space, running time and accuracy. Our algorithm starts with a first angle estimate accurate to zero decimal places using the standard Hough Transform algorithm, achieving minimal running time and space but limited accuracy.
To increase accuracy, suppose the angle estimated using the basic Hough algorithm is x degrees; we then rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented using MATLAB. The memory space estimates and processing times are also tabulated, under the assumption of skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithms, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
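The coarse-to-fine refinement described above can be sketched independently of the Hough accumulator itself: scan whole degrees first, then rescan around the winner with ten-times-finer steps. The `score` function below is a toy stand-in for reading the Hough accumulator maximum at a given angle, and the 12.3° skew is a made-up example:

```python
def refine_angle(score, lo=0.0, hi=45.0):
    """Coarse-to-fine angle search mirroring the paper's scheme:
    a whole-degree pass, then a 0.1-degree pass around the winner.
    `score` is any function peaking at the true skew angle."""
    best, step = lo, 1.0
    while step >= 0.1:
        if step == 1.0:                       # coarse pass: full range
            a, b = lo, hi
        else:                                 # fine pass: around the winner
            a = max(lo, best - 10 * step)
            b = min(hi, best + 10 * step)
        n = int(round((b - a) / step))
        candidates = [a + i * step for i in range(n + 1)]
        best = max(candidates, key=score)
        step /= 10.0
    return round(best, 1)

# Toy accumulator sharply peaked at a 12.3 degree skew.
true_angle = 12.3
score = lambda a: -abs(a - true_angle)
estimate = refine_angle(score)
```

The coarse pass evaluates 46 angles and the fine pass 21, instead of the 451 evaluations a single 0.1° sweep of the whole range would need, which is the memory/time saving the abstract claims.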
1028 Practice of Social Innovation in School Education: A Study of Third Sector Organisations in India
Authors: Prakash Chittoor
Abstract:
In the recent past, it has been realised, especially in the third sector, that employing social innovation is crucial for achieving viable and long-lasting social transformation. In this context, education is one among many sectors that have opened themselves up to such a move, where employing social innovation emerges as key to reaching out to the excluded sections who often fail to get support from either policy or market interventions. In fact, that education is a crucial factor for social development is well understood at both the academic and policy levels. In order to move forward and achieve better results, interventions from multiple sectors may be required, as education cultivates the capabilities and skills of the deprived, ensuring both market and social participation in the long run. Despite the state’s intervention, it is found that millions of children are still out of school due to a lack of political will, lapses in policy implementation and the neoliberal intervention of marketization. As a result, universalisation of elementary education has become an elusive goal for poor and marginalised sections, while the state comes under constant pressure from the corporate sector to withdraw from the education sector, which undermines confidence in the provision of quality education. At this juncture, the role that third sector organisations play is quite remarkable. In particular, they have evolved as key players in the education sector, reaching out to the poor and marginalised in far-flung areas. These organisations work in resource-constrained environments, yet, in order to achieve larger social impact, they adopt various social innovations from time to time to reach out to the unreached. Their attempts are not limited to approaching unreached children; they also aim to retain them in the schooling system for a long time in order to reap the benefits for their families and communities.
There is a need to highlight the various innovative ways adopted and practiced by third sector organisations in India to achieve the elusive goal of universal access to quality primary education. With this background, the paper primarily attempts to present an in-depth understanding of the innovative practices employed by third sector organisations like Isha Vidya through its government school adoption programme in India, through which it engages with government and builds capabilities among government teachers to promote state-run schooling with quality and better infrastructure. Further, this paper assesses whether such innovative attempts have succeeded in achieving universal quality education in the areas where they operate, and draws implications for state policy.
Keywords: school education, third sector organisations, social innovation, market domination
Procedia PDF Downloads 262
1027 Analysis of the Content of Sugars, Vitamin C, Preservatives, Synthetic Dyes, Sweeteners, Sodium and Potassium and Microbiological Purity in Selected Products Made From Fruit and Vegetables in Small Regional Factories and in Large Food Corporations
Authors: Katarzyna Miśkiewicz, Magdalena Lasoń-Rydel, Małgorzata Krępska, Katarzyna Sieczyńska, Iwona Masłowska-Lipowicz, Katarzyna Ławińska
Abstract:
The aim of the study was to analyse a selection of 12 pasteurised fruit and vegetable products, such as fruit juices, fruit drinks, jams and marmalades, produced by small regional factories as well as by large food corporations. The research was carried out as part of the project "Innovative system of healthy and regional food distribution", funded by the Ministry of Education and Science (Poland), which aims to create an economically and organisationally strong agri-food industry in Poland through effective cooperation between scientific and socio-economic actors. The main activities of the project include support for the creation of new distribution channels for regional food products and their easy access to a wide group of potential customers, while maintaining the highest quality standards. One of the key areas of the project is the food quality analyses conducted to demonstrate the competitive advantage of regional products. Presented here are studies on the content of sugars, vitamin C, preservatives, synthetic colours, sweeteners, sodium and potassium, as well as on the microbiological purity of selected fruit and vegetable products. The composition of such products varies greatly and depends both on the type of raw material and on the way it is processed. Of the samples tested, fruit drinks contained the least sugars, while the jams and marmalades made by large producers and bought in large chain stores contained the most. However, the low sugar content of some fruit drinks is due to the presence of the sweetener sucralose in their composition. The vitamin C content of the samples varied, being higher in products to which it was added during production. All products made in small local factories were free of food additives such as preservatives, sweeteners and synthetic colours, indicating their superiority over products made by large producers.
Products made in small local factories were also characterised by a relatively high potassium content. The microbiological purity of the commercial products was confirmed: no Salmonella spp. were detected, and the numbers of mesophilic bacteria, moulds, yeasts and β-glucuronidase-positive E. coli were below the limit of quantification.
Keywords: fruit and vegetable products, sugars, food additives, HPLC, ICP-OES
Procedia PDF Downloads 94
1026 Retrospective Analysis Demonstrates No Difference in Percutaneous Native Renal Biopsy Adequacy Between Nephrologists and Radiologists in University Hospital Crosshouse
Authors: Nicole Harley, Mahmoud Eid, Abdurahman Tarmal, Vishal Dey
Abstract:
Histological sampling plays an integral role in the diagnosis of renal disease. Percutaneous native renal biopsy is typically performed under ultrasound guidance, with the service usually provided by nephrologists; in some centres, radiologists also perform renal biopsies. Previous comparative studies have demonstrated non-inferiority between the outcomes of percutaneous native renal biopsies performed by nephrologists and by radiologists. We sought to compare biopsy adequacy between nephrologists and radiologists at University Hospital Crosshouse. The online system SERPR (Scottish Electronic Renal Patient Record) contains information on patients who have undergone renal biopsy. An online search was performed to acquire a list of all patients who underwent renal biopsy between 2013 and 2020 at University Hospital Crosshouse; 355 native renal biopsies were performed in total across this period. A retrospective analysis was performed on these cases, with records and reports assessed for the total number of glomeruli obtained per biopsy, whether the number of glomeruli was adequate for diagnosis as per an internationally agreed standard, and whether a histological diagnosis was achieved. Nephrologists performed 43.9% of the native renal biopsies (n=156) and radiologists performed 56.1% (n=199). The mean number of glomeruli obtained by nephrologists was 17.16 +/- 10.31, and by radiologists 18.38 +/- 10.55; a t-test demonstrated no statistically significant difference between the specialties (p-value 0.277). Native renal biopsies must yield at least 8 glomeruli to be diagnostic as per internationally agreed criteria. Nephrologists met these criteria in 88.5% of native renal biopsies (n=138) and radiologists in 89.5% (n=178).
T-test and chi-squared analyses demonstrated no statistically significant difference between the specialties (p-values 0.663 and 0.922, respectively). Biopsies performed by nephrologists yielded diagnostic tissue in 91.0% of samplings (n=142); biopsies performed by radiologists yielded diagnostic tissue in 92.4% (n=184). T-test and chi-squared analyses again demonstrated no statistically significant difference between the specialties (p-values 0.625 and 0.889, respectively). This project demonstrates that at University Hospital Crosshouse there is no statistical difference between radiologists and nephrologists in terms of glomeruli acquisition or of samples achieving a histological diagnosis. Given the non-inferiority between specialties demonstrated by previous studies and by this project, this evidence could support the restructuring of services to allow more renal biopsies to be performed by renal services, freeing radiology department resources for reallocation.
Keywords: biopsy, medical imaging, nephrology, radiology
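The adequacy comparison can be reproduced directly from the reported counts. A minimal sketch using only the Python standard library and the figures quoted in the abstract (138/156 vs. 178/199 adequate biopsies); note this is a plain Pearson chi-squared without continuity correction, so the p-value printed here is illustrative and may differ from the abstract's reported value, which may come from a different test variant:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    and p-value for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    # For chi-squared with 1 df, the survival function reduces to erfc
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Adequate vs. inadequate biopsies, figures from the abstract
neph_adequate, neph_total = 138, 156
rad_adequate, rad_total = 178, 199
chi2, p = chi2_2x2(neph_adequate, neph_total - neph_adequate,
                   rad_adequate, rad_total - rad_adequate)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p well above 0.05: no significant difference
```

Either way the conclusion matches the abstract: the difference in adequacy rates between the two specialties is nowhere near statistical significance.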
Procedia PDF Downloads 82
1025 Cosmic Radiation Hazards and Protective Strategies in Space Exploration
Authors: Mehrnaz Mostafavi, Alireza Azani, Mahtab Shabani, Fatemeh Ghafari
Abstract:
While filled with promise and wonder, space exploration also presents significant challenges, one of the foremost being the threat of cosmic radiation to astronaut health. Recent advancements in assessing these risks and developing protective strategies have shed new light on this issue. Cosmic radiation encompasses a variety of high-energy particles originating from sources like solar particle events, galactic cosmic rays, and cosmic rays from beyond the solar system. These particles, composed of protons, electrons, and heavy ions, pose a substantial threat to human health in space due to the lack of Earth's protective atmosphere and magnetic field. Researchers have made significant progress in assessing the risks associated with cosmic radiation exposure. By employing advanced dosimetry techniques and conducting biological studies, they have gained insights into how cosmic radiation affects astronauts' health, including increasing the risk of cancer and radiation sickness. This research has led to personalized risk assessment methods tailored to individual astronaut profiles. Distinctive protection strategies have been proposed to combat the dangers of cosmic radiation. These include developing spacecraft shielding materials and designs to enhance radiation protection. Additionally, researchers are exploring pharmacological interventions such as radioprotective drugs and antioxidant therapies to mitigate the biological effects of radiation exposure and preserve astronaut well-being. The findings from recent research have significant implications for the future of space exploration. By advancing our understanding of cosmic radiation risks and developing effective protection strategies, we pave the way for safer and more sustainable human missions beyond Earth's orbit. This is especially crucial for long-duration missions to destinations like Mars, where astronauts will face prolonged exposure to cosmic radiation. 
In conclusion, recent research has marked a milestone in addressing the challenges posed by cosmic radiation in space exploration. By delving into the complexities of cosmic radiation exposure and developing innovative protection strategies, scientists are ensuring the health and resilience of astronauts as they venture into the vast expanse of the cosmos. Continued research and collaboration in this area are essential for overcoming the cosmic radiation challenge and enabling humanity to embark on new frontiers of exploration and discovery in space.
Keywords: space exploration, cosmic radiation, astronaut health, risk assessment, protective strategies
Procedia PDF Downloads 80
1024 A Post-Colonial Reading of Maria Edgeworth's Anglo-Irish Novels: Castle Rackrent and the Absentee
Authors: Al. Harshan, Hazamah Ali Mahdi
Abstract:
Big House literature embodies Irish history: the house acquires a special dimension of moral and social significance in relation to its owners. The Big House is a metaphor for the decline of the Protestant Ascendancy, which ruled in a Catholic country and oppressed a native people. In the tradition of Big House fiction, Maria Edgeworth's Castle Rackrent and The Absentee explore the effect of the Anglo-Irish Protestant Ascendancy as it governed and misgoverned Ireland. Edgeworth treats the Big House as a symbol of both personal and historical themes. This paper provides a reading of Castle Rackrent and The Absentee from a post-colonial perspective, and maintains that Edgeworth's novels contain elements of a radical critique of the colonialist enterprise. In this postcolonial reading of Edgeworth's novels, one that goes beyond treating them, like the works of Sir Walter Scott, as merely regional, evidence has been found of Edgeworth's colonial ideology. The significance of Castle Rackrent lies mainly in the fact that it is the first English novel to speak in the voice of the colonized Irish. More importantly, the irony and comedy of the novel come from its Irish narrator (Thady Quirk) and its Irish setting. Edgeworth reveals the geographical 'other' to her English reader by placing her colonized Irish narrator and his son, Jason Quirk, in a position of inferiority, emphasising the gap between Englishness and Irishness. This satirical aspect is also political: it works to create and protect the superiority of the domestic English reader over the Irish subject. In other words, the novel's implication in the colonial system, and in its structure of dominance and subordination, is overlooked because of its comic dimension. The matrimonial plot in The Absentee functions as an imperial plot, constructing Ireland as a complementary but ever unequal partner in the family of Great Britain.
This imperial marriage works hegemonically to produce the domestic stability considered so crucial to national and colonial stability. Moreover, to complete her imperial plot, Edgeworth stages the reconciliation of England and Ireland in the marriage of the Anglo-Irish hero (Colambre) with the Irish heroine (Grace Nugent), and the resulting happy bourgeois family becomes the model for colonizer-colonized relationships. Edgeworth must thereby establish modes of legitimate behaviour for women and men. The Absentee shows more purposefully how familial reorganisation depends on the restitution of masculine authority and advantage, particularly for the Irish community.
Keywords: Maria Edgeworth, post-colonial, reading, Irish
Procedia PDF Downloads 544
1023 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey
Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur
Abstract:
Interest in lead (Pb) has increased considerably in recent years owing to knowledge about the potential toxic effects of this element. Exposure to heavy metals above acceptable limits affects human health; indeed, Pb accumulates through food chains up to toxic concentrations and can therefore pose a potential threat to human health. In the present study, a sensitive and reliable method for the determination of Pb in red pepper was developed. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, limit of detection, limit of quantification, recovery and trueness) were demonstrated; recovery values close to 100% showed adequate precision and accuracy for the analysis. Pb was determined at various concentrations in all of the tested samples. A Perkin-Elmer ELAN DRC-e ICP-MS system was used for the detection of Pb. Organic red pepper was used as the matrix for all method validation studies. The certified reference material, Fapas chili powder, was digested and analysed together with the different sample batches, and three replicates of each sample were digested and analysed. The exposure levels of the elements were discussed in light of the scientific opinions of the European Food Safety Authority (EFSA), the European Union's (EU) risk assessment source for food safety. The Target Hazard Quotient (THQ), described by the United States Environmental Protection Agency (USEPA), was used to calculate potential health risks associated with long-term exposure to chemical pollutants. The THQ calculation incorporates the intake of elements, exposure frequency and duration, body weight and the oral reference dose (RfD).
A THQ below one means the exposed population is assumed to be safe, while 1 < THQ < 5 places the exposed population in a level-of-concern interval. In this study, the THQ of Pb was found to be < 1. The THQ values were below one for all tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number FBA-2017-2494.
Keywords: lead analyses, red pepper, risk assessment, daily exposure
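The screening logic can be sketched numerically. The function below follows the standard USEPA THQ structure (chronic intake divided by the reference dose, with a 10^-3 factor converting the ingestion rate from g/day to kg/day); all numeric inputs are hypothetical illustrations rather than the study's measured values, and the Pb RfD shown is a value commonly used in the literature, not one taken from this work:

```python
def thq(c_mg_per_kg, fir_g_per_day, ef_days_per_year, ed_years,
        rfd_mg_per_kg_day, bw_kg, at_days):
    """Target Hazard Quotient per the USEPA non-carcinogenic risk model.
    The 1e-3 factor converts the food ingestion rate from g/day to kg/day."""
    intake = ef_days_per_year * ed_years * fir_g_per_day * c_mg_per_kg * 1e-3
    return intake / (rfd_mg_per_kg_day * bw_kg * at_days)

# Illustrative values only (not the study's measurements):
value = thq(c_mg_per_kg=0.05,          # hypothetical Pb concentration in red pepper
            fir_g_per_day=5,           # hypothetical daily red pepper intake
            ef_days_per_year=365,      # exposure frequency
            ed_years=70,               # exposure duration
            rfd_mg_per_kg_day=0.0035,  # Pb RfD commonly used in the literature
            bw_kg=70,                  # adult body weight
            at_days=365 * 70)          # averaging time for non-carcinogens
print(f"THQ = {value:.4f}")            # a value < 1 indicates no expected health risk
```

With these placeholder inputs the quotient comes out well below one, matching the abstract's conclusion for all tested samples.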
Procedia PDF Downloads 167
1022 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for that web application. The quality assurance team must execute many manual user interface test cases during development to confirm that there are no regression bugs. The team automated 346 test cases, but the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time so that high-quality products could be released quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment, so a scripting language was adopted. The scalability mechanism is implemented mainly with AWS serverless technology, namely the Elastic Container Service. Scalability here means the ability to automatically provision computers for test automation and to increase or decrease the number of computers running the tests. This scalable mechanism lets test cases run in parallel, dramatically decreasing test execution time. Introducing scalable test automation does more than reduce execution time: because test cases can be executed at the same time, challenging bugs such as race conditions may also be detected.
If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. In web applications, however, API and unit testing cannot as a practical matter cover 100% of functional testing, since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
Keywords: aws, elastic container service, scalability, serverless, ui automation test
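The core idea of the scalable execution, sharding test cases across workers that run them concurrently, can be sketched without the AWS-specific pieces. In this hypothetical sketch a local thread pool stands in for ECS containers and a short sleep stands in for real Selenium browser interaction; the function and test-case names are invented for illustration:

```python
import concurrent.futures
import time

# Hypothetical test-case identifiers standing in for the automated UI cases
test_cases = [f"test_case_{i:03d}" for i in range(1, 21)]

def run_ui_test(case_id):
    """Placeholder for launching one Selenium UI test; here we only simulate work."""
    time.sleep(0.01)          # stands in for real browser interaction
    return case_id, "passed"

def run_in_parallel(cases, workers=4):
    """Shard test cases across workers, as ECS tasks would shard them across
    containers; each worker drains its share of the queue independently."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_ui_test, cases))

results = run_in_parallel(test_cases)
print(f"{len(results)} tests finished, "
      f"{sum(v == 'passed' for v in results.values())} passed")
```

Wall-clock time shrinks roughly in proportion to the worker count, which is the same lever the study pulls by scaling container counts up and down, and concurrent execution is exactly what surfaces timing-sensitive bugs such as race conditions.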
Procedia PDF Downloads 107
1021 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose
Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng
Abstract:
Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme rainstorms brought by typhoons have become more and more frequent since 2000. When an extreme rainstorm arrives it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government regards this as an urgent issue. In the past, urban land use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased and most people now live in urban areas. Urban areas are therefore highly vulnerable yet cannot cope with surface runoff and flooding. However, building detention ponds through hydraulic engineering alone is not feasible in urban areas: land expropriation makes detention pond construction prohibitively expensive there, and the government cannot afford it. The flooding management strategy in urban areas should therefore use an existing resource, public facilities land. Flood-detention performance can be achieved by endowing public facilities land with a detention function; as multi-use land, it also demonstrates the integration of land use planning with water agency concerns. To this end, this research uses a literature review to generalize the factors governing the multi-use of public facilities land as flood-detention space. The factors fall into two categories: environmental factors and conditions of the public facilities. There are three environmental factors: terrain elevation, inundation potential and distance from the drainage system. The conditions of the public facilities comprise six factors, including area, building rate and the maximum available ratio. Each factor is weighted according to its characteristics for a land use suitability analysis.
This research selects combination rules by logical combination, after which sites are classified into three suitability levels. The three suitability levels are then input into a physiographic inundation model to simulate and evaluate flood detention for each. Through this systematic research process, the study responds to an urgent urban issue and establishes a model of multi-use public facilities land as flood-detention space. The results indicate which combination of suitability levels is most efficacious. Moreover, the model not only takes the urban planner's perspective but also incorporates the water agency's point of view. The findings may serve as a basis for land use indicators and as decision-making references for the government agencies concerned.
Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model
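A weighted overlay of the kind described above can be sketched as follows. The weights, normalised factor values and level thresholds below are hypothetical placeholders chosen purely for illustration; the study derives its own weights per factor and its own combination rules:

```python
# Hypothetical factor weights (must sum to 1); the study assigns its own.
WEIGHTS = {
    "terrain_elevation": 0.20,     # environmental factors
    "inundation_potential": 0.25,
    "drainage_distance": 0.15,
    "area": 0.15,                  # conditions of the public facility
    "building_rate": 0.15,
    "max_available_ratio": 0.10,
}

def suitability_score(site):
    """Weighted linear overlay; each factor value is pre-normalised to [0, 1],
    with 1 meaning most favourable for flood detention."""
    return sum(WEIGHTS[f] * site[f] for f in WEIGHTS)

def suitability_level(score):
    """Classify into the three suitability levels fed to the inundation model.
    The 0.7 / 0.4 cut-offs are illustrative, not the study's thresholds."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# A hypothetical public facility site (e.g. a park) with normalised factor values
park = {"terrain_elevation": 0.9, "inundation_potential": 0.8,
        "drainage_distance": 0.7, "area": 0.6,
        "building_rate": 0.9, "max_available_ratio": 0.5}
s = suitability_score(park)
print(f"score = {s:.2f}, level = {suitability_level(s)}")
```

Sites classified this way would then be passed, level by level, into the physiographic inundation model for the detention-performance simulation the abstract describes.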
Procedia PDF Downloads 358
1020 Cricket Injury Surveillance by Mobile Application Technology on Smartphones
Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders
Abstract:
The demands on cricketers are increasing, with more matches being played in a shorter period and at greater intensity. A ten-year report on injury incidence for Australian elite cricketers across the 2000–2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009–10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player hours, reported [2]. Traditionally, injury surveillance has relied on paper-based forms or complex computer software [3,4], which makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for Android, the world's most widely used smartphone platform. It uses the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment). C++ was used as the programming language with the Qt framework, whose cross-platform abilities will allow the app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphical user interface) were developed using Justinmind Prototyper Pro Edition (Ver. 6.1.0). CricPredict enables injury and training status to be recorded conveniently and immediately. When an injury is reported, automated follow-up questions cover the site, nature and mechanism of injury, initial treatment, referral and the action taken after injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can be recorded by either the player or the coach as the number of balls bowled or faced in a day.
This is helpful in formulating injury rates and time lost to injury. All data are stored on a secure, password-protected server. Outcomes and Significance: CricPredict offers a simple, user-friendly tool for the coaching or medical staff associated with teams to predict, record and report injuries. The system will help teams capture injury data with ease, allowing a better understanding of the injuries associated with cricket and potentially optimizing the performance of cricketers.
Keywords: injury, cricket, surveillance, smartphones, mobile
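The incidence figures cited in the abstract can be computed directly from exposure data of the kind such an app aggregates. A minimal sketch; the season totals below are hypothetical and merely chosen to land near the junior rate quoted above:

```python
def incidence_per_100_hours(injuries, player_hours):
    """Injury incidence expressed per 100 player hours of exposure,
    the standard unit used in cricket injury surveillance reports."""
    return 100 * injuries / player_hours

# Hypothetical season totals for a junior squad, as an app like
# CricPredict might aggregate them from match and training logs
injuries, hours = 12, 414
rate = incidence_per_100_hours(injuries, hours)
print(f"{rate:.1f} injuries per 100 player hours")
```

Combining this exposure denominator with the recorded workload (balls bowled or faced) is what lets the surveillance system relate injury rates to training load over a season.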
Procedia PDF Downloads 459