Search results for: wide dynamic range

1386 Working Capital Management Practices in Small Businesses in Victoria

Authors: Ranjith Ihalanayake, Lalith Seelanatha, John Breen

Abstract:

In this study, we explored the current working capital management practices applied in small businesses in Victoria, filling an existing theoretical and empirical gap in the literature in general and in Australia in particular. Amidst the current competitive and dynamic global environment, avoiding short-term insolvency is critical to the long-run survival of small businesses. A firm’s short-term solvency depends on the availability of sufficient working capital to fund day-to-day operational activities. Therefore, given small businesses’ reliance on short-term funding, efficient management of working capital is recognized as crucial to the prosperity and survival of such firms. Against this background, this research attempted to understand the current working capital management strategies and practices used by small-scale businesses. To this end, we conducted an internet survey among 220 small businesses operating in Victoria, Australia. The survey results suggest that the majority of respondents are owner-managers (73%) and male (68%). Most respondents hold a degree (46%), and about half are more than 50 years old. Most respondents (64%) have more than ten years of business management experience, and similarly, the majority (63%) had prior experience in the area of their current business. The respondents’ business types are private limited company (41%), sole proprietorship (37%), and partnership (15%). In addition, the majority of the firms are service companies (63%), followed by retail companies (25%) and manufacturing firms (17%). Company size varies: 32% have annual sales of $100,000 or under, while 22% have annual revenue of more than $1,000,000. With regard to total assets, the largest group of respondents (43%) have total assets of $100,000 or less, while 20% have total assets of more than $1,000,000. With regard to working capital management practices (WCMPs), the results indicate that almost 70% of respondents are themselves responsible for managing their business’s working capital. The survey shows that the majority of respondents (65.5%) use their business experience to identify the level of investment in working capital, compared to 22% who seek advice from professionals; the remaining 10% follow industry practice to identify the level of working capital. The survey also shows that more than half of the respondents maintain a sound liquidity position for their business by keeping accounts payable lower than accounts receivable. This study finds that the majority of small businesses in the western area of Victoria have a WCM policy, but only about 8% of them have a formal policy; most (52.7%) have an informal policy, while 39.5% have no policy. Of those who have a policy, 44% described their working capital management policy as a compromise policy, 35% described it as a conservative policy, and only 6% apply an aggressive policy. Overall, the results indicate that small businesses pay little attention to the management of working capital despite its significance for the successful operation of the business. Such an approach may be workable during favourable economic times; however, during relatively turbulent economic conditions, it could lead to greater financial difficulties, i.e., short-term financial insolvency.

Keywords: small business, working capital management, Australia, sufficient, financial insolvency

Procedia PDF Downloads 347
1385 Epidemiological Analysis of the Patients Supplied with Foot Orthoses in Ortho-Prosthetic Center of Kosovo

Authors: Ardiana Murtezani, Ilirijana Dallku, Teuta Osmani Vllasolli, Sabit Sllamniku

Abstract:

Background: The use of foot orthoses is indicated whenever there are alterations of the optimal biomechanical position of the foot. Orthoses are effective and suitable for the majority of patients with overload-related pain linked to biomechanical disorders. Aim: To assess the frequency of patients requiring foot orthoses, the types of orthoses prescribed, and the underlying conditions leading to their use. Material and Methods: Our study included 128 patients with various foot pathologies treated at the outpatient department of the Ortho-Prosthetic Center of Kosovo (OPCK) in Prishtina. A prospective, descriptive clinical method was used. The functional status of the patients was examined and the following parameters were recorded: range of motion of the affected joints of the lower extremities, manual muscle strength testing below the knee and of the foot of the affected extremity, circumference measurements of the lower extremities, length measurements of the lower extremities, foot length, foot width and shoe size. The following instruments were used to complete the measurements: plantogram, pedogram, meter and cork shoe lift appliances. Results: The majority of subjects in this study were male (60.2% vs. 39.8%), and the dominant age group was 0-9 years (47.7%, 61 subjects). The most frequent causes of foot disorders were congenital disease (60.1%), trauma (13.3%), sequelae of rheumatologic disease (12.5%) and neurologic dysfunction (11.7%), while infectious cases were the least frequent (1.6%). Congenital anomalies were the most frequent, and within this group the majority of cases suffered from pes planovalgus (37.5%), equinovarus (15.6%) and limb-length discrepancies (6.3%); traumatic amputations (2.3%) and arthritis (0.8%) were also recorded. Among the neurologic diseases, cerebral palsy accounted for 3.1%, peroneal nerve palsy for 2.3% and hemiparesis for 1.6%, while osteomyelitis sequelae accounted for the infectious cases (1.6%). Conclusion: Based on our study results, we concluded that the use of foot orthoses for patients suffering from rheumatoid arthritis and nonspecific arthropathy was an effective treatment choice, decreasing pain, limiting deformities and improving quality of life.

Keywords: orthoses, epidemiological analysis, rheumatoid arthritis, rehabilitation

Procedia PDF Downloads 223
1384 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, with low-density villa-type development. It was planned and developed in 1972 with large plots (600-875 m²), wide crossing roads and a balanced environment. Recently, the area has been transformed into a more compact, high-density urban form of mixed-use integrated development with more intensive use of land, including multi-storey apartments. The most important socio-economic process in the neighborhood has been the commercialization and deinitialization of the area, in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad, and data on the built and social environment of the neighborhood were collected through observations, interviews and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of a compact, high-density neighborhood with mixed land uses and their effect on the social wellbeing of residents, all in the context of sustainable development. The research problem focuses on the challenges of the transformation associated with the compact neighborhood, which has created multiple urban problems, e.g., stress on essential services (water supply, electricity and drainage), street congestion and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of sustainable neighborhood planning prepared by UN-Habitat. The study found that the neighborhood has experienced the changes typical of inner-city residential areas and that the process of change originated from external forces related to the declining economic situation of the whole country. It is evident that non-residential uses have spread in an uncontrolled, unregulated and haphazard manner, damaging the residential environment and creating deficiencies in infrastructure. The quality of urban life, and in particular the level of privacy, was reduced, and the neighborhood has gradually turned into a central business district that provides services to the whole of Khartoum town. The change of house type may be attributed to a demand-led housing market and the absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², which is lower than the recommendations, and limited block land-use specialization exceeds 10% of the blocks. Most inhabitants have a high income, so there is no social mix in the neighborhood. The study recommends revising the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote the neighborhood's sustainable development.

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 123
1383 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. Such a phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with the particles of the bed inventory. The aim of this work is to get a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled to particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization with high resolution. A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 frames per second (fps). A comprehensive mathematical model has been developed in order to validate the experimental results. Beechwood and polypropylene particles have shown very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversion, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, resulting in a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
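
As a hedged illustration of the frame-by-frame analysis described above, the sketch below shows how a segmented particle's vertical trajectory and bed residence time could be extracted from a stack of X-ray absorption frames; the threshold, the assumption that the particle is the brightest feature, and all names are illustrative, not the authors' actual processing chain.

```python
import numpy as np
from scipy import ndimage

def track_particle(frames, threshold, fps=72.0, bed_surface_row=None):
    """Hypothetical sketch: follow one dense feedstock particle through a stack
    of X-ray frames (shape: n_frames x height x width) and estimate how long
    it stays below the bed surface."""
    times, rows = [], []
    for i, frame in enumerate(frames):
        mask = frame > threshold              # assumed: particle is brighter than the bed
        if not mask.any():
            continue                          # particle not visible in this frame
        r, _ = ndimage.center_of_mass(mask)   # centroid (row, column) of the segmented blob
        times.append(i / fps)
        rows.append(r)
    times, rows = np.array(times), np.array(rows)
    residence_s = None
    if bed_surface_row is not None:
        in_bed = rows > bed_surface_row       # image rows increase downwards
        residence_s = in_bed.sum() / fps      # total time spent below the bed surface
    return times, rows, residence_s
```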

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 161
1382 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decision of the Emergency Medical Service (EMS) regarding the assignment of medical emergency requests to Emergency Departments (ED). In the literature, this problem is also known as “hospital selection” and concerns the definition of policies for the selection of the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature concerning DSSs to support EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies are mainly focused on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request; therefore, all ED-related issues are excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes transport time and releases the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to the EDs, considering information relating to the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs in hospitals. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research consisted of the development of a general simulation architecture, its implementation in the AnyLogic software and its validation on a realistic dataset. The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the ED until the beginning of the clinical evaluation by the doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP determined with a constantly updated Winters model. Findings reveal that adopting the minimization of TTP as a hospital selection policy brings several benefits: service throughput times in the ED are significantly reduced with a minimal increase in travel time. Furthermore, an immediate view of the saturation state of the ED is produced, and the case-mix present in the ED structures (i.e., the different triage codes) is taken into account, as different severity codes correspond to different service throughput times. Moreover, the predictive approach is more reliable than the retrospective one in terms of TTP estimation, but it is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
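
To make the dynamic, TTP-minimising selection rule concrete, here is a minimal sketch in which a single exponential-smoothing term stands in for the full Winters (level/trend/seasonality) forecast used in the study; the dispatch rule sends the ambulance to the ED with the lowest predicted TTP. All class and variable names and all numbers are illustrative assumptions, not the AnyLogic implementation.

```python
class EDState:
    """Keeps a smoothed forecast of an ED's door-to-provider waiting time.
    A full Winters model would add trend and seasonal terms; this is only a stand-in."""
    def __init__(self, initial_wait_min, alpha=0.3):
        self.wait_estimate = initial_wait_min
        self.alpha = alpha

    def update(self, observed_wait_min):
        # Refresh the forecast whenever an actual waiting time is observed.
        self.wait_estimate = (self.alpha * observed_wait_min
                              + (1 - self.alpha) * self.wait_estimate)

def select_hospital(travel_times_min, ed_states):
    """Dynamic hospital selection: minimise predicted TTP = travel time + forecast wait."""
    predicted_ttp = {ed: travel_times_min[ed] + state.wait_estimate
                     for ed, state in ed_states.items()}
    best = min(predicted_ttp, key=predicted_ttp.get)
    return best, predicted_ttp

# Illustrative usage (all numbers hypothetical)
eds = {"ED_A": EDState(35.0), "ED_B": EDState(70.0)}
choice, ttp = select_hospital({"ED_A": 18.0, "ED_B": 9.0}, eds)  # picks "ED_A" here
```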

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 84
1381 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Harbor protection systems are of great importance, and the need for automatic protection systems has increased in recent years. Active diver detection sonar is particularly significant: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar images are normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned; the tuning is based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage to eliminate noisy and unstable tracks, as well as objects with speeds outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage to decide the type of the tracked object, namely whether it is an open-circuit or a closed-circuit diver. At the classification stage, a small area around the object is extracted, a novel wake detection method is applied, and the morphological features of the object together with its wake are extracted. We used a support vector machine to find the best classifier. The sonar training images and test images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers. The total diver-type classification accuracy achieved is 97%.
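
The tracking stage described above rests on a standard linear Kalman filter; the sketch below is a generic constant-velocity version with placeholder noise settings, not the values tuned to the ARAS-2023 sonar specifications.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter for tracking segmented sonar contacts."""
    def __init__(self, dt, process_var=0.5, meas_var=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # state transition for (x, y, vx, vy)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # only position is measured
        self.Q = process_var * np.eye(4)           # process noise (placeholder)
        self.R = meas_var * np.eye(2)              # measurement noise (placeholder)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # z is the centroid (x, y) of a segmented sonar contact
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```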

Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification

Procedia PDF Downloads 229
1380 Correlative Study of Serum Interleukin-18 and Disease Activity, Functional Disability and Quality of Life in Rheumatoid Arthritis Patients

Authors: Hamdy Khamis Korayem, Manal Yehia Tayel, Abeer Shawky El Hadedy, Emmanuel Kamal Aziz Saba, Shimaa Badr Abdelnaby Badr

Abstract:

The aim of the current study was to demonstrate whether serum interleukin-18 (IL-18) is increased in rheumatoid arthritis (RA) and to assess its correlation with disease activity, functional disability and quality of life in RA patients. The study included 30 RA patients and 20 healthy control subjects. The RA patients were diagnosed according to the 2010 ACR/EULAR classification criteria for RA, with the exclusion of those who had diabetes mellitus, endocrine disorders, associated rheumatologic diseases, viral hepatitis B or C, or other diseases with increased serum IL-18 levels. All patients underwent clinical evaluation of the musculoskeletal system. Disease activity was assessed by the disease activity score 28 with 4 variables (DAS 28). Functional disability was assessed by the health assessment questionnaire disability index (HAQ-DI). Quality of life was assessed by the Short Form-36 (SF-36) questionnaire. Radiological assessment of both hands and feet was performed using the Sharp/van der Heijde (SvH) scoring method. Laboratory parameters including erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), rheumatoid factor (RF) and anti-cyclic citrullinated peptide antibody (ACPA) were assessed in patients, and the serum level of IL-18 was measured in both patients and control subjects. There was no statistically significant difference between the patient and control groups as regards age and sex. Among patients, 29% were females and the age range was 25 to 55 years. Extra-articular manifestations were present in 56.7% of the patients. The mean DAS 28 score was 5.73±1.46, the mean HAQ-DI was 1.22±0.72, and the mean SF-36 was 40.03±13.96. The level of serum IL-18 was significantly higher in patients than in the control subjects (P = 0.030). Serum IL-18 was correlated with ACPA among the patient group. There were no statistically significant correlations between serum IL-18 and DAS 28, HAQ-DI, SF-36, total SvH score or the other laboratory results. In conclusion, IL-18 is significantly higher in RA patients than in healthy control subjects, is positively correlated with ACPA level, and is associated with extra-articular manifestations. However, it is not correlated with the other laboratory parameters, disease activity, functional disability, quality of life or radiological severity.

Keywords: disease activity score, Interleukin-18, quality of life assessment, rheumatoid arthritis

Procedia PDF Downloads 319
1379 Determination of Bromides, Chlorides and Fluorides in Case of Their Joint Presence in Ion-Conducting Electrolyte

Authors: V. Golubeva, O. Vakhnina, I. Konopkina, N. Gerasimova, N. Taturina, K. Zhogova

Abstract:

To improve chemical current sources, ion-conducting electrolytes based on Li halides (LiCl-KCl, LiCl-LiBr-KBr, LiCl-LiBr-LiF) are being developed. Chemical analytical methods for the determination of halides are needed to control the electrolyte technology. The methods of classical analytical chemistry are of interest, as they are characterized by high accuracy; however, using them is a difficult task because halides have similar chemical properties. The objective of this work is to develop a titrimetric method for determining the content of bromides, chlorides and fluorides in their joint presence in an ion-conducting electrolyte. In accordance with the developed method of analysis, to determine fluorides, an electrolyte sample is dissolved in diluted HCl acid and the fluorides are titrated with a La(NO₃)₃ solution with potentiometric indication of the equivalence point, using a fluoride ion-selective electrode as the sensor. Chlorides and bromides do not form sparingly soluble compounds with La and do not interfere with the result of the analysis. To determine the bromides, the sample is dissolved in diluted H₂SO₄ acid. The bromides are oxidized with a solution of KIO₃ to Br₂, which is removed from the reaction zone by boiling, and the excess KIO₃ is titrated by the iodometric method. The content of bromides is calculated from the amount of KIO₃ spent on Br₂ oxidation. Chlorides and fluorides are not oxidized by KIO₃ and do not interfere with the result of the analysis. To determine the chlorides, the sample is dissolved in diluted HNO₃ acid and the total content of chlorides and bromides is determined by visual mercurometric titration with a diphenylcarbazone indicator. Fluorides do not form a sparingly soluble compound with mercury and do not interfere with the determination. The content of chlorides is calculated taking into account the content of bromides in the electrolyte sample. The developed analytical method was validated by analyzing an internal reference material with known chloride, bromide and fluoride content. The method allows the determination of chlorides, bromides and fluorides in their joint presence in an ion-conducting electrolyte within the following ranges and with the following relative total errors (δ): bromides from 60.0 to 65.0%, δ = ±2.1%; chlorides from 8.0 to 15.0%, δ = ±3.6%; fluorides from 5.0 to 8.0%, δ = ±1.5%. The method can be applied to electrolytes and mixtures that contain chlorides, bromides and fluorides of alkali metals (K, Na, Li) and their mixtures.
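
The bromide result follows from simple back-titration arithmetic: the KIO₃ consumed equals the amount added minus the excess found iodometrically, scaled by the stoichiometry of the bromide-iodate reaction. A hedged sketch of this calculation is shown below; the stoichiometric ratio is left as an explicit input because it depends on the reaction conditions and is not stated in the abstract.

```python
def bromide_mass_fraction(kio3_added_mol, kio3_excess_mol,
                          mol_br_per_mol_kio3, sample_mass_g,
                          molar_mass_br=79.904):
    """Back-titration arithmetic (illustrative only):
    KIO3 consumed = KIO3 added - KIO3 left over (found by iodometric titration);
    bromide oxidised = consumed KIO3 * assumed stoichiometric ratio."""
    kio3_consumed = kio3_added_mol - kio3_excess_mol
    br_mol = kio3_consumed * mol_br_per_mol_kio3            # stoichiometric ratio is an assumption
    return 100.0 * br_mol * molar_mass_br / sample_mass_g   # bromide content, % w/w
```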

Keywords: bromides, chlorides, fluorides, ion-conducting electrolyte

Procedia PDF Downloads 115
1378 Bridging Binaries: Exploring Students' Conceptions of Good Teaching within Teacher-Centered and Learner-Centered Pedagogies of Their Teachers in Disadvantaged Public Schools in the Philippines

Authors: Julie Lucille H. Del Valle

Abstract:

To improve its public school education, the Philippines took a radical curriculum reform in 2012, by launching the K-to-12 program which not only added two years to its basic education but also mandated for a replacement of traditional teaching with learner-centered pedagogy, an instruction whose western underpinnings suggest improving student achievement, thus, making pedagogies in the country more or less similar with those in Europe and USA. This policy, however, placed learner-centered pedagogy in a binary opposition against teacher-centered instruction, creating a simplistic dichotomy between good and bad teaching. It is in this dichotomy that this study seeks to explore, using Critical Pedagogy of the Place as the lens, in understanding what constitutes good teaching across a range of learner-centered and teacher-centered pedagogies in the context of public schools in disadvantaged communities. Furthermore, this paper examines how pedagogical homogeneity, arguably influenced by dominant global imperatives with economic agenda – often referred as economisation of education – not only thins out local identities as structures of global schooling become increasingly similar but also limits the concept of good teaching to student outcomes and corporate employability. This paper draws from qualitative research on students, thus addressing the gap created by studies on good teaching which looked mainly into the perceptions of teachers and administrators, while overlooking those of students whose voices must be considered in the formulation of inclusive policies that advocate for true education reform. Using ethnographic methods including student focus groups, classroom observations, and teacher interviews, responses from students of disadvantaged schools reveal that good teaching includes both learner-centered and teacher-centered practices that incorporate ‘academic caring’ which sustains their motivation to achieve in school despite the challenging learning environments. The combination of these two pedagogies equips students with life-long skills necessary to gain equal access to sustainable economic opportunities in their local communities.

Keywords: critical pedagogy of the place, good teaching, learner-centered pedagogy, placed-based instruction

Procedia PDF Downloads 248
1377 Improved Intracellular Protein Degradation System for Rapid Screening and Quantitative Study of Essential Fungal Proteins in Biopharmaceutical Development

Authors: Patarasuda Chaisupa, R. Clay Wright

Abstract:

The selection of appropriate biomolecular targets is a crucial aspect of biopharmaceutical development. The Auxin-Inducible Degron Degradation (AID) technology has demonstrated remarkable potential in efficiently and rapidly degrading target proteins, thereby enabling the identification and acquisition of drug targets. The AID system also offers a viable method to deplete specific proteins, particularly in cases where the degradation pathway has not been exploited or when the adaptation of proteins, including the cell environment, occurs to compensate for the mutation or gene knockout. In this study, we have engineered an improved AID system tailored to deplete proteins of interest. This AID construct combines the auxin-responsive E3 ubiquitin ligase binding domain, AFB2, and the substrate degron, IAA17, fused to the target genes. Essential genes of fungi with the lowest percent amino acid similarity to human and plant orthologs, according to the Basic Local Alignment Search Tool (BLAST), were cloned into the AID construct in S. cerevisiae (AID-tagged strains) using a modular yeast cloning toolkit for multipart assembly and direct genetic modification. Each E3 ubiquitin ligase and IAA17 degron was fused to a fluorescence protein, allowing for real-time monitoring of protein levels in response to different auxin doses via cytometry. Our AID system exhibited high sensitivity, with an EC50 value of 0.040 µM (SE = 0.016) for AFB2, enabling the specific promotion of IAA17::target protein degradation. Furthermore, we demonstrate how this improved AID system enhances quantitative functional studies of various proteins in fungi. The advancements made in auxin-inducible protein degradation in this study offer a powerful approach to investigating critical target protein viability in fungi, screening protein targets for drugs, and regulating intracellular protein abundance, thus revolutionizing the study of protein function underlying a diverse range of biological processes.
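
The reported EC50 of 0.040 µM is the kind of parameter typically obtained by fitting a Hill-type dose-response curve to the cytometry read-out. The sketch below shows one way such a fit could be done; the function form, variable names and starting guesses are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_response(dose_uM, f0, f_inf, ec50_uM, hill):
    """Four-parameter Hill curve running from f0 (no auxin) to f_inf (saturating auxin)."""
    d = np.maximum(dose_uM, 1e-12)              # avoid division issues at zero dose
    return f_inf + (f0 - f_inf) / (1.0 + (d / ec50_uM) ** hill)

def fit_ec50(auxin_uM, fluorescence):
    """Fit per-dose reporter fluorescence (numpy arrays) and return the EC50 with its
    standard error; degradation means fluorescence falls as the auxin dose rises."""
    i_lo, i_hi = np.argmin(auxin_uM), np.argmax(auxin_uM)
    p0 = [fluorescence[i_lo], fluorescence[i_hi],
          np.median(auxin_uM[auxin_uM > 0]), 1.0]
    popt, pcov = curve_fit(hill_response, auxin_uM, fluorescence,
                           p0=p0, bounds=(0, np.inf))
    return popt[2], np.sqrt(np.diag(pcov))[2]   # EC50 and its standard error
```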

Keywords: synthetic biology, bioengineering, molecular biology, biotechnology

Procedia PDF Downloads 78
1376 Barriers to Current Mental Health Assessment in India

Authors: Suantak Demkhosei Vaiphei

Abstract:

Mental illness is still considered an illness not to be treated, resulting in India becoming the most depressed country in the world. At present, 150 million Indians are suffering from mental illness and are desperately in need of immediate assessment and care for their mental health condition. However, only 0.06 per cent of India’s health budget is devoted to mental health treatment, and the available data suggest that even this sanctioned budget is spent abysmally. Lack of awareness, ignorance, social stigma and discrimination are the underlying factors worsening individual mental health conditions. According to the latest World Health Organization report, India is the most depressed country in the world, hugely affected by anxiety, schizophrenia and bipolar disorder, followed by China and the USA. The National Care of Medical Health stated that at least 6.5 per cent of the Indian population suffers from a serious mental disorder, in both rural and urban areas. Mental health is an integral part of health and can be affected by a range of psychosocial-economic factors that need a comprehensive strategic approach for promotion, prevention, treatment and recovery. In a low- and middle-income country like India, progress in mental health services has been consistently slow and minimal. Some of the major barriers can be seen in the existing public health priorities and their influence on funding; challenges to the delivery of basic mental health care in primary care settings; the minimal number of well-trained professionals in the area of mental health care; and the lack of a mental health perspective in public-health leadership. According to WHO (2007), the lack of funding for mental health services is the core barrier to implementing quality mental health services; other barriers include inadequately coordinated and consensus-based national mental health advocacy and plans, the absence of mental health from major donor priorities, the marketing of expensive pharmaceuticals by industry, cost-effectiveness information on mental health services that is unknown to senior decision-makers, and social stigma, among others. Moreover, the lack of strong mental health advocacy in countries to increase resources for mental health services, together with social stigma and the view that mental health is a private responsibility, constitute two further barriers to mental health.

Keywords: mental health, depression, stigma, barriers

Procedia PDF Downloads 58
1375 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor the consequences of it have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history where a proxy state-level epistemological strategy existed was between 1961 and 1969 when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex so that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper also seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, DHS (Department of Homeland Security) evolved to respond to a new type of security threat in the world for the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 69
1374 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using antenna array of sensors is useful in the process of adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. For conventional adaptive array beamforming, we require a prior information of either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer will be deteriorated by any steering angle error encountered in many practical applications, e.g., the wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem due to steering angle error for array beamforming systems has become an important research work. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer against the steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vector of signal sources. Two projection matrices are generated from the matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used for appropriately finding the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with the existing robust techniques.
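
As a rough illustration of how a projection-based blocking matrix and quiescent weights fit together, the sketch below implements a textbook generalised sidelobe canceller over a preset angular mismatch range for a uniform linear array; it is not the authors' iterative direction-estimation algorithm, and the array geometry and all parameters are assumptions.

```python
import numpy as np

def steering_vector(n_sensors, theta_deg, spacing_wavelengths=0.5):
    """Steering vector of a uniform linear array (assumed geometry)."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing_wavelengths * k * np.sin(np.deg2rad(theta_deg)))

def gsc_beamformer(snapshots, presumed_theta_deg, tol_deg=5.0, n_grid=11):
    """Generalised-sidelobe-canceller sketch; snapshots has shape (n_sensors, n_snapshots)."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]       # sample covariance matrix
    grid = np.linspace(presumed_theta_deg - tol_deg, presumed_theta_deg + tol_deg, n_grid)
    A = np.column_stack([steering_vector(n, th) for th in grid])  # mismatch-region steering vectors
    P = A @ np.linalg.pinv(A.conj().T @ A) @ A.conj().T           # projection onto the signal region
    B = np.eye(n) - P                                             # signal-blocking projection
    w_q = steering_vector(n, presumed_theta_deg) / n              # quiescent weights
    # Adaptive part: minimise the output power of the blocked (signal-free) branch
    w_a = np.linalg.pinv(B.conj().T @ R @ B) @ (B.conj().T @ R @ w_q)
    return w_q - B @ w_a                                          # overall beamformer weights
```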

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 117
1373 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active compound with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and induction of hypotension; GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria (LAB) have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lactobacillus brevis produces the highest amount of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids seems to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; it contains useful compounds such as growth factors, carbon, nitrogen and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. Therefore, it is a good medium for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the GAD enzyme. In the present study, this aim was achieved by growing Lb. brevis rapidly and producing GABA using dairy industry sludge as a suitable growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121°C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30°C. After fermentation, the culture was centrifuged and the GABA produced in the supernatant was analyzed qualitatively by thin layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). Increasing the percentage of dairy sludge in the culture medium increased the amount of GABA. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of GABA-producing strains, especially high-yielding ones, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly, and the use of dairy industry waste enhances environmental safety while providing the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and by providing carbon and nitrogen sources it can reduce the final production cost.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 130
1372 Activity of Commonly Used Intravenous Nutrient and Bisolvon in Neonatal Intensive Care Units against Biofilm Cells and Their Synergetic Effect with Antibiotics

Authors: Marwa Fady Abozed, Hemat Abd El Latif, Fathy Serry, Lotfi El Sayed

Abstract:

The purpose of this study was to investigate the efficacy of intravenous nutrients (Soluvit, Vitalipid, Aminoven Infant, Lipovenos) and Bisolvon, commonly used in neonatal intensive care units, against biofilm cells of Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas aeruginosa and Klebsiella pneumoniae, as these are the most commonly isolated organisms and are biofilm producers. In addition, the synergistic activity of Soluvit, heparin and Bisolvon with antibiotics, and its effect on the minimum biofilm eradication concentration (MBEC), was tested. Intravenous nutrients and bromhexine are widely used in newborns. The numbers of viable cells released from the biofilm after treatment with the intravenous nutrients and bromhexine were counted to compare efficacy. The percentage reduction in biofilm regrowth with Soluvit was 43-51% and 36-42% for Gram-positive and Gram-negative organisms, respectively; with Vitalipid the percentages were 45-50% and 37-41%, respectively. With Bisolvon the percentages were 46-52% and 47-48%, with Lipovenos 48-52% and 48-49%, and with Aminoven Infant 10-15% and 9-11%, for Gram-positive and Gram-negative organisms, respectively. Adding Soluvit, heparin and Bisolvon to antibiotics had a synergistic effect. Soluvit with ciprofloxacin gave an 8-16-fold decrease in MBEC compared with ciprofloxacin alone, while adding Soluvit to vancomycin reduced the MBEC 16-fold compared with vancomycin alone. In combination with cefotaxime, amikacin and gentamicin, Soluvit gave MBEC reductions of 16-, 8- and 6-32-fold, respectively. The synergistic effect of adding heparin to ciprofloxacin, vancomycin, cefotaxime, amikacin and gentamicin was a 2-fold reduction in all cases, except that for Gram-negative organisms the reduction ranged from 0- to 2-fold with both gentamicin and ciprofloxacin. Bisolvon exhibited a synergistic effect with ciprofloxacin, vancomycin, cefotaxime, amikacin and gentamicin, with 16-, 32-, 32-, 8-, 32-64- and 32-fold decreases in MBEC, respectively.

Keywords: biofilm, neonatal intensive care units, antibiofilm agents, intravenous nutrient

Procedia PDF Downloads 322
1371 Communication Strategies of Russian-English Asymmetric Bilinguals Given Insufficient Language Faculty

Authors: Varvara Tyurina

Abstract:

In the age of globalization Internet communication as a new format of interactions have become an integral part of our daily routine. Internet environment allows for new conditions and provides participants to a communication act with extra communication tools which can be used on Internet forums or in chat rooms. As a result communicants tend to alternate their behavior patterns in contrast to those practiced in live communication. It is not yet clear which communication strategies participants to Internet communication abide by and what determines their choices. Given the continually changing environment of a forum or a chat the behavior of a communicant can be interpreted in terms of autopoiesis theory which sees adaptation as the major tool for coexistence between the living system and its niche. Each communication act is seen as interaction between the communicant (i.e. the living system) and the overall environment of the forum (i.e. the niche) rather than one particular interlocutor. When communicating via the Internet participants are believed to aim at reaching a balance between themselves and the environment of a forum or a chat. The research focuses on unveiling the adaptation strategies employed by a communicant in particular cases and looks into the reasons they are employed. There is a correlation between language faculty of the communicants and the strategies they opt for when communicating on Internet forums and in chat rooms. The research included an experiment with a sample of Russian-English asymmetric bilinguals aged 16-25. Respondents were given two texts of equivalent contents, but of different language complexity. They had to respond to the texts as if they were making a reciprocal comment at a forum. It has been revealed that when communicants realize that their language faculty is not sufficient to understand the initial text they tend to amend their communication strategy in order to maintain the balance with the niche (remain involved in the communication). Most common strategies for responding to a difficult-to-understand text were self-presentation, veiling poor language faculty and response evasion. The research has so far focused on a very narrow aspect of correlation between language faculty and communication behavior, namely the syntactic and lexicological complexity of initial texts. It is essential to conduct a series of experiments that dwell on other characteristics of the texts to determine the range of cases when language faculty determines the choice of adaptation strategy.

Keywords: adaptation, communication strategies, internet communication, verbal interaction, autopoiesis theory

Procedia PDF Downloads 353
1370 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses

Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux

Abstract:

Hygiene of equipment in contact with users is an important issue in the railroad industry, and the numerous cleaning operations needed to eliminate bacteria and dirt are costly. In addition, contact parts are subject to daily mechanical stresses. It is therefore of interest to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance to mechanical and chemical stresses. Thus, a Ph.D. thesis co-financed by Hauts-de-France and the Maubeuge Val-de-Sambre conurbation authority has been under way since October 2017, building on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. To accomplish this task, a soft chemistry route has been implemented to impart a lotus effect to metallic substrates. It involves liquid-phase synthesis of nanometric zinc oxide below 100°C. The originality here lies in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide was chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two heating methods have been compared: conventional and microwave-assisted. The tested substrates are made of stainless steel to conform to transport uses. Substrate preparation was the first step of the protocol: the samples are meticulously cleaned. The main goal of the elaboration protocol is to fix enough zinc-based seeds for them to grow as desired (into nanorods) during the next step. To improve this adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a carbonated organosilane to improve the superhydrophobic property of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (above 150°) and roll-off angles at different steps of the process will be presented. The antibacterial effect has been demonstrated with Escherichia coli, Staphylococcus aureus and Bacillus subtilis: the bacterial mortality rate is found to be four times higher than on a non-treated substrate. Photocatalytic experiments were carried out with different dye solutions in contact with treated samples under UV irradiation, and spectroscopic measurements allow the degradation times to be determined as a function of the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a set of amorphous/crystalline/amorphous layers that have been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process and easy-to-control coating with self-cleaning and antibacterial properties has been synthesized, with satisfactory surface structuring.

Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide

Procedia PDF Downloads 134
1369 Ethanol Precipitation and Characterization of L-Asparaginase from Aspergillus oryzae

Authors: L. L. Tundisi, A. Pessoa Jr., E. B. Tambourgi, E. Silveira, P. G. Mazzola

Abstract:

L-asparaginase (L-ASNase) is the gold standard treatment for acute lymphoblastic leukemia, a disease that mainly affects pediatric patients; treatment increases survival from 20% to 90%. The characterization of other L-asparaginases, apart from the most used enzymes from Escherichia coli and Erwinia chrysanthemi, has been reported, but the choice of the most appropriate one is still under debate. This choice should be based on pharmacokinetics, immune hypersensitivity, dosing, price and pharmacodynamics. The main factors influencing the antileukemic activity of ASNase are enzymatic activity, Km, glutaminase activity, clearance of the enzyme and development of resistance. However, most of the commercialized enzymes present an intrinsic glutaminase activity, which is responsible for some side effects. In this study, glutaminase-free asparaginase produced from Aspergillus oryzae was precipitated at different ethanol percentages (0-80%) until the optimum ethanol concentration of 60% (w/w) was found. Precipitation of the crude L-ASNase was then performed in a single step using 60% (w/w) ethanol, under constant agitation and temperature. It presented an activity of 135.45 U/mg, and after gel filtration chromatography on a Sephadex G column the enzymatic activity was 322.02 U/mg. The apparent molecular mass of the purified L-ASNase fraction was estimated by 10% SDS-PAGE, with proteins stained with Coomassie Brilliant Blue R-250; the molar mass range examined was from 10 kDa to 250 kDa. L-ASNase from Aspergillus oryzae was characterized with a view to possible therapeutic use. Four different buffers (phosphate-citrate buffer pH 2.6 to 5.8; phosphate buffer pH 5.8 to 7.4; Tris-HCl pH 7.4 to 9.0; and carbonate buffer pH 9.8 to 10.6) were used to determine the optimum pH for L-ASNase activity. The optimum temperature for enzyme activity was measured under optimal pH conditions (Tris-HCl and phosphate buffer, pH 7.4) at temperatures ranging from 5 to 55°C. All activities were calculated by quantifying the free ammonia using the Nessler reagent. The kinetic parameters, i.e. the Michaelis-Menten constant (Km), maximum velocity (Vmax) and Hill coefficient (n), were calculated by incubating the enzyme with different substrate concentrations under optimum pH conditions and fitting the data to the Hill equation. This glutaminase-free asparaginase showed a low Km (3.39 mM and 3.81 mM) and an enzymatic activity of 135.45 U/mg after precipitation with ethanol, which rose to 322.02 U/mg after gel filtration chromatography. Optimum activity was found between pH 5.8 and 9.0, with the best results in phosphate buffer pH 7.4 and Tris-HCl pH 7.4, and the enzyme showed activity from 5°C to 55°C. These results indicate that L-ASNase from A. oryzae has potential for human use.
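
The kinetic analysis amounts to fitting the Hill form of the rate equation to rate-versus-substrate data; a minimal sketch is shown below. The substrate concentrations and activities in the example are made up for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_rate(substrate_mM, vmax, km, n):
    """Hill rate equation: v = Vmax * S^n / (Km^n + S^n)."""
    return vmax * substrate_mM ** n / (km ** n + substrate_mM ** n)

# Hypothetical rate data (ammonia release quantified with Nessler reagent at pH 7.4)
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])          # L-asparagine, mM
v = np.array([14.0, 27.0, 58.0, 90.0, 116.0, 128.0, 133.0])  # U/mg, illustrative only

popt, _ = curve_fit(hill_rate, s, v, p0=[v.max(), 3.5, 1.0], bounds=(0, np.inf))
vmax_fit, km_fit, n_fit = popt   # compare km_fit with the reported 3.39-3.81 mM range
```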

Keywords: biopharmaceuticals, bioprocessing, bioproducts, biotechnology, enzyme activity, ethanol precipitation

Procedia PDF Downloads 280
1368 Sediment Transport Monitoring in the Port of Veracruz Expansion Project

Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando

Abstract:

Most coastal infrastructure developments around the world are designed considering wave height, current velocities and river discharges; however, little effort has been devoted to surveying sediment transport during dredging or the modification of currents outside ports or marinas during and after construction. This study presents a complete survey carried out during the construction of one of the largest ports in the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction site in order to describe the hourly sediment transport and the current modifications inside and outside the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low-wave-energy system with a tidal range of up to 0.40 m. The results show a two-gyre circulation pattern within the bay: the northern side of the bay has an anticyclonic gyre, while the southern part shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model and the weekly data obtained from the towed ADCP over the entire bay. The sediment transport trajectories were carefully tracked, since the bay is surrounded by coral reef structures which are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging and the placement of rock used to build the breakwater, sediments were added locally (< 2500 m²) and dispersed by local currents in less than 4 h, while the river input located in the middle of the bay and the sewage treatment plant may add more than 10 times this amount during a rainy day or during the tourist season. Finally, the coastline mapped seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port in the northern part of the bay, owing to the two-subsystem division of the bay.
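
The hourly trajectories described above can be thought of as advection of a sediment parcel through the measured current field; the sketch below illustrates a forward-Euler version of that idea with a made-up grid and time step, not the study's actual trajectory code.

```python
import numpy as np

def advect_parcel(u_field, v_field, start_xy_m, dt_s=3600.0, n_steps=24, dx_m=100.0):
    """Hourly forward-Euler advection of a sediment parcel (illustrative sketch).
    u_field, v_field: velocities in m/s with shape (n_steps, ny, nx);
    start_xy_m: initial (x, y) position in metres within the grid."""
    x, y = float(start_xy_m[0]), float(start_xy_m[1])
    track = [(x, y)]
    for t in range(min(n_steps, u_field.shape[0])):
        j = int(np.clip(y / dx_m, 0, u_field.shape[1] - 1))   # nearest grid row
        i = int(np.clip(x / dx_m, 0, u_field.shape[2] - 1))   # nearest grid column
        x += u_field[t, j, i] * dt_s                          # eastward displacement
        y += v_field[t, j, i] * dt_s                          # northward displacement
        track.append((x, y))
    return np.array(track)
```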

Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring

Procedia PDF Downloads 220
1367 The Effectiveness of Kinesio Taping in Enhancing Early Post-Operative Outcomes Inpatients after Total Knee Replacement or Anterior Cruciate Ligament Reconstruction

Authors: B. A. Alwahaby

Abstract:

Background: The number of Total Knee Replacement (TKR) and Anterior Cruciate Ligament Reconstruction (ACLR) procedures performed every year is increasing. The main aim of early post-operative physiotherapy rehabilitation after these surgeries is to control pain and edema and to regain Range of Motion (ROM) and physical activity, and all of these outcomes need to be managed by safe and effective modalities. Kinesio taping (KT) is an elastic, non-invasive therapeutic tape that has become recognised in different physiotherapy contexts such as injury prevention, rehabilitation and performance enhancement, and has been used for a variety of conditions. However, there is still clinical doubt regarding the effectiveness of KT due to inconclusive supporting evidence. The aim of this systematic review is to collate all the available evidence on the effectiveness of KT in the early rehabilitation of ACLR and TKR patients and to analyse whether KT combined with standard rehabilitation facilitates recovery of post-operative outcomes better than standard rehabilitation alone. Methodology: A systematic review was conducted. Medline, EMBASE, Scopus, AMED, PEDro, CINAHL and Web of Science databases were searched. Each study was assessed for inclusion, and methodological quality appraisal was undertaken by two reviewers using the JBI critical appraisal tools. The studies were then synthesised qualitatively due to heterogeneity between studies. Results: Five moderate- to low-quality RCTs were located. All five studies demonstrated statistically significant improvements in pain, swelling, ROM and functional outcomes (p < 0.05). In between-group comparisons, KT combined with standardised rehabilitation was shown to be significantly more effective than standardised rehabilitation alone for pain and swelling (p < 0.05). However, findings for ROM were inconsistent, and no statistically significant between-group differences were reported for functional outcomes (p > 0.05). Conclusion: Research in the area is generally of low quality; however, there is consistent evidence to support the use of KT combined with standardised post-operative rehabilitation for reducing pain and swelling. There is also some evidence that KT combined with standardised rehabilitation may help regain knee extension ROM faster than standardised rehabilitation alone, but further primary research is required to confirm this.

Keywords: anterior cruciate ligament reconstruction, ACLR, kinesio taping, KT, postoperative, total knee replacement, TKR

Procedia PDF Downloads 112
1366 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels remain the mainstay of development in most countries. Regrettably, the rate of fossil fuel consumption is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many problems of environmental pollution resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea-wave motion is one of the cheapest types of clean energy, and harnessing sea waves poses little environmental risk. Cheap electricity can be generated from wave energy using different systems such as the oscillating-bodies system, the pendulum gate system, the ocean wave dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. This paper provides a widespread overview of the different design alternatives for sea-wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a hollow chamber at the shore which is completely closed except at its base, where an open area gathers the moving sea waves. The wave motion pushes the air up and down through a Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Finally, the paper compares the theoretical and experimental results obtained with the built prototype.
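
As a brief illustration of the AHP weighting step described above, the sketch below derives criteria priorities from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The criteria names and matrix entries are illustrative placeholders, not the judgments used in the study.

```python
# Minimal AHP sketch: derive criteria weights from a pairwise comparison
# matrix via the principal eigenvector and check consistency (CR < 0.1).
# The matrix entries and criteria names below are illustrative only.
import numpy as np

criteria = ["cost", "efficiency", "reliability", "environmental impact"]
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalised priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)                     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
CR = CI / RI                                     # consistency ratio

for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
print(f"consistency ratio = {CR:.3f} (acceptable if < 0.1)")
```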

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 154
1365 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of available data is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work focuses on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
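
The abstract does not give implementation details, but the idea of deliberately overfitting a fine-tuned detector can be sketched with torchvision's Mask R-CNN: a pretrained model is trained on a tiny, locally relevant sample, without augmentation or validation-based early stopping. The synthetic single-tile dataset and all hyperparameters below are assumptions for illustration only, not the study's configuration.

```python
# Sketch of deliberately overfitting a pretrained Mask R-CNN on a very small
# sample (no augmentation, no validation-based early stopping). A real run
# would load imagery tiles and open building-footprint vectors instead of
# the synthetic tile generated here.
import torch
import torchvision

def make_sample():
    """One synthetic 256x256 tile with a single square 'building'."""
    img = torch.rand(3, 256, 256)
    mask = torch.zeros(1, 256, 256, dtype=torch.uint8)
    mask[0, 100:150, 100:150] = 1
    target = {
        "boxes": torch.tensor([[100.0, 100.0, 150.0, 150.0]]),
        "labels": torch.tensor([1], dtype=torch.int64),
        "masks": mask,
    }
    return img, target

samples = [make_sample() for _ in range(4)]      # tiny training sample
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

for epoch in range(50):                          # keep training well past convergence
    epoch_loss = 0.0
    for img, target in samples:
        loss_dict = model([img], [target])       # training mode returns a loss dict
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    # No validation check here: the aim is a model fitted tightly to this one
    # region, accepting poorer generalisation elsewhere.
    print(epoch, round(epoch_loss, 3))
```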

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 164
1364 A Flipped Learning Experience in an Introductory Course of Information and Communication Technology in Two Bachelor's Degrees: Combining the Best of Online and Face-to-Face Teaching

Authors: Begona del Pino, Beatriz Prieto, Alberto Prieto

Abstract:

Two opposite approaches to teaching can be considered: in-class learning (teacher-oriented) versus virtual learning (student-oriented). The best-known example of the latter is Massive Online Open Courses (MOOCs). Both methodologies have pros and cons, and nowadays there is an increasing trend towards combining them. Blended learning is considered a valuable tool for improving learning since it combines student-centred interactive e-learning and face-to-face instruction. The aim of this contribution is to exchange and share the experience and research results of a blended-learning project that took place at the University of Granada (Spain). The research objective was to show how combining the didactic resources of a MOOC with in-class teaching, interacting directly with students, can substantially improve academic results as well as student acceptance. The proposed methodology is based on the use of flipped learning techniques applied to the subject ‘Fundamentals of Computer Science’ in the first year of two degrees: Telecommunications Engineering, and Industrial Electronics. In this proposal, students acquire the theoretical knowledge at home through a MOOC platform, where they watch video lectures, take self-evaluation tests, and use other academic multimedia online resources. Afterwards, they attend in-class sessions where they carry out other activities in order to interact with teachers and the rest of the students (discussion of the videos, resolution of doubts, practical exercises, etc.), trying to overcome the disadvantages of self-regulated learning. The results are obtained from the grades of the students and their assessment of the blended experience, based on an opinion survey conducted at the end of the course. The major findings of the study are the following: the percentage of students passing the subject has grown from 53% (average from 2011 to 2014 using the traditional learning methodology) to 76% (average from 2015 to 2018 using the blended methodology). The average grade has improved from 5.20±1.99 to 6.38±1.66. The results of the opinion survey indicate that most students preferred the blended methodology to traditional approaches and positively valued both courses. In fact, 69% of students felt ‘quite’ or ‘very’ satisfied with the classroom activities; 65% of students preferred the flipped classroom methodology to traditional in-class lectures; and finally, 79% said they were ‘quite’ or ‘very’ satisfied with the course in general. The main conclusions of the experience are the improvement in academic results, as well as the highly satisfactory assessments obtained in the opinion surveys. The results confirm the huge potential of combining MOOCs in formal undergraduate studies with on-campus learning activities. Nevertheless, the results in terms of students’ participation and follow-up have a wide margin for improvement. The method is highly demanding for both students and teachers. As a recommendation, students must perform the assigned tasks with perseverance, every week, in order to take advantage of the face-to-face classes. This perseverance is precisely what needs to be promoted among students because it clearly brings about an improvement in learning.

Keywords: blended learning, educational paradigm, flipped classroom, flipped learning technologies, lessons learned, massive online open course, MOOC, teacher roles through technology

Procedia PDF Downloads 174
1363 Epidemiology of Hepatitis B and Hepatitis C Viruses Among Pregnant Women at Queen Elizabeth Central Hospital, Malawi

Authors: Charles Bijjah Nkhata, Memory Nekati Mvula, Milton Masautso Kalongonda, Martha Masamba, Isaac Thom Shawa

Abstract:

Viral hepatitis is a serious public health concern globally, with an estimated 1.4 million deaths annually due to liver fibrosis, cirrhosis, and hepatocellular carcinoma. Hepatitis B and C are the most common viruses that cause liver damage; however, the majority of infected individuals are unaware of their serostatus. Viral hepatitis has contributed to maternal and neonatal morbidity and mortality, and there are no updated data on the epidemiology of hepatitis B and C among pregnant women in Malawi. The aim of this study was to assess the epidemiology of hepatitis B and C viruses among pregnant women at Queen Elizabeth Central Hospital (QECH). The specific objectives were to determine the sero-prevalence of HBsAg and anti-HCV in pregnant women at QECH, to investigate risk factors associated with HBV and HCV infection in pregnant women, and to determine the distribution of HBsAg and anti-HCV infection among pregnant women of different age groups. A descriptive cross-sectional study was conducted among pregnant women at QECH in the last quarter of 2021. Of 114 pregnant women approached, 96 consented and were enrolled using a convenience sampling technique; 12 participants were dropped for various reasons, and therefore 84 completed the study. A semi-structured questionnaire was used to collect socio-demographic and behavioural characteristics to assess the risk of exposure. Serum was processed from venous blood samples and tested for HBsAg and anti-HCV markers using rapid screening assays for screening and Enzyme-Linked Immunosorbent Assay for confirmation. Of the 84 consenting pregnant women who participated in the study, 1.2% (n=1/84) tested positive for HBsAg and none had detectable anti-HCV antibodies. There was no significant association between HBV or HCV infection and any of the socio-demographic data or putative risk variables. The findings indicate a viral hepatitis prevalence lower than the range set by the WHO, suggesting that HBV and HCV are rare in pregnant women at QECH. Nevertheless, accessible screening for all pregnant women should be provided, as the prevention of mother-to-child transmission (MTCT) is key for reducing and preventing the global burden of chronic viral hepatitis.
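
As a supplementary illustration of the precision attached to a prevalence of 1 in 84, the sketch below computes an exact (Clopper-Pearson) 95% confidence interval; this interval is not reported in the study itself and is shown only to make the small-sample uncertainty explicit.

```python
# Clopper-Pearson exact 95% CI for the observed HBsAg prevalence (1/84).
# The abstract reports only the point estimate (1.2%); this interval is a
# supplementary illustration, not a result from the study.
from scipy.stats import beta

k, n, alpha = 1, 84, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print(f"prevalence = {k/n:.3%}, 95% CI = ({lower:.3%}, {upper:.3%})")
```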

Keywords: viral hepatitis, hepatitis B, hepatitis C, pregnancy, Malawi, liver disease, mother to child transmission

Procedia PDF Downloads 161
1362 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data

Authors: Sara Bonetti

Abstract:

The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, to allow for a broader perspective at the systems level. The study followed Núñez’s multilevel model of intersectionality. The key to Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at the individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization, their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and made available. These differences were reflected in the interviewees’ perceptions of and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.

Keywords: data literacy, early childhood professionals, intersectionality, quantitative data

Procedia PDF Downloads 244
1361 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smart phone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
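
The abstract notes that privacy-preserving record linkage technology is being developed to link records without names. One widely used approach in the literature (not necessarily the exact method adopted by the Centre for Data Linkage) encodes identifiers as Bloom filters of character bigrams using keyed hashes and compares the encodings with a Dice coefficient, as sketched below. The filter length, hash count, and shared key are illustrative assumptions.

```python
# Sketch of Bloom-filter-based privacy-preserving record linkage: names are
# encoded as bigram Bloom filters using keyed hashes, so a linkage unit can
# compare similarity without seeing clear-text identifiers. Parameters
# (filter length, number of hashes, secret key) are illustrative only.
import hashlib
import hmac

FILTER_LEN = 1000                     # number of bit positions
NUM_HASHES = 20
SECRET_KEY = b"shared-linkage-key"    # agreed between data custodians

def bigrams(name: str):
    s = f"_{name.lower().strip()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(name: str) -> set:
    """Return the set of bit positions set for this name's bigrams."""
    bits = set()
    for gram in bigrams(name):
        for i in range(NUM_HASHES):
            digest = hmac.new(SECRET_KEY, f"{i}|{gram}".encode(), hashlib.sha256)
            bits.add(int(digest.hexdigest(), 16) % FILTER_LEN)
    return bits

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

# Records from two jurisdictions, compared only via their encodings.
enc_a = bloom_encode("Katherine Smith")
enc_b = bloom_encode("Kathrine Smith")      # misspelling in the other dataset
print(f"Dice similarity = {dice(enc_a, enc_b):.2f}  (link if above a threshold)")
```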

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 212
1360 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and competitiveness maintained by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber blend with various characteristics, plays its own role in the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the position and velocity of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate which can evaluate the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
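
The full particle encoding in the paper covers both the compound processing sequence and machine allocation; the simplified sketch below illustrates only the core mechanics on a single mixer: random-key particles decoded into job sequences, a makespan evaluation with sequence-dependent setup times, and the standard PSO velocity/position update. All processing and setup times, and the PSO parameters, are illustrative assumptions.

```python
# Simplified PSO sketch for sequencing compounds on a single mixer with
# sequence-dependent setup times (the full problem also allocates jobs among
# alternative machines). Each particle is a vector of random keys; sorting
# the keys yields a job sequence. All times and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_jobs = 6
proc = rng.uniform(5, 15, n_jobs)                  # processing times
setup = rng.uniform(1, 4, (n_jobs, n_jobs))        # setup[i, j]: changeover i -> j

def makespan(keys):
    seq = np.argsort(keys)                         # decode random keys to a sequence
    total = proc[seq[0]]
    for prev, nxt in zip(seq[:-1], seq[1:]):
        total += setup[prev, nxt] + proc[nxt]
    return total

n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
X = rng.random((n_particles, n_jobs))              # positions (random keys)
V = np.zeros_like(X)                               # velocities
pbest, pbest_val = X.copy(), np.array([makespan(x) for x in X])
g = np.argmin(pbest_val)
gbest, gbest_val = pbest[g].copy(), pbest_val[g]

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    vals = np.array([makespan(x) for x in X])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    g = np.argmin(pbest_val)
    if pbest_val[g] < gbest_val:
        gbest, gbest_val = pbest[g].copy(), pbest_val[g]

print("best sequence:", np.argsort(gbest), "makespan:", round(gbest_val, 2))
```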

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 256
1359 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until its closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of the remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, the longer-term natural recovery rates predicted for select contaminants in the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality other than the remedial efforts were also evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when using different sampling methodologies, except for significant increases in total PAH concentrations noted during one year of remediation monitoring. The data confirmed the effectiveness of mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 231
1358 The Re-Emergence of Russia's Foreign Policy (Case Study: The Middle East)

Authors: Maryam Azish

Abstract:

Russia, as an emerging global player in recent years, has carved out a special place for itself in the Middle East. Despite all the challenges it has faced over the years, it has always maintained its presence in various fields, with a strategy that has defined its maneuvering power as a level of competition and even confrontation with the United States. Its current approach is therefore important to examine, as it is an influential actor in the Middle East. After the collapse of the Soviet Union, when the Russians withdrew completely from the Middle East, the regional scene remained almost unrivaled for the Americans. With the start of the US-led wars in Iraq and Afghanistan and the subsequent developments that led to American military and political setbacks, a new chapter in regional security opened in which ISIL and Taliban terrorism, together with the Arab Spring, destabilized the Middle East. Because of this, the Americans took every opportunity to strengthen their military presence. Iraq, Syria and Afghanistan have been the three areas where terrorism took shape, and the countries of the region have each reacted to this phenomenon accordingly. The West dealt with it on a case-by-case basis amid the circumstances that created the fluid situation in the Arab countries and the region. Russian President Vladimir Putin accused the US of falling asleep in the face of ISIS and terrorism in Syria. In fact, this was an opportunity for the Russians to revive their presence in Syria. This article suggests that using the politics of recognition along with constructivist theory offers a better understanding of Russia’s endeavors to assert its international position. Accordingly, Russia’s distinctiveness and its ambitions for great-power status have played a vital role in shaping national interests and, subsequently, foreign policy, particularly in the Putin era. The focal claim of the paper is that Russia’s foreign policy cannot be adequately scrutinized with realist methods. Consequently, with the aim of filling the prevailing vacuum, this study employs the politics of recognition in the context of constructivism to examine Russia’s foreign policy in the Middle East. The results of this paper show that the key aim of Russian foreign policy discourse, alongside increasing power and wealth, is to gain recognition of, and reinstate, its position as a great power in the global system. The Syrian crisis has created an opportunity for Russia to consolidate its position in the evolving global and regional order, after a long period of dynamic and widespread presence in the Middle East, as well as to counter US unilateralism. In the meantime, the author argues that the question of the West’s recognition of Russia’s position in the global system has played a foremost role in serving its national interests.

Keywords: constructivism, foreign policy, Middle East, Russia, regionalism

Procedia PDF Downloads 137
1357 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the ‘GEOSCAN-02M’ apparatus. Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ by light filters. The laser beam radius does not exceed 5 mm. As a result of the absorption of the laser pulse in a special material (the optoacoustic generator), pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, the amplitude of the rarefaction phase being five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude increases from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the reference pulse amplitude increases from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for locating cracks in optically opaque materials.
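
A toy illustration of the Preisach-Mayergoyz picture invoked above is sketched below: the medium is modeled as an ensemble of bistable hysteron (relay) elements, and driving it with a bipolar pulse of growing amplitude produces responses whose compression and rarefaction phases distort differently. The hysteron thresholds, weights, and input pulse are illustrative assumptions, not a calibrated model of the gabbro specimens.

```python
# Toy Preisach-Mayergoyz sketch: the medium response is a weighted sum of
# bistable hysteron (relay) elements with switch-up/switch-down thresholds
# alpha >= beta. Driving it with a bipolar pulse of growing amplitude shows
# how compression and rarefaction phases distort differently. Thresholds,
# weights, and the input pulse are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_hyst = 2000
alpha = rng.uniform(0.0, 1.0, n_hyst)              # switch-up thresholds (normalised)
beta = alpha - rng.uniform(0.0, 0.5, n_hyst)       # switch-down thresholds, beta <= alpha
weights = np.full(n_hyst, 1.0 / n_hyst)

def preisach_response(p):
    """Output of the hysteron ensemble for an input pressure history p(t)."""
    state = -np.ones(n_hyst)                        # all hysterons start 'down'
    out = np.empty_like(p)
    for t, pt in enumerate(p):
        state = np.where(pt >= alpha, 1.0, np.where(pt <= beta, -1.0, state))
        out[t] = np.sum(weights * state)
    return out

t = np.linspace(0.0, 1.0, 500)
for amp in (0.1, 0.5, 1.0):                         # normalised pulse amplitudes
    pulse = amp * np.sin(2 * np.pi * 3 * t) * np.exp(-((t - 0.5) / 0.2) ** 2)
    resp = preisach_response(pulse)
    print(f"amp={amp}: compression peak={resp.max():.3f}, "
          f"rarefaction peak={resp.min():.3f}")
```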

Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock

Procedia PDF Downloads 341