Search results for: age-sex accuracy index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6849

399 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a newly emerged term that addresses the quality of the urban built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The concept of neighbourhood energy efficiency has been taken seriously only recently, just as the cities, towns and other areas comprising the massive global urban stratum have begun to face a strong blow from climate change, the energy crisis, cost escalation and an alarming shortfall in the justice that urban areas require. This step towards urban sustainability can therefore be regarded as a 'retrofit action' intended to repair an already affected urban structure. Even if energy-efficiency measures are introduced for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly used term with countless parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, in which neighbourhood-scale indicators, block-level indicators and building-physics parameters can be understood, analyzed and synthesized into guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in parameters such as quality of life, access to green space, access to daylight, outdoor comfort and natural ventilation. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume energy. Considerable analysis has been done elsewhere, prominently in Spain, Paris and Hong Kong, leaving a distinct gap in the Indian scenario for exploring sustainability at the urban scale.
The site for the study has been selected in the upcoming capital city of Amaravati and can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical approach to quantify energy and sustainability indices in detail by involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both the macro and micro levels and subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify best-case scenarios, which are then extrapolated to the macro level, finally yielding a proposal model for an energy-efficient neighbourhood and worked-out guidelines with the significance and correlations derived.

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 155
398 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance considers significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, thus suggesting that the model provides a high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater efforts for data handling. All in all, the method developed is a time optimizer with a high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
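The delimitation-and-measurement step the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the toy binary mask, the 4-connected labeling, and the bounding-box "lateral measures" are all assumptions standing in for the U-net output and the paper's object delimitation algorithm.

```python
from collections import deque

def label_crystals(mask):
    """Label 4-connected foreground regions in a binary mask (list of lists of 0/1).
    Returns a dict mapping label -> list of (row, col) pixels."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    crystals = {}
    label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                label += 1
                pixels = []
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                crystals[label] = pixels
    return crystals

def measure(pixels):
    """Area (pixel count) and bounding-box lateral measures of one crystal."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return {"area": len(pixels),
            "height": max(ys) - min(ys) + 1,
            "width": max(xs) - min(xs) + 1}

# Two separated toy "crystals" in a 4x5 mask
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0]]
stats = {k: measure(v) for k, v in label_crystals(mask).items()}
```

From a table like `stats`, the frequency distributions by area and perimeter that the paper reports could then be aggregated per sample.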

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 138
397 Triple Case Phantom Tumor of Lungs

Authors: Angelis P. Barlampas

Abstract:

Introduction: The term phantom lung mass describes an ovoid collection of fluid within an interlobular fissure, which initially creates the impression of a mass. The problem of correct differential diagnosis is considerable, especially in plain radiography. A case is presented with three nodular pulmonary foci whose shape, location, and density, together with the presence of chronic loculated pleural effusions, suggest the presence of multiple phantom tumors of the lung. Purpose: The aim of this paper is to draw the attention of non-experienced and non-specialized physicians to the existence of benign findings that mimic pathological conditions and vice versa. The careful study of a radiological examination and comparison with previous exams or further workup protect against hasty wrong conclusions. Methods: A hospitalized patient underwent a non-contrast CT scan of the chest as part of the general workup of her condition. Results: Computed tomography revealed pleural effusions, some of them loculated, an increased cardiothoracic index, and the presence of three nodular foci, one in the left lung and two in the right, with a maximum density of up to 18 Hounsfield units and a mean diameter of approximately five centimeters. Two of them are located in the characteristic anatomical position of the major interlobular fissure. The third is located in the posterior basal part of the right lower lobe; it presents the same characteristics as the previous ones and is likely a loculated fluid collection within an accessory interlobular fissure or a cyst, in the context of the patient's more general pleural entrapments and loculations.
The differential diagnosis of nodular foci based on their imaging characteristics includes the following: a) rare metastatic foci with low density (liposarcoma, mucinous tumors of the digestive or genital system, necrotic metastatic foci, metastatic renal cancer, etc.), b) multiple necrotic primary lung tumor sites (squamous cell cancer, etc.), c) hamartomas of the lung, d) fibrous tumors of the interlobular fissures, e) lipoid pneumonia, f) fluid collections within the interlobular fissures, g) lipoma of the lung, h) myelolipomas of the lung. Conclusions: A collection of fluid within an interlobular fissure of the lung can give the false impression of a lung mass, particularly on plain chest radiography. With computed tomography, the ability to measure the density of a lesion, combined with the high anatomical detail it provides on the location and characteristics of the lesion, can lead relatively easily to the correct diagnosis. In cases of doubt or image artifacts, comparison with previous or subsequent examinations can resolve disagreements, while in rare cases intravenous contrast may be necessary.

Keywords: phantom mass, chest CT, pleural effusion, cancer

Procedia PDF Downloads 34
396 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows and are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While there is strong diversity among the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less-steep drop-off (later phases follow, which are set aside here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture, for example in the spectral energy distribution of the afterglow of a very bright GRB: the bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed to unfold the mysteries of the afterglow.
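The power-law spectrum F ∝ E^β mentioned above can be illustrated with a short round-trip sketch. The numerical values are hypothetical; the two-point recovery of β is just the algebraic inversion of the power law, not a method from the abstract.

```python
import math

def flux(k, beta, e):
    """Power-law flux F = k * E**beta."""
    return k * e ** beta

def spectral_index(e1, f1, e2, f2):
    """Spectral index beta of F = k * E**beta, recovered from fluxes
    measured at two energies: beta = ln(F1/F2) / ln(E1/E2)."""
    return math.log(f1 / f2) / math.log(e1 / e2)

# Round trip with an assumed (hypothetical) index beta = -0.9
k, beta = 2.0, -0.9
f1, f2 = flux(k, beta, 1.0), flux(k, beta, 10.0)
recovered = spectral_index(1.0, f1, 10.0, f2)
```

In practice β is fitted over many energy bins, but the two-point form shows why a log-log plot of a synchrotron spectrum is a straight line with slope β.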

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 71
395 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms: they can collect, process, and store data on their own, and can also run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately show the variance of each EEG frequency band, power density, standard deviation, and mean are calculated and employed.
The next stage is to use the selected features to predict emotion in EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. At the edge, EEG-based emotion identification can be employed in applications that can rapidly expand both research and industrial use.
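The frequency-domain-band-power-then-KNN pipeline described above can be sketched as follows. This is an illustrative toy in pure Python: the naive DFT stands in for the FFT, and the band edges, training vectors, and emotion labels are hypothetical; a real system would operate on streamed cEEGrid data.

```python
import math

def dft_power(signal, fs):
    """Naive DFT power spectrum; returns a list of (freq_hz, power).
    (A real-time system would use an FFT, as the paper does.)"""
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, (re * re + im * im) / n))
    return spectrum

def band_power(spectrum, lo, hi):
    """Mean power over one EEG frequency band [lo, hi] Hz."""
    vals = [p for f, p in spectrum if lo <= f <= hi]
    return sum(vals) / len(vals) if vals else 0.0

def knn_predict(train, query, k=1):
    """K-nearest-neighbour vote over (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical 1-second "EEG" at 64 Hz: a pure 10 Hz (alpha-band) sinusoid
fs = 64
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
spec = dft_power(signal, fs)
features = [band_power(spec, 8, 12), band_power(spec, 13, 30)]  # alpha, beta
train = [([8.0, 0.1], "calm"), ([0.1, 6.0], "excited")]         # toy training set
state = knn_predict(train, features)
```

The alpha-dominant toy signal lands nearer the "calm" prototype, so the 1-NN vote returns that label; swapping in real arousal/valence training data is the step the paper performs on the Jetson Nano.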

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 83
394 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements

Authors: Mohammad R. Bhuyan, Mohammad J. Khattak

Abstract:

Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of reflective cracks, and highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. As cement content increases, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but higher cement content also creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases for constructing flexible pavements. The State of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design, called cement treated design (CTD), uses 4 to 6 percent cement content and yields 1.0 MPa (150 psi) 7-day compressive strength. The reduction of cement content in the CTD base is expected to minimize shrinkage cracking and thus increase pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data.
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service-life extension and the area under the distress performance curve were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking as compared to CSD bases. On the other hand, service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service-life extension (2.6 years, on average) for the controlling distress, transverse cracking, while remaining inexpensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
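The direction of the benefit-cost comparison can be illustrated with a toy calculation. All figures below are hypothetical and the metric is a deliberate simplification; the study's actual benefits combine service-life extension and distress-performance area, not this single ratio.

```python
def cost_effectiveness(service_life_yrs, relative_cost):
    """Toy metric: years of service life delivered per unit of base cost.
    Both inputs are hypothetical, normalized values."""
    return service_life_yrs / relative_cost

# CTD: +2.6 years of service life on average, cheaper base (less cement).
# CSD: baseline life, baseline cost. Numbers chosen only for illustration.
ctd = cost_effectiveness(22.6, 0.9)
csd = cost_effectiveness(20.0, 1.0)
advantage = ctd / csd - 1  # fractional cost-effectiveness advantage of CTD
```

With these assumed inputs the CTD advantage comes out near the 20%-more-cost-effective range the study reports, but only the study's own benefit-cost analysis establishes that figure.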

Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement

Procedia PDF Downloads 143
393 Marketing in the Fashion Industry and Its Critical Success Factors: The Case of Fashion Dealers in Ghana

Authors: Kumalbeo Paul Kamani

Abstract:

Marketing plays a very important role in the success of any firm, since it represents the means through which a firm reaches its customers and promotes its products and services. In fact, marketing aids the firm in identifying customers whom the business can competitively serve and in tailoring product offerings, prices, distribution, promotional efforts, and services towards those customers. Unfortunately, in many firms, marketing has been reduced to mere advertisement. For effective marketing, firms must go beyond this often-limited function. In the fashion industry in particular, marketing faces challenges due to the industry's peculiar characteristics. Previous research affirms the idiosyncrasies that differentiate the fashion industry from other industrial areas: it has been documented that the fashion industry is characterized by seasonal intensity, short product life cycles, the difficulty of competitive differentiation, and the long time companies take to reach financial stability. These factors pose obstacles to fashion entrepreneurs' endeavours and may explain their low survival rates. In recent times, the fashion industry has been described as an accessible market with low entry barriers, both in terms of needed capital and skills, which together account for its burgeoning startups. Yet, as already stated, marketing is particularly challenging in the industry. In particular, areas such as marketing, branding, growth, project planning, and financial and relationship management may represent challenges for the fashion entrepreneur that have not been properly addressed by previous research. It is therefore important to assess the marketing strategies of fashion firms and the factors influencing their success. This study sought to examine the marketing strategies of fashion dealers in Ghana and their critical success factors.
The study employed the quantitative survey research approach. A total of 120 fashion dealers were sampled. Questionnaires were used as the instrument of data collection, and the data collected were analysed using quantitative techniques, including descriptive statistics and the Relative Importance Index. The study revealed that the marketing strategies used by fashion dealers are text messages using mobile phones, referrals, social media marketing, and direct marketing. Results again show that the factors influencing fashion marketing effectiveness are strategic management, the marketing mix (product, price, promotion, etc.), branding, and business development. Policy implications are finally outlined. The study recommends, among others, that top management executives craft and adopt marketing strategies that are compatible with fashion trends and the needs of customers. This will improve customer satisfaction and hence boost market penetration. The study further recommends that the fashion industry in Ghana seek to ensure that fashion apparel accommodates the diversity and cultural settings of different customers to meet their unique needs.
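The Relative Importance Index used in the analysis is commonly computed as RII = ΣW / (A × N), where W are the respondents' ratings of a factor, A is the highest point on the rating scale, and N is the number of respondents; values closer to 1 mark a more important factor. A small sketch with hypothetical ratings (the paper's actual ratings are not reproduced here):

```python
def relative_importance_index(ratings, scale_max):
    """RII = sum of ratings / (highest scale point * number of respondents)."""
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical 5-point Likert ratings from ten respondents for one factor
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
rii = relative_importance_index(ratings, scale_max=5)
```

Ranking factors by their RII values is what lets a survey study order, say, strategic management against branding as success factors.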

Keywords: marketing, fashion, industry, success factors

Procedia PDF Downloads 19
392 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies have been particularly challenging to manage due to the high degrees of their numerous uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any effort or financial resources put into managing them turn fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make; a good and well-protected patent goes a long way towards commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to move lab results to the later steps of managing early stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings; the ratings then assisted us in generating relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut schedule.
Such a schedule of realistic practices has yet to be witnessed in the current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes, and decision thresholds for key parameters. This research offers a more practical perspective that further completes the extant knowledge. Our results could be affected by our sample size and may be somewhat biased by our focus on the Silicon Valley area. Future research, blessed with bigger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
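The ratings-to-weights-to-weighted-average step described above can be sketched as follows. The factor names, ratings, scores, and cut threshold below are all hypothetical; the paper's actual factors, weights, and cut schedule come from its survey and expert panel.

```python
def normalized_weights(ratings):
    """Turn average survey ratings into relative weights that sum to 1."""
    total = sum(ratings.values())
    return {k: v / total for k, v in ratings.items()}

def patent_score(weights, scores):
    """Weighted-average patenting index over decision factors (0-10 scores)."""
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical decision factors and average survey ratings (1-5 scale)
ratings = {"novelty": 4.5, "market_size": 4.0, "enforceability": 3.5}
weights = normalized_weights(ratings)

# Hypothetical 0-10 scores for one candidate technology
scores = {"novelty": 8, "market_size": 6, "enforceability": 7}
index = patent_score(weights, scores)
decision = "file" if index >= 6.5 else "defer"  # hypothetical cut threshold
```

The "cut schedule" the paper validates would correspond to a vetted set of such thresholds rather than the single illustrative 6.5 used here.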

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 323
391 Adaptability in Older People: A Mixed Methods Approach

Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger

Abstract:

Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living within a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes within their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed-methods intervention study, carried out within a four-month period across seven wards of a long-term care hospital. As a planned intervention, a change of meal times was established. The inhabitants were surveyed with qualitative interviews, quantitative questionnaires, and diaries before, during, and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, and the average length of stay 45 months. Adaptation to new situations did not cause problems for older residents: 47% of the residents stated that their everyday life had not changed with the change of meal times, 24% indicated 'neither nor', and only 18% responded that their daily life had changed considerably due to the changeover. The diaries of the residents, which were kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, there was little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase) in the cross-table.
The subjective sleep quality of the residents was not affected. The nursing staff pointed out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or with moving into a long-term care facility; it takes only a few days to get used to new situations, which was confirmed by the nursing staff. There are, however, different determinants, such as health status, that might make adjustment to new situations more difficult. Among the limitations, the small sample size of the quantitative data collection must be emphasized. Furthermore, it is unclear to what extent the quantitative and qualitative samples represent the total population, since only residents of selected units without cognitive impairments participated, while the majority of residents have cognitive impairments. It is also important to discuss whether and how well the diary method is suitable for older people as a means of examining their daily structure.

Keywords: adaptability, intervention study, mixed methods, nursing home residents

Procedia PDF Downloads 126
390 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%.
HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic, showing a good correlation between the actual and the predicted homeless population, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policymakers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
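The evaluation metrics quoted above (MSE, R2, peak percentage error) can be computed as in this short sketch; the monthly counts used here are hypothetical, not the study's data.

```python
def mse(actual, predicted):
    """Mean squared error between two series."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def peak_pct_error(actual, predicted):
    """Largest absolute percentage error between the series."""
    return max(abs(a - p) / a for a, p in zip(actual, predicted)) * 100

# Hypothetical sheltered-homeless counts vs. model predictions
actual = [60000, 61000, 62000, 62679]
predicted = [59000, 61500, 61800, 63000]

err = mse(actual, predicted)
r2 = r_squared(actual, predicted)
peak = peak_pct_error(actual, predicted)
```

A strongly negative R2, like the -11.73 reported for the baseline model, simply means its predictions are far worse than predicting the mean of the series.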

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 98
389 Challenges of Carbon Trading Schemes in Africa

Authors: Bengan Simbarashe Manwere

Abstract:

The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share of, and participation in, the carbon market to limited access to electricity. Approximately 800 million people spread across 47 African countries generate as much power as Spain, with a population of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate the continent's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. By these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at the rate of R120/tCO2e, the tax intentionally compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the 1 June 2019 – 31 December 2022 period, during which the tax was set to escalate at CPI + 2% for Scope 1 emitters. However, in the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies that limit agents' carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the carbon trading regulations thereof were only finalised in December 2020, pointing to a delay in the establishment of the CTS.
Relatedly, emitters in South Africa are not yet able to benefit from the 95% reduction in the effective carbon tax rate from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and constructed from the beginning as new, tailor-made systems in countries such as France, Australia and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the JSE and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. Therefore, the carbon trading scheme will not be constructed from scratch. At the same time, carbon will be treated as a commodity in order to align with the existing institutional and infrastructural capacity. This explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important because South Africa is an innovator in carbon trading, and the successful trading of carbon credits on the JSE will lead to imitation by early adopters first, followed by the middle majority thereafter.
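The tax arithmetic described above (a 95% allowance cutting R120/tCO2e to R6/tCO2e, and CPI-linked escalation) can be sketched as follows; the functions are illustrative helpers, not part of the Act:

```python
def effective_carbon_tax(headline_rate=120.0, allowance_fraction=0.95):
    """Effective rate per tCO2e after carbon allowances; with the maximum
    95% allowance, R120/tCO2e falls to R6/tCO2e as described above."""
    return headline_rate * (1 - allowance_fraction)

def escalate(rate, cpi, premium=0.0):
    """One year of rate escalation: CPI plus a premium (premium=0.02 in the
    first phase for Scope 1 emitters, 0.0 in the second phase)."""
    return rate * (1 + cpi + premium)
```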

Keywords: carbon trading scheme (CTS), Johannesburg stock exchange (JSE), carbon tax act 15 of 2019, South Africa

Procedia PDF Downloads 40
388 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure in the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa; failing that, a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first-order model (without incorporation of τ) and a second-order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level.
Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s-1 observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s-1 recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has far-reaching impact on industrial bioprocesses in ensuring these systems are not transport-limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential for τ to be determined individually for each set of process conditions.
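A sketch of why ignoring the probe lag under-predicts KLa, assuming the standard first-order probe model in series with first-order oxygen transfer; only KP = 0.024 s-1 comes from the abstract, while the "true" KLa here is an assumed example value:

```python
import math

def probe_response(t, kla, kp, c_sat=1.0):
    """Second-order model: DO probe reading after a step change in sparge-gas
    oxygen, i.e. first-order oxygen transfer (kla) observed through a
    first-order probe lag (kp = 1/tau)."""
    return c_sat * (1 - (kla * math.exp(-kp * t) - kp * math.exp(-kla * t)) / (kla - kp))

def first_order_kla(t63):
    """The first-order model ignores the lag: C(t) = C_sat*(1 - exp(-kla*t)),
    so kla is read off as 1/t63, the time to reach 63.2% of saturation."""
    return 1.0 / t63

# kp = 0.024 1/s is the minimum probe constant reported above; true_kla is an
# assumed example, not a measurement from the study.
true_kla, kp = 0.05, 0.024
t63 = 0.0
while probe_response(t63, true_kla, kp) < 1 - math.exp(-1):
    t63 += 0.1
apparent_kla = first_order_kla(t63)
# apparent_kla < true_kla: ignoring the probe lag under-predicts KLa,
# consistent with the up-to-50% error reported in the abstract.
```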

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 247
387 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder

Authors: Kseniya Gladun

Abstract:

Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, eliciting pleasant or unpleasant impressions. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) for 20 min using an EEG amplifier "Encephalan" (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. The electrodes placed on the left and right mastoids served as joint references under unipolar montage. EEG was recorded from the following sites: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4) and occipital (O1-O2). Subjects were passively touched with 4 types of tactile stimuli on the left wrist. The stimuli were presented at a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen for being the most "pleasant," "rough," "prickly" and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, as cognitive tactile stimulation, letters (usually from the patient's name) traced with a finger ("recognizable"). To designate stimulus onset and offset, we marked when each touch began and ended; the stimulation was manual, and synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (Mathwork Inc.). Muscle artifacts were removed by manual data inspection. The response to tactile stimuli was significantly different between the children with ASD and the healthy children, and also depended on the type of tactile stimulus and the severity of ASD.
The amplitude of the alpha rhythm increased in the parietal region in response to the pleasant stimulus only; for the other stimulus types ("rough," "prickly," "recognizable") no amplitude difference was observed. Correlation dimension D2 was higher in healthy children than in children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform analysis revealed changes in the frequency of the theta rhythm, compared with healthy participants, only for rough tactile stimulation and only in the right parietal area. Children with autism spectrum disorder and healthy children thus responded to tactile stimulation differently, with specific frequency distributions of the alpha and theta bands in the right parietal area. Our data therefore support the hypothesis that rsEEG may serve as a sensitive index of the altered neural activity caused by ASD. Children with autism have difficulty in distinguishing the emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").
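The Hilbert-transform analysis mentioned above rests on the analytic signal; a self-contained sketch using a slow O(n^2) DFT for clarity (the 250 Hz sampling rate matches the recording above, but the signal here is synthetic):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform, O(n^2); adequate for short epochs."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse transform matching dft()."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def analytic_signal(x):
    """Hilbert-transform analytic signal for an even-length real signal:
    zero the negative frequencies, double the positive ones, then
    inverse-transform. |z| gives the instantaneous amplitude and the phase
    of z the instantaneous frequency used in theta-band analyses."""
    n = len(x)
    assert n % 2 == 0, "sketch assumes an even number of samples"
    X = dft(x)
    gain = [1.0] + [2.0] * (n // 2 - 1) + [1.0] + [0.0] * (n // 2 - 1)
    return idft([Xj * g for Xj, g in zip(X, gain)])

# Synthetic 6 Hz (theta-band) cosine sampled at 250 Hz, as in the recording
# above; a pure tone should have an instantaneous amplitude of ~1 throughout.
fs, f_theta = 250, 6
x = [math.cos(2 * math.pi * f_theta * k / fs) for k in range(fs)]
envelope = [abs(z) for z in analytic_signal(x)]
```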

Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography

Procedia PDF Downloads 229
386 Frailty and Quality of Life among Older Adults: A Study of Six LMICs Using SAGE Data

Authors: Mamta Jat

Abstract:

Background: Increased longevity has resulted in an increase in the percentage of the global population aged 60 years or over. With this "demographic transition" towards ageing, an "epidemiologic transition" is also taking place, characterised by a growing share of non-communicable diseases in the overall disease burden. Many older adults are therefore ageing with chronic disease and high levels of frailty, which often result in lower quality of life. Although frailty may be increasingly common in older adults, preventing, or at least delaying, the onset of late-life adverse health outcomes and disability is necessary to maintain the health and functional status of the ageing population. This study uses SAGE data to assess levels of frailty, its socio-demographic correlates and its relation to quality of life in the LMICs of India, China, Ghana, Mexico, Russia and South Africa in a comparative perspective. Methods: The data come from the multi-country Study on Global AGEing and Adult Health (SAGE), which consists of nationally representative samples of older adults in six low- and middle-income countries (LMICs): China, Ghana, India, Mexico, the Russian Federation and South Africa. For our study purpose, only respondents aged 50 years and over were considered. A logistic regression model was used to assess the correlates of frailty. Multinomial logistic regression was used to study the effect of frailty on quality of life (QOL), controlling for socio-economic and demographic correlates. Results: Among all the countries, India has the highest mean frailty in males (0.22) and females (0.26), and China the lowest in males (0.12) and females (0.14). The odds of being frail increase with age across all the countries. In India, China and Russia the chances of frailty are higher among rural older adults, whereas in Ghana, South Africa and Mexico rural residence is protective against frailty.
Among all countries, China has the highest percentage (71.46%) of frail people with low QOL, whereas Mexico has the lowest (36.13%). The risk of having low or middle QOL is significantly (p<0.001) higher among frail elderly than among non-frail elderly across all countries, controlling for socio-demographic correlates. Conclusion: Women and older age groups have higher frailty levels than men and younger adults in LMICs. The mean frailty scores demonstrated a strong inverse relationship with education and income gradients, with lower levels of education and wealth showing higher levels of frailty. These patterns are consistent across all LMICs. These data support a significant role of frailty, with all other influences controlled, in low QOL as measured by the WHOQOL index. Future research needs to build on this evolving concept of frailty in an effort to improve quality of life for the frail elderly population in LMIC settings.
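The mean frailty scores reported above (e.g. 0.22 for Indian males) are averages of a deficit-accumulation index; a minimal sketch of such an index (the exact deficit list used by SAGE is not reproduced here):

```python
def frailty_index(deficits):
    """Deficit-accumulation frailty index: the fraction of measured health
    deficits that are present (0 = none, 1 = all). Mean values such as the
    0.22/0.26 reported for India are averages of a score of this kind."""
    if not deficits:
        raise ValueError("at least one deficit indicator is required")
    return sum(1 for d in deficits if d) / len(deficits)
```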

Keywords: ageing, elderly, frailty, quality of life

Procedia PDF Downloads 262
385 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students' first language in second language classrooms, has long been a controversial topic in the area of language teaching and second language acquisition. While it has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, with significant implications for language teaching and learning in general and for language pedagogy at the tertiary level in Vietnam in particular. This study examines the accounts teachers articulated for their code switching practices in content-based Business English classes in Vietnam. Data were collected from five teachers through stimulated recall interviews, facilitated by video data, to garner the teachers' cognitive reflections and allow them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature suggests that when participants are provided with a large number of stimuli or cues, they re-experience the original situation in their imagination with great accuracy. This technique can also provide a valuable "insider" perspective on the phenomenon under investigation, complementing the researcher's "outsider" observation, and it creates a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Participants can also be empowered by this technique, as they can raise their own concerns and discuss instances they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers supported the use of code switching in their pedagogical practices.
In particular, as a pedagogical resource, the teachers saw code switching to the L1 as playing a key role in facilitating the students' comprehension of both content knowledge and the target language. They believed the use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role code switching plays in activating students' schematic language and content knowledge, encouraging retention of and interest in learning, and promoting a positive affective environment in the classroom. The teachers perceived that their use of code switching to the L1 helps them meet the students' language needs, prepares the students for their study in subsequent courses, and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 414
384 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to support learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both the accuracy and the battery usage of pervasive devices, we present a case study of remote location viewing and how the application can be utilized to support mobile learning in situ using an existing scenario. Through the case study we introduce a new, innovative application, Mobipeek, based around a request/response protocol for the viewing of a remote location, and explore how this can apply both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild of plants, vegetation and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help to foster students' active participation while helping them to understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with other peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants.
Remote location viewing can be used to access images in a specific location, the choice of which will depend on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to implementing Mobipeek, we analyse this application and a subset of other possible solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of "gamification" in an attempt to further the engagement derived from such a tool and to encourage usage. We conclude by identifying limitations, both from a technical and from a mobile learning perspective.
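A sketch of the kind of bounds check a Mobipeek-style request/response protocol implies, matching a would-be responder against a request's distance and time constraints (the field names are illustrative; the paper does not publish the protocol schema):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_bounds(request, user_lat, user_lon, now):
    """True if a user may respond to a request: inside the distance
    constraint and inside the time window. Field names are hypothetical."""
    close_enough = haversine_m(request["lat"], request["lon"],
                               user_lat, user_lon) <= request["max_distance_m"]
    in_window = request["start"] <= now <= request["end"]
    return close_enough and in_window
```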

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 269
383 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding the convolved outputs of each inception unit to that unit's input. Moreover, in order to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise.
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation, with the residual connections to inception units synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
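The WSI-partitioning stage of the pipeline can be sketched as a tile-coordinate generator (the tile and stride sizes are illustrative, not taken from the paper):

```python
def tile_coordinates(width, height, tile=256, stride=256):
    """Top-left (x, y) coordinates of the fixed-size tiles a whole-slide
    image (or its detected region of interest) is partitioned into for
    per-tile CNN classification. Tiles that would overhang the image edge
    are skipped."""
    if width < tile or height < tile:
        return []
    xs = range(0, max(width - tile, 0) + 1, stride)
    ys = range(0, max(height - tile, 0) + 1, stride)
    return [(x, y) for y in ys for x in xs]
```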

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 108
382 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR

Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman

Abstract:

Papaya (Carica papaya L., Caricaceae) is a tree mainly cultivated for its fruit in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruit, its leaves, seeds, and latex have also been used traditionally for treating diseases, and are reported to possess anti-cancer and anti-malaria properties. Its leaves contain various classes of chemical compounds, such as alkaloids, flavonoids and phenolics; clitorin and manghaslin are among the major flavonoids present. Thus, the aim of this study is to quantify the purity of these isolated compounds (clitorin and manghaslin) using quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for the juice extraction procedure, and the juice was subsequently freeze-dried to obtain a dark green powdered extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after a one-hour run. Fraction 6 and fraction 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan), with deuterated methanol as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol includes not only NMR measurement but also sample preparation, providing higher precision and accuracy than other qNMR methods.
The 90° pulse length and the T1 relaxation times for the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00–6.89 ppm) and the singlet signal (18H) arising from BTMSB (0.63–1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively; further purification is needed to increase them. This finding demonstrates the use of qNMR for the quality control and standardization of plant extracts, and the method can be applied to NMR fingerprinting of other plant-based products with good reproducibility, including cases where commercial standards are not readily available.
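Internal-standard qNMR purity determinations such as AQARI rest on the following relation between integrals, proton counts, molar masses and weighed masses; the numbers in the test below are illustrative, not the paper's data:

```python
def qnmr_purity(i_analyte, i_std, n_analyte, n_std,
                m_analyte, m_std, w_analyte, w_std, purity_std=1.0):
    """Internal-standard qNMR purity: the ratio of the analyte and reference
    integrals (i), normalized by the number of protons under each signal (n),
    molar masses (m), and weighed sample masses (w), scaled by the certified
    purity of the reference standard. Returns a fraction (1.0 = 100% pure)."""
    return (i_analyte / i_std) * (n_std / n_analyte) \
        * (m_analyte / m_std) * (w_std / w_analyte) * purity_std
```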

Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography

Procedia PDF Downloads 467
381 Evaluation of Housing Quality in the Urban Fringes of Ibadan, Nigeria

Authors: Amao Funmilayo Lanrewaju

Abstract:

The study examined the socio-economic characteristics of the residents in selected urban fringes of Ibadan, identified and examined the housing and neighbourhood characteristics, and evaluated housing quality in the study area. It analysed the relationship between the socio-economic characteristics of the residents, the housing and neighbourhood characteristics, and housing quality in the study area, with a view to providing information that would enhance housing quality in the urban fringes of Ibadan. Primary and secondary data were used for the study. A survey of eleven purposively selected communities from the Oluyole and Egbeda local government areas in the urban fringes was conducted through questionnaire administration and expert rating by five independent assessors (qualified architects) using penalty scoring within similar time-frames. The study employed a random sampling method to select a sample size of 480 houses, representing 5% of the sampling frame of 9,600 houses. The respondent in the first house was selected randomly, and subsequently every 20th house on the streets involved was systematically selected for questionnaire administration, usually one household head per building. The structured questionnaire elicited information on the socio-economic characteristics of the residents, housing and neighbourhood characteristics, factors affecting housing quality, and housing quality in the study area. Secondary data obtained for the study included the land-use plan of Ibadan from previous publications, housing demographics, population figures from relevant institutions and other published materials. The data collected were analysed using descriptive and inferential statistics such as frequency distribution, cross-tabulation, correlation analysis, Analysis of Variance (ANOVA) and the Relative Importance Index (RII).
The survey revealed that respondents from the Yoruba ethnic group constituted the majority, comprising 439 (91.5%) of the 480 respondents from the two selected local government areas. It also revealed that the most common tenure status in the two local government areas was self-ownership (234, 48.8%), while 44.0% of the respondents acquired their houses through personal savings. Cross-tabulation indicated that the majority (67.1%, 322 out of 480) of the respondents were low-income earners. The study showed that both housing and neighbourhood services were not adequately provided across neighbourhoods in the study area. Correlation analysis indicated a significant relationship between respondents' socio-economic status and general housing quality (r=0.46; p-value of 0.01 < 0.05). The ANOVA indicated that the relationship between the socio-economic characteristics of the residents and the housing and neighbourhood characteristics in the study area was significant (F=18.289, p=0.00; coefficient of determination R2=0.192). The findings, however, revealed no significant difference between the results of the user-based evaluation and the expert rating. The study concluded that housing quality in the urban fringes of Ibadan is generally poor and that the socio-economic status of the residents significantly influences housing quality.
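The Relative Importance Index (RII) used in the analysis is conventionally computed as RII = ΣW / (A × N); a minimal sketch of that convention:

```python
def relative_importance_index(ratings, scale_max=5):
    """Relative Importance Index: RII = sum(W) / (A * N), where W are the
    respondents' Likert ratings, A is the highest possible rating, and N is
    the number of respondents. Values near 1 mark highly important factors."""
    if not ratings:
        raise ValueError("ratings must be non-empty")
    return sum(ratings) / (scale_max * len(ratings))
```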

Keywords: housing quality, urban fringes, economic status, poverty

Procedia PDF Downloads 421
380 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, namely high accuracy and robustness across operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), the NO2/NOx ratio, etc.
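The dependence of NOx on burned-gas temperature and O2 concentration noted above is often illustrated with the single-step thermal (Zeldovich) rate expression; this textbook simplification is not the authors' calibrated semi-empirical correlation:

```python
import math

def thermal_no_rate(t_burn, o2, n2):
    """Single-step thermal (Zeldovich) NO formation rate, a textbook
    simplification: d[NO]/dt = 6e16 * T**-0.5 * exp(-69090/T) * [O2]**0.5 * [N2],
    with concentrations in mol/cm^3 and T in kelvin. It shows why burned-zone
    temperature and O2 concentration dominate NOx formation; it is not the
    authors' calibrated model."""
    return 6e16 / math.sqrt(t_burn) * math.exp(-69090.0 / t_burn) * math.sqrt(o2) * n2
```

The steep exponential in temperature is the reason in-cylinder burned-zone temperature, rather than a mean cylinder temperature, is the key input to the semi-empirical model.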

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 95
379 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance

Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow

Abstract:

The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g., splints and casts), which can often increase the chance of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient’s access to healthcare while decreasing the number of visits to the patient’s clinician. The sensor device may thereby improve the quality of the patient’s care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient’s care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient’s lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), the anterior side of each thigh and tibia, and the dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors in a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient’s gait.
The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each accurately captured the patient’s movements. Further, the sensors were able to detect gait abnormalities caused by the addition of a small amount of weight (4.5-9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary in-person visits.
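As a rough illustration of the analysis step that projects segment orientations onto the sagittal plane, the sketch below estimates static segment pitch from accelerometer gravity components and takes the thigh-shank difference as a knee flexion angle. The function names are assumptions for this sketch, and real processing (as in the MATLAB pipeline above) would also fuse gyroscope data and calibrate sensor mounting:

```python
import math

def pitch_deg(ax, ay, az):
    """Static pitch of a sensor from gravity components (accelerometer only)."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def knee_angle(thigh_acc, shank_acc):
    """Sagittal-plane knee flexion as the difference of segment pitches."""
    return pitch_deg(*thigh_acc) - pitch_deg(*shank_acc)

# Thigh tilted 30 deg forward, shank vertical -> ~30 deg knee flexion.
thigh = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
shank = (0.0, 0.0, 1.0)
print(round(knee_angle(thigh, shank), 1))  # 30.0
```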

Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation

Procedia PDF Downloads 259
378 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a central role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, on the other hand, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy.
Radiotherapy, in turn, finds applications in non-cancerous conditions such as benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 47
377 Cognitive Behaviour Hypnotherapy as an Effective Intervention for Nonsuicidal Self Injury Disorder

Authors: Halima Sadia Qureshi, Urooj Sadiq, Noshi Eram Zaman

Abstract:

The goal of this study was to see how cognitive behavior hypnotherapy (CBH) affects nonsuicidal self-injury (NSSI). DSM-5 invites researchers to explore NSSI disorder, newly added under the chapter 'Conditions for Further Study'. To date, no empirically sound intervention has been proven effective for NSSI as defined in DSM-5. Nonsuicidal self-injury is defined by DSM-5 as harming oneself physically, without suicidal intention. Around 7.6% of teenagers are expected to fulfill the NSSI disorder criteria. Adolescents, particularly university students, account for around 87 percent of self-harm studies. Furthermore, one of the risks associated with NSSI is an increased chance of suicide attempts, and in most cases, the cycle repeats. The emotional and psychological components of the illness might lead to suicide, either intentionally or unintentionally. According to research done at a Pakistani military hospital, over 80% of participants had no intention of committing suicide. Furthermore, it has been determined that improvements in NSSI prevention and intervention are necessary as a stand-alone strategy. The quasi-experimental study took place in Islamabad and Rawalpindi, Pakistan, from May 2019 to April 2020 and included students aged 18 to 25 years from several institutions and colleges in the twin cities. According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, the individuals were assessed for >2 episodes without suicidal intent using the intentional self-harm questionnaire. The Clinician-Administered Nonsuicidal Self-Injury Disorder Index (CANDI) was used to assess individuals for NSSI disorder. The Symptom Checklist-90 (SCL-90) was used to screen the participants for differential diagnosis. The McLean Screening Instrument for Borderline Personality Disorder (MSI-BPD) was used to rule out BPD cases. From a screening sample of 600, n=106 participants were selected.
They were further screened against the inclusion and exclusion criteria, and a total of n=71 were split into two groups: intervention and control. The intervention group received cognitive behavior hypnotherapy for the next three months, whereas the control group received no treatment. After three months, both groups underwent post-assessment, and after a further three-month period, a follow-up assessment was conducted. The groups were evaluated, and SPSS 25 was used to analyse the data. Each of the two groups comprised 30 (50 percent) of the 60 participants analysed. There were 41 males (68 percent) and 19 females (32 percent) in all. The bulk of the participants were between the ages of 21 and 23 (48 percent). Self-harm events were reported by 48 (80 percent) of the students, and suicide ideation was found in 6 (10 percent). Effect sizes were d=4.90 between pre- and post-intervention values, d=0.32 between post-intervention and follow-up assessment values, and d=5.42 between pre-intervention and follow-up values. The comparison of the treatment and no-treatment groups revealed that treatment was more successful than no treatment, F(1, 58) = 53.16, p < .001. The results reveal that the CBH treatment manual is effective for nonsuicidal self-injury disorder.
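The pre/post and follow-up effect sizes (d) reported above can be reproduced in form with a paired-samples Cohen's d. The scores below are hypothetical values for illustration, not the study's data:

```python
import statistics

def cohens_d_paired(pre, post):
    """Cohen's d for paired designs: mean change / SD of the change scores."""
    diffs = [a - b for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Hypothetical self-injury symptom scores before and after therapy.
pre = [20, 18, 22, 19, 21]
post = [10, 9, 12, 8, 11]
print(round(cohens_d_paired(pre, post), 2))  # 14.14
```

Very large d values, as reported in the abstract, arise when the change is large and consistent across participants relative to its spread.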

Keywords: NSSI, nonsuicidal self-injury disorder, self-harm, self-injury, cognitive behaviour hypnotherapy, CBH

Procedia PDF Downloads 161
376 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. A further aim was to determine the resistance ratios of H. pylori strains, isolated from gastric biopsy material cultures, against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, and Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity rates of the methods used in the study were calculated against these two gold standards.
Results: A total of 266 patients aged 16-83 years, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were found to be culture positive, and 157 were positive by both H and RUT. 179 patients were found positive by both GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection. The sensitivity and specificity rates of the methods were found as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, between C and GenoType® HelicoDR, and between C and Seeplex® H. pylori-ClaR ACE Detection (r=0.644, p=0.000; r=0.757, p=0.000; and r=0.757, p=0.000, respectively). Of all 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and 6 cases with Seeplex® H. pylori-ClaR ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection were the most sensitive of all the diagnostic methods investigated (C, H, and RUT).
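The sensitivity and specificity figures quoted above follow from ordinary 2x2 diagnostic counts. In the sketch below the counts are reconstructed approximately from the reported rates (144 culture-positive and 122 culture-negative patients, with the molecular tests flagging all positives and 87 of the negatives), so they are illustrative rather than the study's tabulated data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 diagnostic counts:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Approximate counts vs. the culture gold standard.
sens, spec = sens_spec(tp=144, fn=0, tn=87, fp=35)
print(sens, round(spec, 3))  # 1.0 0.713
```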

Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex ® H. pylori -ClaR- ACE Detection, antimicrobial resistance

Procedia PDF Downloads 138
375 Comparative Assessment of Heavy Metals Influence on Growth of Silver Catfish (Chrysichthys nigrodigitatus) and Tilapia Fish (Oreochromis niloticus) Collected from Brackish and Freshwater, South-West, Nigeria

Authors: Atilola O. Abidemi-Iromini, Oluayo A. Bello-Olusoji, Immanuel A. Adebayo

Abstract:

Ecological studies were carried out in Asejire Reservoir (AR) and Lagos Lagoon (LL), Southwest Nigeria, from January 2012 to December 2013 to determine the health status of Chrysichthys nigrodigitatus (CN) and Oreochromis niloticus (ON). Fish samples were collected every month and separated by sex, and growth parameters (length (cm), weight (g), isometric index, and condition factor) were measured. Heavy metal concentrations (lead (Pb), iron (Fe), zinc (Zn), copper (Cu), and chromium (Cr), in ppm) were also determined, as were bacterial occurrences (load and prevalence) on the fish skins, gills, and intestines in the two ecological zones. The male:female ratio of the fish collected was within the normal aquatic range (1:1+). The growth assessment revealed no significant difference in length or weight for O. niloticus between locations, but a significant difference in weight occurred for C. nigrodigitatus between locations, with a higher weight (196.06 ±0.16 g) in Lagos Lagoon. The highest condition factor (5.25) was recorded in Asejire Reservoir O. niloticus (ARON), and the lowest (1.64) in Asejire Reservoir C. nigrodigitatus (ARCN); the latter reflects a negative allometric value, which is normal in Bagridae species because they increase more in length than in weight, in contrast to the growth status of the Cichlidae. Normal growth rates (K > 1) occurred between sexes, with male fish having higher K-factors than females within and between locations and species, except for female C. nigrodigitatus, which had a higher condition factor (K = 1.75) than male C. nigrodigitatus (K = 1.54) in Asejire Reservoir. The highest isometric value (3.05) was recorded in Asejire Reservoir O. niloticus and the lowest in Lagos Lagoon C. nigrodigitatus. Male O. niloticus from Asejire Reservoir had the highest isometric value, and O. niloticus had a growth exponent ranging between isometric (b = 3) and positive allometric (b > 3), denoting the robustness of the fish to grow more in weight than in length, while C. nigrodigitatus showed negative allometry (b < 3), indicating that the fish add more length than weight during growth. The condition factors and isometric values obtained are species-specific, and environmental influence, food availability, or reproductive factors may also contribute. The heavy metal concentrations in fish flesh revealed that Zn (6.52 ±0.82) was highest and Cr (0.01±0.00) lowest for O. niloticus in Asejire Reservoir. In Lagos Lagoon, O. niloticus flesh was highest in Zn (4.71±0.25) and lowest in Pb (0.01±0.00). For Lagos Lagoon C. nigrodigitatus, Zn (9.56±0.96) was highest and Cr (0.06±0.01) lowest; for Asejire Reservoir C. nigrodigitatus, Zn (8.26 ±0.74) was highest and Cr (0.08±0.00) lowest. Overall, Zn was the top-ranked metal among the species.
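The growth exponent b and Fulton's condition factor K discussed above come from the standard length-weight relationship W = aL^b, fitted on log-transformed data. A minimal sketch on synthetic measurements (not the study's data), where b = 3 corresponds to isometric growth:

```python
import math

def fit_length_weight(lengths_cm, weights_g):
    """Fit W = a * L^b by least squares on log10(W) = log10(a) + b*log10(L)."""
    xs = [math.log10(l) for l in lengths_cm]
    ys = [math.log10(w) for w in weights_g]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = 10 ** (my - b * mx)
    return a, b

def condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L^3."""
    return 100.0 * weight_g / length_cm ** 3

# Synthetic isometric fish: W = 0.01 * L^3.
a, b = fit_length_weight([10.0, 15.0, 20.0], [10.0, 33.75, 80.0])
print(round(b, 3), round(condition_factor(80.0, 20.0), 2))  # 3.0 1.0
```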

Keywords: Oreochromis niloticus, growth status, Chrysichthys nigrodigitatus, environments, heavy metals

Procedia PDF Downloads 100
374 Relationship between Iron-Related Parameters and Soluble Tumor Necrosis Factor-Like Weak Inducer of Apoptosis in Obese Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Iron is physiologically essential. However, it also participates in the catalysis of free radical formation reactions, and its deficiency is associated with amplified health risks. This trace element is also linked with another physiological process related to cell death, apoptosis: both iron deficiency and iron overload are closely associated with apoptosis. Soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) has the ability to trigger apoptosis and plays a dual role in the physiological versus pathological inflammatory responses of tissues. The aim of this study was to investigate the status of these parameters, as well as the associations among them, in children with obesity, a low-grade inflammatory state. The study was performed on groups of children with normal body mass index (N-BMI) and with obesity; forty-three children were included in each group. Based upon the age- and sex-adjusted BMI percentile tables prepared by the World Health Organization, children whose values lay between the 15th and 85th percentiles were included in the N-BMI group, and children whose BMI percentile values were between the 95th and 99th comprised the obese (OB) group. Institutional ethics committee approval and informed consent forms were obtained prior to the study. Anthropometric measurements (weight, height, waist circumference, hip circumference, head circumference, neck circumference) and blood pressure values (systolic and diastolic) were recorded. Routine biochemical analyses, including serum iron, total iron binding capacity (TIBC), transferrin saturation percentage (Tf Sat %), and ferritin, were performed. sTWEAK levels were determined by enzyme-linked immunosorbent assay. Study data were evaluated using appropriate statistical tests performed with SPSS. Serum iron levels were 91±34 µg/dl and 75±31 µg/dl in N-BMI and OB children, respectively.
The corresponding values for TIBC, Tf Sat %, and ferritin were 265 µg/dl vs 299 µg/dl, 37.2±19.1% vs 26.7±14.6%, and 41±25 ng/ml vs 44±26 ng/ml. In the N-BMI and OB groups, sTWEAK concentrations were measured as 351 ng/L and 325 ng/L, respectively (p>0.05). Correlation analysis revealed significant associations between sTWEAK levels and the iron-related parameters (p<0.05), except ferritin. In conclusion, iron contributes to apoptosis. Children with iron deficiency have a decreased apoptosis rate in comparison with healthy children, and sTWEAK is an inducer of apoptosis. Obese children had lower levels of both iron and sTWEAK. Low levels of sTWEAK are associated with several types of cancers and poor survival. Although an iron deficiency state was not observed in this study, the correlations detected between decreased sTWEAK and decreased iron as well as Tf Sat % values were valuable findings, which point to decreased apoptosis. This may induce a proinflammatory state, potentially leading to malignancies in the future lives of obese children.
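The correlation analysis mentioned above reduces to Pearson's r between sTWEAK and each iron-related parameter. A minimal sketch with illustrative values (not the study's measurements) that show a positive association:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Illustrative serum iron [µg/dl] vs sTWEAK [ng/L] pairs (perfectly linear).
iron = [60.0, 75.0, 90.0, 105.0, 120.0]
stweak = [300.0, 320.0, 340.0, 360.0, 380.0]
print(round(pearson_r(iron, stweak), 2))  # 1.0
```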

Keywords: apoptosis, children, iron-related parameters, obesity, soluble tumor necrosis factor-like weak inducer of apoptosis

Procedia PDF Downloads 112
373 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score

Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah

Abstract:

Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring, and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield an expected likelihood of permanent disability or a fatal outcome following burn injuries. The study was designed to identify the risk factors of mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details, and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results of the study show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases.
In addition, 8% of patients sustained burns over more than 60% of total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure, or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27, and that of the FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients; however, the FLAMES score could be applied with a higher level of accuracy.
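The ROC comparison above rests on the Mann-Whitney interpretation of the AUC: the probability that a randomly chosen deceased patient's risk score exceeds a randomly chosen survivor's (ties counting half). A small sketch with hypothetical risk scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the pairwise win rate of positive over negative scores
    (Mann-Whitney U equivalence); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical FLAMES-style scores: deceased patients vs survivors.
deceased = [2.1, 0.5, 1.8]
survivors = [-4.0, -2.5, 0.5, -1.0]
print(round(auc(deceased, survivors), 2))  # 0.96
```

An AUC near 0.95, as reported for FLAMES, means the score ranks almost every fatal case above almost every survivor.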

Keywords: BOBI, burns, FLAMES, scoring systems, outcome

Procedia PDF Downloads 307
372 Initial Resistance Training Status Influences Upper Body Strength and Power Development

Authors: Stacey Herzog, Mitchell McCleary, Istvan Kovacs

Abstract:

Purpose: Maximal strength and maximal power are key athletic abilities in many sport disciplines. In recent years, velocity-based training (VBT) with relatively high resistance (75-85% 1RM) has been popularized in preparation for powerlifting and various other sports. The purpose of this study was to identify differences between beginner/intermediate and advanced lifters’ push/press performances after a heavy-resistance bench press (BP) training program. Methods: A six-week, three-workouts-per-week program was administered to 52 young, physically active adults (age: 22.4±5.1; 12 female). The majority of the participants (84.6%) had prior experience in bench pressing. Typical workouts began with BP using 75-95% 1RM in the 1-5 repetition range; the sets in the lower part of the range (75-80% 1RM) were also performed with a velocity focus. The BP sets were followed by seated dumbbell presses and six additional upper-body assistance exercises. Pre- and post-tests were conducted on five test exercises: one-repetition maximum BP (1RM), the calculated relative strength index BP/BW (RSI), four-repetition maximal-effort dynamic BP for peak concentric velocity with 80% 1RM (4RV), four-repetition ballistic push-ups for height (BPU), and a seated medicine ball toss for distance (MBT). For analytic purposes, the participant group was divided into two subgroups: self-indicated beginner or intermediate initial resistance training status (BITS) [n=21, age: 21.9±3.6; 10 female] and advanced initial resistance training status (ATS) [n=31, age: 22.7±5.9; 2 female]. Pre- and post-test results were compared within subgroups. Results: Paired-sample t-tests indicated significant within-group improvements in all five test exercises in both groups (p < 0.05). BITS improved 18.1 lbs. (13.0%) in 1RM, 0.099 (12.8%) in RSI, 0.133 m/s (23.3%) in 4RV, 1.55 in. (27.1%) in BPU, and 1.00 ft. (5.8%) in MBT, while the ATS group improved 13.2 lbs.
(5.7%) in 1RM, 0.071 (5.8%) in RSI, 0.051 m/s (9.1%) in 4RV, 1.20 in. (13.7%) in BPU, and 1.15 ft. (5.5%) in MBT. Conclusion: While the two training groups had different initial resistance training backgrounds, both showed significant improvements in all test exercises. As expected, the beginner/intermediate group displayed better relative improvements in four of the five test exercises. However, the medicine ball toss, which had the lightest resistance among the tests, showed similar relative improvements between the two groups. These findings relate to two important training principles: specificity and transfer. The ATS group had more specific experiences with heavy-resistance BP. Therefore, fewer improvements were detected in their test performances with heavy resistances. On the other hand, while the heavy resistance-based training transferred to increased power outcomes in light-resistance power exercises, the difference in the rate of improvement between the two groups disappeared. Practical applications: Based on initial training status, S&C coaches should expect different performance gains in maximal strength training-specific test exercises. However, the transfer from maximal strength to a non-training-specific performance category along the F-v curve continuum (i.e., light resistance and high velocity) might not depend on initial training status.
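The within-group comparisons above rely on paired-sample t-tests over pre/post scores, with group gains summarized as percentages. A minimal sketch with hypothetical 1RM values (not the study's data):

```python
import math
import statistics

def paired_t(pre, post):
    """Paired-sample t statistic for post - pre gains."""
    diffs = [b - a for a, b in zip(pre, post)]
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return statistics.mean(diffs) / se

def pct_gain(pre, post):
    """Group improvement as a percentage of the pre-test mean."""
    return 100.0 * (statistics.mean(post) - statistics.mean(pre)) / statistics.mean(pre)

# Hypothetical bench press 1RM values [lbs] before and after the program.
pre = [100.0, 110.0, 120.0, 130.0]
post = [113.0, 124.0, 135.0, 146.0]
print(round(paired_t(pre, post), 1), round(pct_gain(pre, post), 1))
```

Consistent gains across lifters give a large t even for a modest sample, which is why both subgroups reached significance despite different improvement rates.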

Keywords: exercise, power, resistance training, strength

Procedia PDF Downloads 37
371 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies for satellite-based agricultural monitoring support operational decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum, but time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology was created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops of Ukraine was created, and crop spectral signatures are calculated, with preliminary removal of row spacing, cloud cover, and cloud shadows, in order to construct time series of crop growth characteristics. The obtained data are used for tracking grain crop growth and for timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models for crop yield forecasting are created as linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are evaluated with an accuracy of up to 95%.
The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results support the conclusion that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also successfully separates soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
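The gap-filling step described above fuses radar-derived estimates into cloud-broken optical series. As a simplified stand-in for that fusion, the sketch below linearly interpolates cloud-masked dates (None) in an NDVI series; it assumes valid observations at both ends of the series:

```python
def fill_gaps(series):
    """Linearly interpolate None gaps (cloud-masked dates) in a time series.
    Assumes the first and last entries are valid observations."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            left = max(k for k in known if k < i)
            right = min(k for k in known if k > i)
            frac = (i - left) / (right - left)
            out[i] = out[left] + frac * (out[right] - out[left])
    return out

# NDVI series with two cloud-masked dates.
print(fill_gaps([0.2, None, None, 0.5, 0.6]))
```

In the actual technology the masked values would instead be estimated from coincident Sentinel-1 radar observations, which are insensitive to cloud cover.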

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 419
370 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies

Authors: Manel Hammami, Gabriele Grandi

Abstract:

In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems, due to their known advantages over conventional H-bridge pulse-width-modulated (PWM) inverters: improved output waveforms, smaller filter size, lower total harmonic distortion (THD), higher output voltages, and others. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying-capacitor (FC), and cascaded H-bridge (CHB) converters. In both the NPC and FC configurations, the number of components drastically increases with the number of levels, which leads to a complex control strategy, high volume, and cost. By contrast, increasing the number of levels in the cascaded H-bridge configuration is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric multilevel inverter topologies have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six) and just one capacitor (as in the FCAH). CAH is becoming popular due to its simple, modular, and reliable structure, and it can be considered a retrofit that can be added in series to an existing H-bridge configuration in order to double the output voltage levels.
In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of asymmetric H-bridge multilevel inverters are considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the analysis of the DC current and voltage ripple for the DC source (i.e., the PV system). The peak-to-peak DC current and voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to a unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.
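The level-doubling idea above can be checked by enumerating switching states: a full H-bridge contributes {-Vdc, 0, +Vdc}, and the LDN half-bridge adds either 0 or +Vdc/2 (assuming, for this sketch, that its capacitor is regulated at Vdc/2), doubling three levels to six. A minimal enumeration:

```python
from itertools import product

def output_levels(vdc):
    """Distinct output voltages of a full H-bridge (-Vdc, 0, +Vdc) cascaded
    with a level-doubling half-bridge that adds 0 or +Vdc/2."""
    h_bridge = (-vdc, 0.0, vdc)
    ldn = (0.0, vdc / 2.0)
    return sorted({h + l for h, l in product(h_bridge, ldn)})

print(output_levels(100.0))  # [-100.0, -50.0, 0.0, 50.0, 100.0, 150.0]
```

Six distinct levels from six switches and a single capacitor is the component-to-level ratio that motivates the CAH topology discussed above.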

Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter

Procedia PDF Downloads 186