Search results for: inherent hijacking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 668

128 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models that accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are reported monthly, some quarterly, and some yearly. Thus, two methods of imputing missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to impute missing values and the implementation of time lag can significantly influence model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
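The lag-and-regress workflow described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline; the three features are invented stand-ins for macroeconomic series such as GDP, personal income, and population.

```python
import numpy as np

def make_lagged(features, target, lag):
    """Pair each month's target with the features observed `lag` months earlier."""
    if lag == 0:
        return features, target
    return features[:-lag], target[lag:]

# Synthetic monthly data: 120 months, 3 macro features (illustrative stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = np.empty(120)
y[6:] = X[:-6] @ true_w + 100.0  # price index reacts with a 6-month delay
y[:6] = 100.0                    # padding for the first months

# Align features and target for a 6-month lag, then fit ordinary least squares.
Xl, yl = make_lagged(X, y, lag=6)
A = np.hstack([Xl, np.ones((len(Xl), 1))])  # add an intercept column
w, *_ = np.linalg.lstsq(A, yl, rcond=None)
mae = np.abs(A @ w - yl).mean()
```

With the correct lag, the regression recovers the underlying coefficients almost exactly; choosing the wrong lag degrades the fit, which is the effect the study investigates.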

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 80
127 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body

Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker

Abstract:

This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by a Polish company, Aviation Artur Trendak. This gyroplane was studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent procedure in any kind of design. When scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic, and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, one should pay attention to how the values of the coefficients behave if certain criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number should be considered. It expresses the ratio of inertial to viscous forces. Since its numerator is the product of flow speed and the characteristic dimension (with a constant kinematic viscosity coefficient), the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients determined in this research depend on the real forces acting on the object, its characteristic dimension, the medium's speed, and variations in its density. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed in the T-1 low-speed wind tunnel (the diameter of its measurement volume is 1.5 m) with a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a low-speed, continuous-operation wind tunnel with an open test section. The research covered a number of selected speeds of undisturbed flow, i.e., V = 20, 30 and 40 m/s, corresponding to the Reynolds numbers (referred to 1 m) Re = 1.31∙10⁶, 1.96∙10⁶, and 2.62∙10⁶, for angles of attack ranging over -15° ≤ α ≤ 20°. Our research resulted in basic aerodynamic characteristics and revealed the impact of undisturbed flow speed on the correlation of aerodynamic coefficients as a function of the angle of attack of the gyroplane body.
If the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly impacted. Going from 20 m/s to 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if it is included. Within the same speed range, the pitching moment coefficient, Cmy, changes by -21.1% up to 7.3% if the angles of attack -15° and -10° are excluded, or by -142.8% up to 618.4% if they are included. These discrepancies in the coefficients of aerodynamic forces definitely need to be considered while designing the aircraft. For example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.
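The quoted Reynolds numbers follow directly from Re = V·L/ν. A quick check, assuming the kinematic viscosity of air near 20 °C (the paper does not state the exact value used):

```python
NU_AIR = 1.53e-5  # m^2/s, kinematic viscosity of air near 20 degC (assumed)

def reynolds(v, length=1.0, nu=NU_AIR):
    """Reynolds number Re = V * L / nu: the ratio of inertial to viscous forces."""
    return v * length / nu

# The three undisturbed-flow speeds used in the study, referred to L = 1 m.
re_values = {v: reynolds(v) for v in (20, 30, 40)}
```

The computed values land within about 0.3% of the paper's Re = 1.31∙10⁶, 1.96∙10⁶, and 2.62∙10⁶, consistent with an air viscosity near this assumed figure.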

Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel

Procedia PDF Downloads 370
126 Opportunities and Challenges in Midwifery Education: A Literature Review

Authors: Abeer M. Orabi

Abstract:

Midwives are seen as a key factor in returning birth care to a normal, woman-centered physiologic process. On the other hand, more needs to be done to increase every woman's access to professional midwifery care. Because of the nature of the midwifery specialty, mistakes in care arising from a lack of knowledge have the potential to affect a large proportion of the birthing population. The development, running, and management of midwifery educational programs should therefore follow international standards and come after a thorough community needs assessment. At the same time, the number of accredited midwifery educational programs needs to be increased so that larger numbers of midwives will be educated and qualified, and access to skilled midwifery care will be increased. Indeed, the selection of promising midwives is important for the successful completion of an educational program, achievement of the program's goals, and retention of graduates in the field. Further, the number of trained midwives teaching in midwifery education programs, their background, and their experience constitute some concerns in the higher education sector. Basically, preceptors and clinical sites are major contributors to the midwifery education process, as educational programs rely on them to provide clinical practice opportunities. In this regard, the selection of clinical training sites should be based on certain criteria to ensure their readiness for the intended training experiences. After that, communication, collaboration, and liaison between teaching faculty and field staff should be maintained. However, the shortage of clinical preceptors and the massive reduction in the number of practicing midwives, in addition to unmanageable workloads, act as significant barriers to midwifery education.
Moreover, the medicalized approach inherent in the hospital setting makes it difficult to practice the midwifery model of care, which emphasizes watchful waiting, non-interference in normal processes, and the judicious use of interventions. Furthermore, creating a motivating study environment is crucial for avoiding unnecessary withdrawal and improving retention in any educational program. It is well understood that research is an essential component of any profession for achieving its optimal goal and providing a foundation and evidence for its practices, and midwifery is no exception. Midwives have been playing an important role in generating their own research. However, the selection of novel, researchable, and sustainable topics that consider community health needs is also a challenge. In conclusion, ongoing education and research are the lifeblood of the midwifery profession in offering a highly competent and qualified workforce. However, many challenges are being faced, and barriers are hindering its improvement.

Keywords: barriers, challenges, midwifery education, educational programs

Procedia PDF Downloads 91
125 Coping Strategies Used by Pregnant Women in India to Overcome the Psychological Impact of COVID-19

Authors: Harini Atturu, Divyani Byagari, Bindhu Rani Thumkunta, Sahitya Bammidi, Manasa Badveli

Abstract:

Introduction: Biological, psychological, and social domains influence the outcomes of pregnancy. The COVID-19 pandemic had a significant effect on the psychological and social domains of pregnant women all over the world. Everyone has inherent coping mechanisms, which ultimately determine the actual impact of such external stimuli on the outcomes of pregnancy. This study aims to understand the coping strategies used by pregnant women to overcome the psychological impact of the first wave of the COVID-19 pandemic. Methods: Institutional ethics permission was sought. All pregnant women attending antenatal clinics in the institution during September 2020 were included in the study. The Brief-COPE, a self-rated questionnaire, was administered to understand the coping strategies used by them. The questionnaire consists of 28 questions that fall into 14 themes. These 14 themes were mapped onto four domains: Approaching Coping (APC) styles, Avoidant Coping (AVC) styles, Humor, and Religion. The results were analyzed using univariate and multivariate analysis. Factor analysis was performed to identify the themes that are most frequently used. Results: 162 pregnant women were included in the study. The majority of the women were aged between 18 and 30 (90.1%). 60.9% of the respondents were in their first pregnancy, and most were in the 2nd trimester (59.6%). The majority of them were living in the city (74%), belonged to the middle class (77.6%), and were not working (70.1%). 56 respondents (34.6%) reported that they had had contact with suspected or COVID-positive patients. Many were worried that their pregnancy might be complicated (43%), that their baby might contract COVID-19 (45%), and that their family members could contract COVID-19 during hospital visits for antenatal check-ups. 33.6% of women admitted to missing their regular antenatal check-ups because of the above concerns. All respondents used various coping strategies to overcome the psychological impact of COVID-19.
Out of the four coping strategies, participants scored highest on Religion, with a mean of 5.47±1.45, followed by Approaching Coping (APC) styles (5.13±1.25), Humor (4.59±2.07), and Avoidant Coping (AVC) styles (4.13±0.88). Religion as a coping strategy scored high for all respondents irrespective of age, parity, trimester, and social and employment status. Exploratory factor analysis revealed two cluster groups that explained 68% of the variance, with component 1 contributing 58.9% and component 2 contributing 9.13% of the variance. Humor, Acceptance, Planning, and Religion were the top four factors that showed strong loadings. Conclusion: Most of the pregnant women were worried about the negative impact of COVID-19 on the outcomes of their pregnancy. Religion and Approaching Coping styles seem to be the predominant coping strategies used by them. Interventionists and policymakers should consider these factors while providing support to such women.
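The theme-to-domain aggregation behind these scores can be sketched as follows. The grouping of the 14 Brief-COPE themes into four domains shown here is a commonly used one and is an assumption; the exact assignment used in this study is not given in the abstract.

```python
import statistics

# Assumed grouping of the 14 Brief-COPE themes into the study's four domains.
DOMAINS = {
    "approaching": ["active coping", "planning", "positive reframing", "acceptance",
                    "emotional support", "instrumental support"],
    "avoidant": ["denial", "substance use", "behavioral disengagement",
                 "self-distraction", "venting", "self-blame"],
    "humor": ["humor"],
    "religion": ["religion"],
}

def domain_scores(theme_scores):
    """Average the per-theme scores (each theme = sum of its two items,
    so each ranges 2-8) within each of the four domains."""
    return {domain: statistics.mean(theme_scores[t] for t in themes)
            for domain, themes in DOMAINS.items()}

# Hypothetical respondent: neutral on everything except a high Religion score.
example = {t: 4 for themes in DOMAINS.values() for t in themes}
example["religion"] = 8
scores = domain_scores(example)
```

A respondent scoring high only on the religion theme would, as in the study's sample, show Religion as the dominant domain.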

Keywords: coping strategies, pregnancy, COVID-19, brief-COPE

Procedia PDF Downloads 111
124 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU

Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman

Abstract:

Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, Family-Centred Care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to Substitute Decision Makers (most commonly parents). Most models of shared decision-making employ both the parents' and the medical team's perspectives but do not recognize the inherent asymmetry of information and experience – asking parents to act like physicians in evaluating technical data while encouraging physicians to refrain from strong medical opinions and proposals. They also do not fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, how to mitigate disagreement. Introducing a mild form of paternalism can harness the unique skill sets both parents and clinicians bring to shared decision-making and ultimately work towards decision-making in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form of paternalism is known as Beneficent Paternalism; it justifies our encouragement for physicians to root down in their own expertise and propose treatment plans informed by medical knowledge, the standards of care, and the parents' values. This does not mean that we forget that paternalism was historically justified on 'beneficent' grounds; however, our recommendation is that a re-integration of mild paternalism is appropriate within our current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and ultimately its use in promoting family-centred care, the patient's best interests, and reduced moral distress.
A distinctive feature of the NICU is that communication regarding a patient's treatment is conducted exclusively with substitute decision-makers and not the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU does not have any prior wishes, values, or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values, and beliefs of the parents become the map upon which clinical proposals are made, giving extra weight to the family's decision-making responsibility. This helps explain why family-directed care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should strongly root down in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism.

Keywords: care, ethics, expertise, NICU, paternalism

Procedia PDF Downloads 115
123 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide that have special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for automatically generating parametric masonry components. These components would be developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research on HBIM modeling of masonry structural elements and the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the "HBIM and Masonry" keywords from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software.
This software extracts the main keywords in these studies to retrieve the relevant works. It also calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review covered the studies with the highest frequency of occurrence and the strongest links to the topic, according to VOSviewer's results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.
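The keyword co-occurrence strength that VOSviewer computes can be illustrated with a toy corpus; the keyword sets below are made up for demonstration and are not taken from the reviewed papers.

```python
from collections import Counter
from itertools import combinations

# Made-up keyword sets standing in for the reviewed papers' metadata.
papers = [
    {"HBIM", "masonry", "point cloud"},
    {"HBIM", "masonry", "parametric modeling"},
    {"HBIM", "LOD", "parametric modeling"},
]

def link_strength(papers):
    """Co-occurrence link strength in the VOSviewer sense: for each keyword
    pair, count the papers in which both keywords appear together."""
    return Counter(pair for kws in papers
                   for pair in combinations(sorted(kws), 2))
```

Pairs with the highest counts ("HBIM" with "masonry", for instance) are the strongest links and drive which studies the qualitative review prioritizes.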

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 142
122 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation, which takes into account tissue compartments and their associated volumes, was explored, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images.
T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular-space scheme had T2 values similar to the values calculated from the literature. The static extracellular-space scheme had much lower T2 values, and no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
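A minimal form of the multi-compartmental signal idea described above can be written as a volume-weighted sum of mono-exponential decays. This is a simplified sketch: the study's full signal equation and compartment values are not reproduced here, and the voxel composition below is hypothetical.

```python
import math

def t2w_signal(volumes, t2s, te):
    """T2-weighted signal of a voxel as the volume-weighted sum of
    mono-exponential decays from each tissue compartment (T2 and TE in ms)."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "compartment volumes must sum to 1"
    return sum(v * math.exp(-te / t2) for v, t2 in zip(volumes, t2s))

# Hypothetical voxel: 60% white matter (T2 ~ 80 ms) and 40% edema, with the
# edema T2 swept over the range explored in the study (200 ms - 9200 ms).
signals = [t2w_signal([0.6, 0.4], [80.0, t2_edema], te=100.0)
           for t2_edema in (200, 1000, 9200)]
```

Raising the edema compartment's T2 monotonically brightens the voxel, which is why the choice of edema T2, and of how much extracellular space the edema occupies, drives the simulated hyperintensity.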

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 212
121 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today's businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust. With Zero Trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of many major attacks. With the explosion of cybersecurity threats, such as viruses, worms, rootkits, malware, and denial-of-service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next-generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on standard TCP/IP, routing, and security protocols. It thereby forms the basis for the detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events.
The unsupervised learning algorithm applied to network audit data trails results in unknown intrusion detection. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we have shown that our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
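The hybrid idea - signature matching for known attacks plus rules mined from the audit trail for unknown ones - can be sketched as below. The event encoding and the signature are invented for illustration; the paper's actual detection engine is not reproduced here.

```python
from collections import Counter
from itertools import combinations

# A known-attack signature (illustrative): an event matches if it contains
# every attribute in the signature set.
SIGNATURES = [frozenset({"proto:tcp", "dst_port:23", "syn_rate:high"})]

def signature_hits(events):
    """Signature-based matching for known cyberattacks."""
    return [e for e in events if any(sig <= e for sig in SIGNATURES)]

def frequent_pairs(events, min_support=0.5):
    """One mining pass over the audit trail: attribute pairs whose support
    meets the threshold become candidate association rules that the
    prevention layer (e.g. an integrated firewall) could act on."""
    counts = Counter(p for e in events for p in combinations(sorted(e), 2))
    n = len(events)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}
```

Signature matching covers the known attack classes; the mined frequent patterns describe "normal" co-occurrences, so connections that deviate from them become candidates for anomaly-based detection.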

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 204
120 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence

Authors: Yves Alain Ach, Sandra Rmadi Said

Abstract:

An extensive review of the literature in relation to brands has shown that little research has focused on the nature and determinants of the information disclosed by companies with respect to the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information disclosed by companies in terms of estimating the value of brands and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of information on intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic, and strategic information in relation to their brands instead of providing financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm tend to communicate more information to investors in their annual reports. Our study points out that the sector is not an explanatory variable for voluntary brand disclosure, unlike in previous studies on intangible capital. Our study is distinguished by its focus on an element that has been little studied in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link. Many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between brand information supply and performance is insignificant.
This relationship was already shown to be controversial by previous research, which found that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution focuses on the nature of the inherent characteristics of the companies that disclose the most information about brands. Our results show the absence of a link between size and industry on the one hand and the supply of brand information on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic, and strategic. We therefore question the reasons that may lead companies to voluntarily communicate mainly accounting, economic, and strategic information in relation to their brands from one year to the next, and not to communicate detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. They highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a "report on intangible assets".

Keywords: brand related information, brand value, information disclosure, determinants

Procedia PDF Downloads 56
119 Insect Cell-Based Models: Australian Sheep Blowfly Lucilia cuprina Embryo Primary Cell Line Establishment and Transfection

Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody

Abstract:

Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is, therefore, critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes resulting in knockdowns. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, with the aim of providing an effective insect cell model for testing RNAi efficacy in preliminary assessments and screening. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum.
The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, providing promise for its future use in enhancing animal protection.

Keywords: lucilia cuprina, primary cell line establishment, RNA interference, insect cell transfection

Procedia PDF Downloads 52
118 Theory of Apokatástasis - "In This Way, While Paying Attention to Their Knowledge and Wisdom, Nonetheless, They Did Not Ask God about These Matters, as to Whether or Not They Are True..."

Authors: Pikria Vardosanidze

Abstract:

The term "apokatastasis" is Greek and means "re-establishment", the universal restoration. The term dates back to ancient times: in Stoic thought it denoted the end of a constantly evolving cycle of the universe and the beginning of a new one, and it was established in Christendom by the Eastern Fathers and Origen as the return of the entire created world to a state of goodness. "Universal resurrection" means the resurrection of mankind after the second coming of Jesus Christ. The first thing the Savior will do immediately upon His glorious coming is that "the dead will be raised up first by Christ." God's life-giving action will apply to all the dead, but not with the same result. The action of God also applies to the living, which is accomplished by changing their bodies. The degree of glorification of the resurrected body will be commensurate with the spiritual life. An unclean body will not be glorified, and the soul will not be happy. The resurrected body will be incorruptible, strong, and spiritual, but because of the action of the passions, all this will only bring suffering to such a body. The judgment concerns both the soul and the flesh. At the same time, Holy Scripture nowhere says that at the Last Judgment someone will be able to change their own position. In connection with this dogmatic teaching, one of the greatest fathers of the Church, St. Gregory of Nyssa, had a different view. He points out that the miracle of the resurrection is so glorious and sublime that it exceeds our faith. There are two important circumstances: one is the reality of the resurrection itself, and the other is the manner of its fulfillment. Gregory of Nyssa grounds the first on the authority of Holy Scripture: Jesus Christ preached about the resurrection and also foretold many other events, all of which were later fulfilled.
Gregory of Nyssa clarifies the question of the substantiality of good and evil and of the relationship between them, noting that only good truly exists, for it depends on nothing and exists eternally in God. Evil, by contrast, has no self-subsistent substance and therefore no existence of its own; it appears only from time to time through the free will of man. As the holy Father says, God is the supreme goodness that gives beings the power to exist; all who are without Him are non-existent. This opinion of the Father about the universal apokatastasis derives from the thought of Origen. That teaching was condemned by the resolution of the Fifth Ecumenical Council, where it was unanimously stated by ecclesiastical figures that the doctrine of universal salvation is not valid. For if the resurrection took place in this way, that is, if all beings, including the evil spirit, were restored, then the conflict between good and evil in the world, the future judgment, and the eternal torment, all of which Christian dogma acknowledges, would be annulled.

Keywords: apokatastasis, Orthodox doctrine, Gregory of Nyssa, eschatology

Procedia PDF Downloads 83
117 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden

Authors: Suchhanda Ghosh, A. K. Paul

Abstract:

Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits of the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground water as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removing as well as detoxifying various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed by 18S rRNA gene analysis. The sorption behaviour of the isolate was further characterized by fitting the Langmuir and Freundlich adsorption isotherm models, and the results reflected an energy-efficient sorption process. Fourier-transform infrared spectroscopy studies comparing nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass. P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass at an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass at an initial biomass concentration of 1 g/l of solution.
The optimum temperature and pH for biosorption were recorded as 30°C and pH 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the efficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving, as well as treatment of the biomass with 0.5 M sulfuric acid or acetic acid, reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
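The abstract reports fitting Langmuir and Freundlich isotherms to the sorption data. As a minimal sketch of how such a fit works, the snippet below uses the linearized Langmuir form Ce/qe = Ce/qmax + 1/(qmax·KL); the data points are synthetic and KL is an invented value, with only the 54.73 mg/g capacity taken from the abstract:

```python
import numpy as np

# Linearized Langmuir isotherm: Ce/qe = Ce/qmax + 1/(qmax*KL)
# Ce: equilibrium Ni concentration (mg/l), qe: Ni sorbed (mg/g biomass).
# Synthetic, noiseless data for illustration; qmax matches the reported
# 54.73 mg/g capacity, KL is an assumed affinity constant.
QMAX_TRUE, KL_TRUE = 54.73, 0.05
Ce = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
qe = QMAX_TRUE * KL_TRUE * Ce / (1.0 + KL_TRUE * Ce)

# Least-squares fit of the linear form Ce/qe = slope*Ce + intercept.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1.0 / slope        # mg Ni per g biomass
KL_fit = slope / intercept    # l/mg, Langmuir affinity constant

# Freundlich comparison: ln(qe) = ln(KF) + (1/n)*ln(Ce)
fr_slope, fr_intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
KF_fit, n_fit = np.exp(fr_intercept), 1.0 / fr_slope

print(f"Langmuir: qmax={qmax_fit:.2f} mg/g, KL={KL_fit:.3f} l/mg")
print(f"Freundlich: KF={KF_fit:.2f}, n={n_fit:.2f}")
```

With real data, comparing the two models' goodness of fit (as the study does) indicates whether sorption is closer to monolayer (Langmuir) or heterogeneous-surface (Freundlich) behaviour.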

Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden

Procedia PDF Downloads 171
116 A Systematic Review of Efficacy and Safety of Radiofrequency Ablation in Patients with Spinal Metastases

Authors: Pascale Brasseur, Binu Gurung, Nicholas Halfpenny, James Eaton

Abstract:

Development of minimally invasive treatments in recent years provides a potential alternative to invasive surgical interventions which are of limited value to patients with spinal metastases due to short life expectancy. A systematic review was conducted to explore the efficacy and safety of radiofrequency ablation (RFA), a minimally invasive treatment in patients with spinal metastases. EMBASE, Medline and CENTRAL were searched from database inception to March 2017 for randomised controlled trials (RCTs) and non-randomised studies. Conference proceedings for ASCO and ESMO published in 2015 and 2016 were also searched. Fourteen studies were included: three prospective interventional studies, four prospective case series and seven retrospective case series. No RCTs or studies comparing RFA with another treatment were identified. RFA was followed by cement augmentation in all patients in seven studies and some patients (40-96%) in the remaining seven studies. Efficacy was assessed as pain relief in 13/14 studies with the use of a numerical rating scale (NRS) or a visual analogue scale (VAS) at various time points. Ten of the 13 studies reported a significant decrease in pain outcome, post-RFA compared to baseline. NRS scores improved significantly at 1 week (5.9 to 3.5, p < 0.0001; 8 to 4.3, p < 0.02 and 8 to 3.9, p < 0.0001) and this improvement was maintained at 1 month post-RFA compared to baseline (5.9 to 2.6, p < 0.0001; 8 to 2.9, p < 0.0003; 8 to 2.9, p < 0.0001). Similarly, VAS scores decreased significantly at 1 week (7.5 to 2.7, p=0.00005; 7.51 to 1.73, p < 0.0001; 7.82 to 2.82, p < 0.001) and this pattern was maintained at 1 month post-RFA compared to baseline (7.51 to 2.25, p < 0.0001; 7.82 to 3.3; p < 0.001). A significant pain relief was achieved regardless of whether patients had cement augmentation in two studies assessing the impact of RFA with or without cement augmentation on VAS pain scores. 
In these two studies, a significant decrease in pain scores was reported for patients receiving RFA alone and RFA plus cement at 1 week (4.3 to 1.7, p=0.0004 and 6.6 to 1.7, p=0.003, respectively) and at 15-36 months (7.9 to 4, p=0.008 and 7.6 to 3.5, p=0.005, respectively) after therapy. Few minor complications were reported; these included neural damage, radicular pain, vertebroplasty leakage and lower limb pain/numbness. In conclusion, the efficacy and safety findings for RFA were consistently positive across prospective and retrospective studies, with reductions in pain and few procedural complications. However, the lack of control groups in the identified studies indicates the possibility of the selection bias inherent in single-arm studies. Controlled trials exploring the efficacy and safety of RFA in patients with spinal metastases are warranted to provide robust evidence. The identified studies provide an initial foundation for such future trials.

Keywords: pain relief, radiofrequency ablation, spinal metastases, systematic review

Procedia PDF Downloads 150
115 Analyzing Bridge Response to Wind Loads and Optimizing Design for Wind Resistance and Stability

Authors: Abdul Haq

Abstract:

The goal of this research is to better understand how wind loads affect bridges and to develop strategies for designing bridges that are more stable and resistant to wind. Understanding the effect of wind on bridges is essential to their safety and functionality, especially in areas prone to high wind speeds or violent wind conditions. The study looks at the aerodynamic forces and vibrations caused by wind and how they affect bridge construction. The research method first establishes the underlying principles governing wind flow near bridges. Computational fluid dynamics (CFD) simulations are used to model and forecast the aerodynamic behaviour of bridges under different wind conditions. These models incorporate several factors, such as wind directionality, wind speed, turbulence intensity, and the influence of nearby structures or topography. The results provide significant new insights into the loads and pressures that wind places on different bridge elements, such as decks, pylons, and connections. Following the determination of the wind loads, the structural response of the bridges is assessed. Finite Element Analysis (FEA) is used to model the bridge's component parts and simulate their dynamic behaviour under wind-induced forces. This work helps identify which areas are at risk of excessive stresses, vibrations, or oscillations due to wind excitation. Because a bridge has inherent natural modes and frequencies, the study considers both static and dynamic responses. Various strategies are examined to optimize the design of bridges to withstand wind: altering the bridge's geometry, adding aerodynamic components, adding dampers or tuned mass dampers to lessen vibrations, and boosting structural rigidity. Through an analysis of several design modifications and their effectiveness, the study aims to offer guidelines and recommendations for wind-resistant bridge design.
In addition to the numerical simulations and analyses, there are experimental studies. In order to assess the computational models and validate the practicality of proposed design strategies, scaled bridge models are tested in a wind tunnel. These investigations help to improve numerical models and prediction precision by providing valuable information on wind-induced forces, pressures, and flow patterns. Using a combination of numerical models, actual testing, and long-term performance evaluation, the project aims to offer practical insights and recommendations for building wind-resistant bridges that are secure, long-lasting, and comfortable for users.
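The abstract lists tuned mass dampers among the mitigation options. As a hedged illustration of how one might be sized for a single bridge mode (this is the classical Den Hartog tuning rule, not necessarily the study's method; all structural numbers below are invented):

```python
import math

# Idealize one bridge deck mode as a single-degree-of-freedom oscillator.
# Modal mass and stiffness are placeholder values for illustration only.
m_s = 5.0e5   # modal mass of the deck mode (kg), assumed
k_s = 2.0e7   # modal stiffness (N/m), assumed
f_n = math.sqrt(k_s / m_s) / (2.0 * math.pi)   # natural frequency (Hz)

# Den Hartog's classical optimum for a tuned mass damper (TMD):
#   frequency ratio  f_opt = 1 / (1 + mu)
#   damping ratio    z_opt = sqrt(3*mu / (8*(1 + mu)**3))
mu = 0.02     # damper-to-structure mass ratio, assumed 2%
f_opt = 1.0 / (1.0 + mu)
z_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))

# Resulting damper properties: mass, tuned frequency, spring and dashpot.
m_d = mu * m_s
f_d = f_opt * f_n
k_d = m_d * (2.0 * math.pi * f_d) ** 2
c_d = 2.0 * z_opt * m_d * (2.0 * math.pi * f_d)

print(f"deck mode: {f_n:.3f} Hz; TMD tuned to {f_d:.3f} Hz, "
      f"zeta={z_opt:.3f}, k_d={k_d:.0f} N/m, c_d={c_d:.0f} N*s/m")
```

The damper is deliberately tuned slightly below the structural frequency; in practice the modal mass and stiffness would come from the FEA model described above, and the tuning would be verified in the wind tunnel.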

Keywords: wind effects, aerodynamic forces, computational fluid dynamics, finite element analysis

Procedia PDF Downloads 42
114 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality

Authors: Christian T. Lystbaek

Abstract:

Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in the management of innovation in public services. One of the major theoretical challenges arising from this situation is to understand the public values that justify public services innovation management. However, there is no single and stable definition of public value in the literature. The research question guiding this paper is: what is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited to generating results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of user-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps.
In the first step, identification of values, values were identified in the discussions. Through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we then ascertained the degree to which the meaning attributed to the value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences regarding the meaning between participants. Only values with high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity for actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic value and behavioural values, and a list of more specific values.

Keywords: public services innovation management, public value, co-creation, action research

Procedia PDF Downloads 257
113 Combined Effect of Vesicular System and Iontophoresis on Skin Permeation Enhancement of an Analgesic Drug

Authors: Jigar N. Shah, Hiral J. Shah, Praful D. Bharadia

Abstract:

A major challenge faced by formulation scientists in transdermal drug delivery is to overcome the inherent barriers to skin permeation. The stratum corneum layer of the skin acts as the rate-limiting step in transdermal transport and reduces drug permeation through the skin. Many approaches have been used to enhance the penetration of drugs through this layer. The purpose of this study is to investigate the development and evaluation of a combined approach of drug carriers and iontophoresis as a vehicle to improve the skin permeation of an analgesic drug. Iontophoresis is a non-invasive technique for transporting charged molecules into and through tissues by means of a mild electric field. It has been shown to deliver a variety of drugs effectively across the skin to the underlying tissue. In addition to enhanced continuous transport, iontophoresis allows dose titration by adjusting the electric field, which makes personalized dosing feasible. A drug carrier can modify the physicochemical properties of the encapsulated molecule and offers a means to facilitate the percutaneous delivery of difficult-to-uptake substances. Recently, there have been reports of using liposomes, microemulsions and polymeric nanoparticles as vehicles for iontophoretic drug delivery. Niosomes, the nonionic surfactant-based vesicles that are essentially similar in properties to liposomes, have been proposed as an alternative to liposomes. Niosomes are more stable and free from other shortcomings of liposomes. Recently, the transdermal delivery of certain drugs using niosomes has been envisaged, and niosomes have proved to be superior transdermal nanocarriers. Proniosomes overcome some of the physical stability problems of niosomes. The proniosomal structure is a liquid-crystalline compact niosome hybrid which can be converted into niosomes upon hydration. The combined use of drug carriers and iontophoresis could offer many additional benefits.
The system was evaluated for Encapsulation Efficiency, vesicle size, zeta potential, Transmission Electron Microscopy (TEM), DSC, in-vitro release, ex-vivo permeation across skin and rate of hydration. The use of proniosomal gel as a vehicle for the transdermal iontophoretic delivery was evaluated in-vitro. The characteristics of the applied electric current, such as density, type, frequency, and on/off interval ratio were observed. The study confirms the synergistic effect of proniosomes and iontophoresis in improving the transdermal permeation profile of selected analgesic drug. It is concluded that proniosomal gel can be used as a vehicle for transdermal iontophoretic drug delivery under suitable electric conditions.
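The abstract notes that iontophoresis permits dose titration by adjusting the applied current. A common first-order way to reason about this (a general electrochemical estimate, not taken from the study itself) is Faraday's law scaled by the drug's transport number; every parameter value below is an illustrative assumption:

```python
# Faraday's-law estimate of iontophoretic drug delivery:
#   dose (mol) ~= t_d * I * duration / (z * F)
# where t_d is the fraction of current carried by the drug ion (transport
# number), I the applied current, z the ionic charge, F Faraday's constant.
# All parameter values here are illustrative assumptions, not study data.

F = 96485.0  # C/mol, Faraday's constant

def iontophoretic_dose_mg(current_mA, minutes, transport_number,
                          charge, molar_mass_g_mol):
    """First-order estimate of drug delivered during iontophoresis (mg)."""
    charge_coulombs = (current_mA / 1000.0) * (minutes * 60.0)
    moles = transport_number * charge_coulombs / (charge * F)
    return moles * molar_mass_g_mol * 1000.0

# Hypothetical analgesic: monovalent cation, 250 g/mol, 0.5 mA applied for
# 30 min, with 5% of the current carried by the drug ion.
dose = iontophoretic_dose_mg(0.5, 30, 0.05, 1, 250.0)
print(f"estimated delivered dose: {dose:.3f} mg")
```

Because the estimate is linear in both current and duration, doubling either doubles the delivered dose, which is the mechanism behind the dose-titration claim in the abstract.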

Keywords: iontophoresis, niosomes, permeation enhancement, transdermal delivery

Procedia PDF Downloads 356
112 The Invisible Planner: Unearthing the Informal Dynamics Shaping Mixed-Use and Compact Development in Ghanaian Cities

Authors: Muwaffaq Usman Adam, Isaac Quaye, Jim Anbazu, Yetimoni Kpeebi, Michael Osei-Assibey

Abstract:

Urban informality, characterized by spontaneous and self-organized practices, plays a significant but often overlooked role in shaping the development of cities, particularly in the context of mixed-use and compact urban environments. This paper aims to explore the invisible planning processes inherent in informal practices and their influence on the urban form of Ghanaian cities. By examining the dynamic interplay between informality and formal planning, the study will discuss the ways in which informal actors shape and plan for mixed-use and compact development. Drawing on the synthesis of relevant secondary data, the research will begin by defining urban informality and identifying the factors that contribute to its prevalence in Ghanaian cities. It will delve into the concept of mixed-use and compact development, highlighting its benefits and importance in urban areas. Drawing on case studies, the paper will uncover the hidden planning processes that occur within informal settlements, showcasing their impact on the physical layout, land use, and spatial arrangements of Ghanaian cities. The study will also uncover the challenges and opportunities associated with informal planning. It examines the constraints faced by informal planners (actors) while also exploring the potential benefits and opportunities that emerge when informality is integrated into formal planning frameworks. By understanding the invisible planner, the research will offer valuable insights into how informal practices can contribute to sustainable and inclusive urban development. Based on the findings, the paper will present policy implications and recommendations. It highlights the need to bridge the policy gaps and calls for the recognition of informal planning practices within formal systems. Strategies are proposed to integrate informality into planning frameworks, fostering collaboration between formal and informal actors to achieve compact and mixed-use development in Ghanaian cities. 
This research underscores the importance of recognizing and leveraging the invisible planner in Ghanaian cities. By embracing informal planning practices, cities can achieve more sustainable, inclusive, and vibrant urban environments that meet the diverse needs of their residents. This research will also contribute to a deeper understanding of the complex dynamics between informality and planning, advocating for inclusive and collaborative approaches that harness the strengths of both formal and informal actors. The findings will likewise contribute to advancing our understanding of informality's role as an invisible yet influential planner, shedding light on its spatial planning implications on Ghanaian cities.

Keywords: informality, mixed uses, compact development, land use, Ghana

Procedia PDF Downloads 82
111 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the relationship between the risk associated with an investment and its expected value is directly proportional, in the sense that the market grants a greater return to those willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between investors' actual behaviour and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when the decision context involves events that have not yet happened. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability does not consider the intertemporality that characterizes financial activities, nor the limited cognitive capacity of the decision-maker. Cognitive psychology has shown that, when faced with very complex choices, the mind acts on a compromise between quality and effort. To evaluate an event that has not yet happened, it is necessary to imagine it happening. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty.
In fact, since the receipt of the outcome in choices under risk is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision-makers can dwell on an objective analysis of the possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in the valuation of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be properly taken into account. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
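The authors' extension is not specified in the abstract. As a hedged illustration only, the sketch below combines the standard prospect-theory value and probability-weighting functions (with the well-known Tversky-Kahneman parameter estimates) and a simple hyperbolic discount factor to show how a delayed prospect could be evaluated; the discounting step and its parameter k are assumptions for illustration, not the authors' model:

```python
# Prospect-theory components with Tversky & Kahneman (1992) estimates,
# plus an assumed hyperbolic impatience factor for the temporal dimension.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

def weight(p):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def hyperbolic_discount(t_years, k=0.2):
    """Illustrative hyperbolic impatience factor (k is an assumed parameter)."""
    return 1.0 / (1.0 + k * t_years)

def delayed_prospect_value(outcomes, t_years):
    """Weighted value of a prospect [(outcome, probability), ...] received at t."""
    v = sum(weight(p) * value(x) for x, p in outcomes)
    return hyperbolic_discount(t_years) * v

# A symmetric 50/50 gamble: loss aversion makes its subjective value negative,
# and delay shrinks its magnitude through the impatience factor.
prospect = [(100.0, 0.5), (-100.0, 0.5)]
now = delayed_prospect_value(prospect, 0)
later = delayed_prospect_value(prospect, 5)
```

The negative value of the symmetric gamble is the risk-aversion effect discussed in the abstract; how the discount factor should interact with anticipatory emotions is precisely the kind of question the proposed temporal extension addresses.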

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 85
110 Unequal Traveling: How School District System and School District Housing Characteristics Shape the Duration of Families Commuting

Authors: Geyang Xia

Abstract:

In many countries, governments have responded to the growing demand for educational resources through school district systems, and there is substantial evidence that school district systems have been effective in promoting inter-district and inter-school equity in educational resources. However, the scarcity of quality educational resources has brought about varying levels of education among different school districts, making it a common choice for many parents to buy a house in the district where a quality school is located; they are even willing to bear huge commuting costs for this purpose. This is evidenced by the fact that parents in school districts with quality educational resources have longer average commute times and distances than parents in average school districts. This "unequal traveling" under the influence of the school district system is most common in districts at the primary level of education. It further reinforces the differential hierarchy of educational resources and raises issues of inequitable public education services, education-led residential segregation, and gentrification of school district housing. Against this background, this paper takes Nanjing, a famous educational city in China, as a case study and selects the school districts of the top 10 public elementary schools. The study first builds a spatio-temporal behavioural trajectory dataset of households in these high-quality school districts using spatial vector data, decrypted cell phone signaling data, and census data. Then, by constructing "house-school-work" (HSW) commuting patterns for the population of the districts where the high-quality educational resources are located, and by classifying these HSW patterns, school districts with long commute durations were identified.
Ultimately, the mechanisms and patterns inherent in this unequal commuting are analyzed in terms of six aspects, including the centrality of school district location, functional diversity, and accessibility. The results reveal that the "unequal commuting" of Nanjing's high-quality school districts under the influence of the school district system occurs mainly in the peripheral areas of the city, and that the schools matched with these high-quality districts are mostly branches of prestigious schools located in the built-up areas of the city's core. Meanwhile, the centrality of school district location and functional diversity are the most important factors influencing unequal commuting in high-quality school districts. Based on these results, the paper proposes strategies to optimize the spatial layout of high-quality educational resources and corresponding transportation policy measures.
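The HSW classification step described above could, under simplifying assumptions, be sketched as follows: given home, school, and work coordinates for each household, compute the daily home-school-work tour length and flag districts whose median tour exceeds a threshold. The field names, the straight-line distance proxy, and the 15 km threshold are all hypothetical choices, not the paper's method:

```python
import math
from statistics import median

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def hsw_tour_km(household):
    """Length of the daily home -> school -> work tour (straight-line proxy)."""
    return (haversine_km(household["home"], household["school"])
            + haversine_km(household["school"], household["work"]))

def long_commute_districts(households, threshold_km=15.0):
    """Flag districts whose median HSW tour exceeds an assumed threshold."""
    by_district = {}
    for h in households:
        by_district.setdefault(h["district"], []).append(hsw_tour_km(h))
    return {d for d, tours in by_district.items() if median(tours) > threshold_km}

# Two toy households in hypothetical districts (coordinates near Nanjing):
# district A lives close to its school and workplace, district B does not.
sample = [
    {"district": "A", "home": (32.05, 118.78), "school": (32.06, 118.79),
     "work": (32.04, 118.80)},
    {"district": "B", "home": (31.90, 118.60), "school": (32.06, 118.79),
     "work": (32.10, 118.90)},
]
print(long_commute_districts(sample))
```

In the actual study, durations would come from signaling-data trajectories rather than straight-line distances, but the grouping-and-thresholding logic is the same shape.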

Keywords: school-district system, high quality school district, commuting pattern, unequal traveling

Procedia PDF Downloads 68
109 The Double Standard: Ethical Issues and Gender Discrimination in Traditional Western Ethics

Authors: Merina Islam

Abstract:

Feminists have identified the traditional Western ethical theories as basically male-centered. They are committed to developing a critique showing how traditional Western ethics, together with traditional philosophy, has remained gender-biased throughout, irrespective of its claim to gender neutrality. This exclusion of women's experiences from moral discourse is justified on the ground that women cannot be moral agents, since they are not rational. By entailment, we are thus led to the position that the virtues of traditional ethics, so viewed, can only be rational and hence male. The ears of traditional Western ethicists have been attuned to male rather than female ethical voices. From Plato, Aristotle, Augustine, Aquinas, Rousseau, Kant and Hegel to philosophers like Freud, Schopenhauer and Nietzsche, the dualisms of reason and passion, mind and body, gained prominence. These thinkers either intentionally excluded women or used certain male moral experiences as the standard for all moral experience, resulting once again in the exclusion of women's experiences. Men are identified with rationality and hence contrasted with women, whose sphere is believed to be that of emotion and feeling. This exclusion of women's experience from moral discourse has given birth to a tradition that emphasizes reason over emotion, the universal over the particular, and justice over caring. That patriarchy's use of gender distinctions in the realm of ethics has resulted in gender discrimination is an undeniable fact. Hence women's moral agency is said to have often been denied, not simply by excluding women from moral debate or through sheer ignorance of their contributions, but through philosophical claims to the effect that women lack moral reason.
Traditional or mainstream ethics cannot justify its claim to the universality, objectivity and gender neutrality of the standards from which the legitimacy of its various moral maxims or principles was drawn. Through the association of masculine values with reason (and the feminine with the irrational), the standard prototype of moral virtue was created. The feminist critique of traditional mainstream ethics rests on the charge that, because of its inherent gender bias, ethics has so far been justifying discrimination in the name of gender distinctions. In this paper, an attempt is made to examine the gender-biasedness of traditional ethics. We try to show to what extent traditional ethics is male-centered and consequently fails to justify its claims to universality and gender neutrality.

Keywords: ethics, gender, male-centered, traditional

Procedia PDF Downloads 401
108 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to its recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is therefore generally applied before enzymatic treatment, which removes the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy that combines two or more pre-treatments to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored.
Results showed that ultrasound treatment (31.06 mg/L total reducing sugars) was the best pre-treatment method, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results of the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment proved better in terms of total reducing sugar yield and produced a lower content of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatments rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 344
107 The Territorial Expression of Religious Identity: A Case Study of Catholic Communities

Authors: Margarida Franca

Abstract:

The influence of the ‘cultural turn’ movement and the consequent deconstruction of scientific thought allowed geography and other social sciences to open or deepen their studies based on the analysis of multiple identities, on singularities, on what is particular or what marks the difference between individuals. In the context of postmodernity, the geography of religion has gained a favorable scientific, thematic and methodological focus for the qualitative and subjective interpretation of various religious identities, sacred places, territories of belonging, religious communities, among others. In the context of ‘late modernity’ or ‘net modernity’, sacred places and the definition of a network of sacred territories allow believers to attain ‘ontological security’. Integration into a religious group or a local community, particularly a religious community, allows human beings to achieve a sense of belonging, familiarity or solidarity and to overcome, in part, some of the risks or fears that society has discovered. The importance of sacred places comes not only from their inherent characteristics (e.g., transcendence, mysticism and myth, respect, intimacy and abnegation), but also from the possibility of adding and integrating members of the same community, creating bonds of belonging, reference, and individual and collective memory. In addition, the formation of different networks of sacred places, with multiple scales and dimensions, allows the human being to identify and structure the times and spaces of daily life. Thus, each individual, owing to his unique identity and his life and religious paths, creates his own network of sacred places. The territorial expression of religious identity thus makes it possible to draw a variable and unique geography of sacred places.
Through the case study of the practicing Catholic population in the diocese of Coimbra (Portugal), the aim is to study the territorial expression of the religious identity of the different local communities of this city. Through a survey of six parishes in the city, we sought to identify which factors, qualitative or not, define the different territorial expressions on a local, national and international scale, with emphasis on the socioeconomic profile of the population, the religious path of the believers, the religious group they belong to and the external interferences, religious or not. The analysis of these factors allows us to categorize the communities of the city of Coimbra and, for each typology or category, to identify the specific elements that unite the believers to the sacred places, the networks and religious territories that structure the religious practice and experience and also the non-representational landscape that unifies and creates memory. We conclude that an apparently homogeneous group, the Catholic community, incorporates multitemporalities and multiterritorialities that are necessary to understand the history and geography of a whole country and of the Catholic communities in particular.

Keywords: geography of religion, sacred places, territoriality, Catholic Church

Procedia PDF Downloads 300
106 Oncolytic H-1 Parvovirus Entry in Cancer Cells through Clathrin-Mediated Endocytosis

Authors: T. Ferreira, A. Kulkarni, C. Bretscher, K. Richter, M. Ehrlich, A. Marchini

Abstract:

H-1 protoparvovirus (H-1PV) is a virus with inherent oncolytic and oncosuppressive activities that remains non-pathogenic in humans. H-1PV was the first oncolytic parvovirus to undergo clinical testing. Results from trials in patients with glioblastoma or pancreatic carcinoma showed an excellent safety profile and first signs of efficacy. H-1PV infection depends heavily on cellular factors, from cell attachment and entry to viral replication and egress. Hence, we believe that characterisation of the parvovirus life cycle would ultimately help further improve H-1PV clinical outcomes. In the present study, we explored the entry pathway of H-1PV in cervical HeLa and glioma NCH125 cancer cell lines. Electron and confocal microscopy showed viral particles associated with clathrin-coated pits and vesicles, providing the first evidence that H-1PV cell entry occurs through clathrin-mediated endocytosis. Accordingly, we observed that blocking clathrin-mediated endocytosis with hypertonic sucrose, chlorpromazine, or pitstop 2 markedly decreased H-1PV transduction. Furthermore, siRNA-mediated knockdown of AP2M1, which plays a crucial role in clathrin-mediated endocytosis, confirmed the reliance of H-1PV on this route to enter HeLa and NCH125 cancer cells. By contrast, we found no evidence of viral entry through caveolae-mediated endocytosis. Indeed, pre-treatment of cells with nystatin or methyl-β-cyclodextrin, both inhibitors of caveolae-mediated endocytosis, did not affect viral transduction levels. Unexpectedly, siRNA-mediated knockdown of caveolin-1, the main driver of caveolae-mediated endocytosis, increased H-1PV transduction, suggesting that caveolin-1 is a negative modulator of H-1PV infection. We also show that H-1PV entry is dependent on dynamin, a protein responsible for mediating the scission of the vesicle neck and promoting further internalisation.
Furthermore, the fact that dynamin inhibition almost completely abolished H-1PV infection makes it unlikely that H-1PV uses macropinocytosis as an alternative pathway to enter cells. After internalisation, H-1PV passes through early to late endosomes, as observed by confocal microscopy. Inside these endocytic compartments, the acidic environment proved to be crucial for a productive infection: inhibition of endosomal acidification dramatically reduced H-1PV transduction. Besides, a fraction of H-1PV particles was observed inside LAMP1-positive lysosomes, most likely following a non-infectious route. To the authors' best knowledge, this is the first study to characterise the cell entry pathways of H-1PV. Along these lines, this work will further contribute to understanding H-1PV's oncolytic properties as well as to improving its clinical potential in cancer virotherapy.

Keywords: clathrin-mediated endocytosis, H-1 parvovirus, oncolytic virus, virus entry

Procedia PDF Downloads 127
105 Convectory Policing-Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. The results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services. Instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem oriented framework has spawned endless variants and approaches, most of which embrace a problem solving rather than a reactive approach to policing. These include the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in one way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions behind the proliferation of the PO approach over the last three decades, and it commences by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment.
In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost. Those same staff cannot be allocated to alternative duties while engaged in a problem solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 459
104 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems through lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, lip reading systems are one of the supportive technologies for hearing impaired or elderly people, and they are an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing impaired persons in Myanmar to learn how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: one syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and two syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ).
The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used to determine the class parameters and class number in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech of the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing impaired persons as a language learning application. It can also be useful for normal hearing persons in noisy environments or conditions where they need to find out what was said by other people without hearing their voices.
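The 2D-DCT feature-extraction stage described above can be sketched in a few lines. The snippet below is a minimal numpy illustration of that step only: the synthetic 16x16 `roi`, the orthonormal DCT matrix construction, and the 6x6 low-frequency cut-off are illustrative assumptions rather than the authors' implementation, and the resulting vectors would then be passed on to LDA and an SVM as the abstract describes.

```python
import numpy as np

def dct2(block):
    """Orthonormal type-II 2D DCT of a square block, via the 1D DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)  # DC row gets the sqrt(1/N) scaling
    return c @ block @ c.T

def lip_features(roi, size=6):
    """Keep the low-frequency top-left coefficients as a compact descriptor."""
    coeffs = dct2(roi.astype(float))
    return coeffs[:size, :size].ravel()

# Toy 16x16 "lip region": a bright horizontal band on a dark background.
roi = np.zeros((16, 16))
roi[6:10, 2:14] = 1.0
feats = lip_features(roi)
print(feats.shape)  # 36-dimensional vector fed to LDA/SVM downstream
```

Truncating to the top-left coefficients is the standard way 2D-DCT compacts most of the image energy into a short feature vector.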

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 264
103 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches comes down to the fact that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Due to this, some human traders have gone back to the tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data that is involved in making market predictions. This paper proposes a method which uses neuro-evolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and in a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
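The evolutionary loop described above can be illustrated with a toy select-and-mutate cycle. Everything below is a hedged sketch, not the authors' system: the signal matrix (standing in for technical and sentiment inputs), the synthetic return model, the sign-based profit fitness, and the elitist selection scheme are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's inputs (assumptions): each row is
# [momentum signal, moving-average crossover, sentiment score].
signals = rng.normal(size=(200, 3))
# Next-period returns correlated with a hidden mix of those signals.
true_w = np.array([0.6, -0.2, 0.4])
returns = signals @ true_w + rng.normal(scale=0.5, size=200)

def fitness(w):
    """Net-profit proxy: go long/short on the sign of the weighted signal."""
    positions = np.sign(signals @ w)
    return float(np.sum(positions * returns))

def evolve(pop_size=30, generations=40, sigma=0.3):
    pop = rng.normal(size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-pop_size // 3:]]  # keep the best third
        parents = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([elite, children])  # elitism: best genomes survive
    scores = np.array([fitness(w) for w in pop])
    return pop[np.argmax(scores)], float(scores.max())

best_w, best_profit = evolve()
```

With elitism, the best fitness in the population is non-decreasing across generations, which is why even this minimal loop reliably climbs toward profitable weightings on the synthetic data.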

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 94
102 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferable to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is conducted for multiple linear regression models with random errors following a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared with the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
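The fat-tailed, asymmetric behavior the abstract builds on can be made concrete by simulating a skew t variate via an Azzalini-type construction: a skew-normal numerator divided by a chi-square scale factor. This is a generic textbook construction, not the authors' parameterization or their modified maximum likelihood estimator; the shape parameters `nu` and `delta` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def skew_t_sample(n, nu=5.0, delta=0.8):
    """Azzalini-type skew t draws: delta controls asymmetry, nu the tails."""
    z0 = rng.normal(size=n)
    z1 = rng.normal(size=n)
    skew_normal = delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1
    w = rng.chisquare(nu, size=n) / nu  # t-style scale mixing
    return skew_normal / np.sqrt(w)

x = skew_t_sample(200_000)
m, s = x.mean(), x.std()
skewness = np.mean(((x - m) / s) ** 3)  # > 0 for delta > 0
kurtosis = np.mean(((x - m) / s) ** 4)  # > 3, i.e. fatter tails than normal
```

Fitting regressions to data like `x` is exactly the setting where least squares loses efficiency and closed-form modified maximum likelihood estimates are attractive.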

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 383
101 Bad Juju: The Translation of the African Zombi to Nigerian and Western Screens

Authors: Randall Gray Underwood

Abstract:

Within the past few decades, zombie cinema has evolved from a niche outgrowth of the horror genre into one of the most widely-discussed and thoroughly-analyzed subgenres of film. Rising to international popularity during the 1970s and 1980s following the release of George Romero’s landmark classic, Night of the Living Dead (1968), and its much-imitated sequel, Dawn of the Dead (1978), the zombie genre returned to global screens in full force at the turn of the century following earth-shattering events such as the 9/11 terrorist attacks, America’s subsequent war in the Middle East, environmental pandemics, and the emergence of a divided and disconnected global populace in the age of social media. Indeed, the presence of the zombie in all manner of art and entertainment—movies, literature, television, video games, comic books, and more—has become nothing short of pervasive, engendering a plethora of scholarly writings, books, opinion pieces, and video essays from all manner of academics, cultural commentators, critics, and casual fans, with each espousing their own theories regarding the zombie’s allegorical and symbolic value within global fiction. Consequently, the walking dead of recent years have been variously positioned as fictive manifestations of human fears of societal collapse, environmental contagion, sexually-transmitted disease, primal regression, dwindling population rates, global terrorism, and the foreign “Other”. Less commonly analyzed within film scholarship, however, is the connection between the zombie’s folkloric roots and native African/Haitian spiritual practice; specifically, how this connection impacts the zombie’s presentation in African films by native storytellers versus in similar narratives told from a western perspective. 
This work will examine the unlikely connections and contrasts inherent in the portrayal of the traditional African/Haitian zombie (or zombi, in Haitian French) in the Nollywood film Witchdoctor of the Livingdead (1985, Charles Abi Enonchong) versus its depiction in the early Hollywood films White Zombie (1932, Victor Halperin) and I Walked with a Zombie (1943, Jacques Tourneur), through analysis of each cinema's use of the zombie as a visual metaphor for subjugation/slavery, as well as differences in their representation of the spiritual folklore from which the figure of the zombie originates. Select films from the post-Night of the Living Dead zombie cinema landscape will also warrant brief discussion in relation to Witchdoctor of the Livingdead.

Keywords: Nollywood, Zombie cinema, Horror cinema, Classical Hollywood

Procedia PDF Downloads 40
100 Cross Carpeting in Nigerian Politics: Some Legal and Moral Issues Generated

Authors: Agbana Olaseinde Julius, Opadere Olaolu Stephen

Abstract:

The concept of cross carpeting is as old as politics itself. Basically, it entails an individual leaving a political party or group to join another. The reasons for which cross carpeting is embarked upon are diverse: ideological differences; ethnic and/or religious differences; access to actual or perceived better political opportunities; liberty of association; rancor; etc. The current democratic dispensation in Nigeria has experienced a renewed and rather alarming rate of cross carpeting, for reasons including those enumerated above and others. The right to cross carpet is inherent in a democratic setting and in political stakeholdership; it also flows from the constitutional right of ‘freedom of association’. However, the current species of cross carpeting in Nigeria requires scrutiny, in view of some potential legal and moral challenges it poses for both the present and the future. Cross carpeting is both legal and constitutional, but its current spate raises the question of expediency, particularly in a nascent democracy. It has a propensity to negatively impact political stability in a polity with fragile nerves. Importantly too, cross carpeting is a potential damage to the psyche of posterity with regard to a warped disposition towards promises, honour and integrity. The perceived peculiar dimension of cross carpeting in Nigeria raises questions about the quality of leadership presently obtainable in the country, vis-à-vis greed, self-centeredness, disregard for the concerns and interests of avowed followers/fans, entrenchment of distrust, etc. Thus, the study made use of primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria 1999 (as amended); judicial decisions; and the Electoral Act, 2010 (as amended). The secondary sources comprised information from books, journals, newspapers, magazines and Internet documents.
Data obtained from these sources were subjected to content analysis. The findings of this study show that though the act of cross carpeting may not breach any statute or law, it does, in most cases, offend against expediency. Its morality is far from justifiable, and it should be condemned in the interest of the present and of posterity. There is a great and urgent need to embark on the re-entrenchment of a culture of political ideology in the Nigerian polity, as obtains in developed democracies. In conclusion, the need to exercise the right of cross carpeting with caution cannot be overemphasized. Membership of a political group/party should be backed by commitment to well-defined ideologies and values. Commitment to them should be regarded as akin to that found in the family, which is not easily or flippantly jettisoned.

Keywords: cross-carpeting, Nigeria, legal, moral issues, politics

Procedia PDF Downloads 425
99 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines the method of how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaic (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions to the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, i.e. the negative GWP values of the Solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of the ANFIS models highly depends on the range of cases at which it was calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE anymore. 
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
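A zero-order Sugeno-type inference of the kind compared in this study can be sketched with two inputs and four rules. The membership ranges, the rule base, and the LCOE-like constant consequents below are invented for illustration only and do not reproduce the study's calibrated FIS, DAFIS, or ANFIS models.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Illustrative membership functions for two inputs (ranges are assumptions).
IRRAD = {"low": (500, 1000, 1500), "high": (1000, 1500, 2000)}  # irradiation
EFF = {"low": (0.10, 0.14, 0.18), "high": (0.14, 0.18, 0.22)}   # module eff.

# Zero-order Sugeno rule base: constant LCOE-like consequents (illustrative).
RULES = [
    (("low", "low"), 0.20),    # low irradiation, low efficiency -> high cost
    (("low", "high"), 0.15),
    (("high", "low"), 0.12),
    (("high", "high"), 0.08),  # best case -> lowest cost
]

def sugeno(irrad, eff):
    """Fire each rule with min of memberships; output the weighted average."""
    num = den = 0.0
    for (li, le), consequent in RULES:
        w = min(tri(irrad, *IRRAD[li]), tri(eff, *EFF[le]))
        num += w * consequent
        den += w
    return num / den if den > 0 else float("nan")

print(round(sugeno(1200, 0.16), 4))
```

A Mamdani-type system would instead defuzzify output fuzzy sets (e.g., by centroid), while ANFIS would tune the membership parameters and consequents against calibration data, which is where the error differences reported above come from.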

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 134